Doc: Remove plugin docs from logstash core (#17405)

Co-authored-by: Colleen McGinnis <colleen.mcginnis@elastic.co>

parent add7b3f4d3
commit e2c6254c81

239 changed files with 184 additions and 53560 deletions

@@ -5,7 +5,7 @@ cross_links:
   - ecs
   - elasticsearch
   - integration-docs
-  - logstash-docs
+  - logstash-docs-md
   - search-ui
 toc:
   - toc: reference

@@ -402,7 +402,7 @@ With these both defined, the install process will search for the required jar fi

 ## Document your plugin [_document_your_plugin_2]

-Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://reference/integration-plugins.md).
+Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs-md://vpr/integration-plugins.md).

 See [Document your plugin](/extend/plugin-doc.md) for tips and guidelines.

@@ -403,7 +403,7 @@ With these both defined, the install process will search for the required jar fi

 ## Document your plugin [_document_your_plugin_3]

-Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://reference/integration-plugins.md).
+Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs-md://vpr/integration-plugins.md).

 See [Document your plugin](/extend/plugin-doc.md) for tips and guidelines.

@@ -443,7 +443,7 @@ With these both defined, the install process will search for the required jar fi

 ## Document your plugin [_document_your_plugin]

-Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://reference/integration-plugins.md).
+Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs-md://vpr/integration-plugins.md).

 See [Document your plugin](/extend/plugin-doc.md) for tips and guidelines.

@@ -172,7 +172,7 @@ Finally, we come to the `filter` method that is invoked by the Logstash executio

 In the example above, the value of the `source` field is retrieved from each event and reversed if it is a string value. Because each event is mutated in place, the incoming `events` collection can be returned.

-The `matchListener` is the mechanism by which filters indicate which events "match". The common actions for filters such as `add_field` and `add_tag` are applied only to events that are designated as "matching". Some filters such as the [grok filter](/reference/plugins-filters-grok.md) have a clear definition for what constitutes a matching event and will notify the listener only for matching events. Other filters such as the [UUID filter](/reference/plugins-filters-uuid.md) have no specific match criteria and should notify the listener for every event filtered. In this example, the filter notifies the match listener for any event that had a `String` value in its `source` field and was therefore able to be reversed.
+The `matchListener` is the mechanism by which filters indicate which events "match". The common actions for filters such as `add_field` and `add_tag` are applied only to events that are designated as "matching". Some filters such as the [grok filter](logstash-docs-md://lsr/plugins-filters-grok.md) have a clear definition for what constitutes a matching event and will notify the listener only for matching events. Other filters such as the [UUID filter](logstash-docs-md://lsr/plugins-filters-uuid.md) have no specific match criteria and should notify the listener for every event filtered. In this example, the filter notifies the match listener for any event that had a `String` value in its `source` field and was therefore able to be reversed.


 ### getId method [_getid_method_3]

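An aside for this hunk: the "matching" notification described above is what drives common options like `add_tag`. A minimal pipeline sketch (pattern and tag are illustrative, not part of this commit):

```
filter {
  grok {
    match => { "message" => "%{IPORHOST:client}" }  # illustrative pattern
    add_tag => ["grok_matched"]                     # applied only to events flagged as matching
  }
}
```
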
@@ -360,7 +360,7 @@ With these both defined, the install process will search for the required jar fi

 ## Document your plugin [_document_your_plugin_4]

-Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://reference/integration-plugins.md).
+Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs-md://vpr/integration-plugins.md).

 See [Document your plugin](/extend/plugin-doc.md) for tips and guidelines.

@@ -7,7 +7,7 @@ mapped_pages:

 Documentation is a required component of your plugin. Quality documentation with good examples contributes to the adoption of your plugin.

-The documentation that you write for your plugin will be generated and published in the [Logstash Reference](/reference/index.md) and the [Logstash Versioned Plugin Reference](logstash-docs://reference/integration-plugins.md).
+The documentation that you write for your plugin will be generated and published in the [Logstash Reference](/reference/index.md) and the [Logstash Versioned Plugin Reference](logstash-docs-md://vpr/integration-plugins.md).

 ::::{admonition} Plugin listing in {{ls}} Reference
 :class: note

@@ -26,7 +26,7 @@ Documentation belongs in a single file called *docs/index.asciidoc*. It belongs

 ## Heading IDs [heading-ids]

-Format heading anchors with variables that can support generated IDs. This approach creates unique IDs when the [Logstash Versioned Plugin Reference](logstash-docs://reference/integration-plugins.md) is built. Unique heading IDs are required to avoid duplication over multiple versions of a plugin.
+Format heading anchors with variables that can support generated IDs. This approach creates unique IDs when the [Logstash Versioned Plugin Reference](logstash-docs-md://vpr/integration-plugins.md) is built. Unique heading IDs are required to avoid duplication over multiple versions of a plugin.

 **Example**

@@ -39,7 +39,7 @@ Instead, use variables to define it:
 ==== Configuration models
 ```

-If you hardcode an ID, the [Logstash Versioned Plugin Reference](logstash-docs://reference/integration-plugins.md) builds correctly the first time. The second time the doc build runs, the ID is flagged as a duplicate, and the build fails.
+If you hardcode an ID, the [Logstash Versioned Plugin Reference](logstash-docs-md://vpr/integration-plugins.md) builds correctly the first time. The second time the doc build runs, the ID is flagged as a duplicate, and the build fails.


 ## Link formats [link-format]

@@ -136,7 +136,7 @@ match => {

 ## Where’s my doc? [_wheres_my_doc]

-Plugin documentation goes through several steps before it gets published in the [Logstash Versioned Plugin Reference](logstash-docs://reference/integration-plugins.md) and the [Logstash Reference](/reference/index.md).
+Plugin documentation goes through several steps before it gets published in the [Logstash Versioned Plugin Reference](logstash-docs-md://vpr/integration-plugins.md) and the [Logstash Reference](/reference/index.md).

 Here’s an overview of the workflow:

@@ -145,7 +145,7 @@ Here’s an overview of the workflow:
 * Wait for the continuous integration build to complete successfully.
 * Publish the plugin to [https://rubygems.org](https://rubygems.org).
 * A script detects the new or changed version, and picks up the `index.asciidoc` file for inclusion in the doc build.
-* The documentation for your new plugin is published in the [Logstash Versioned Plugin Reference](logstash-docs://reference/integration-plugins.md).
+* The documentation for your new plugin is published in the [Logstash Versioned Plugin Reference](logstash-docs-md://vpr/integration-plugins.md).

 We’re not done yet.

@@ -13,14 +13,14 @@ To get started, go [here](https://download.elastic.co/demos/logstash/gettingstar

 ## Configuring Filebeat to Send Log Lines to Logstash [configuring-filebeat]

-Before you create the Logstash pipeline, you’ll configure Filebeat to send log lines to Logstash. The [Filebeat](https://github.com/elastic/beats/tree/main/filebeat) client is a lightweight, resource-friendly tool that collects logs from files on the server and forwards these logs to your Logstash instance for processing. Filebeat is designed for reliability and low latency. Filebeat has a light resource footprint on the host machine, and the [`Beats input`](/reference/plugins-inputs-beats.md) plugin minimizes the resource demands on the Logstash instance.
+Before you create the Logstash pipeline, you’ll configure Filebeat to send log lines to Logstash. The [Filebeat](https://github.com/elastic/beats/tree/main/filebeat) client is a lightweight, resource-friendly tool that collects logs from files on the server and forwards these logs to your Logstash instance for processing. Filebeat is designed for reliability and low latency. Filebeat has a light resource footprint on the host machine, and the [`Beats input`](logstash-docs-md://lsr/plugins-inputs-beats.md) plugin minimizes the resource demands on the Logstash instance.

 ::::{note}
 In a typical use case, Filebeat runs on a separate machine from the machine running your Logstash instance. For the purposes of this tutorial, Logstash and Filebeat are running on the same machine.
 ::::


-The default Logstash installation includes the [`Beats input`](/reference/plugins-inputs-beats.md) plugin. The Beats input plugin enables Logstash to receive events from the Elastic Beats framework, which means that any Beat written to work with the Beats framework, such as Packetbeat and Metricbeat, can also send event data to Logstash.
+The default Logstash installation includes the [`Beats input`](logstash-docs-md://lsr/plugins-inputs-beats.md) plugin. The Beats input plugin enables Logstash to receive events from the Elastic Beats framework, which means that any Beat written to work with the Beats framework, such as Packetbeat and Metricbeat, can also send event data to Logstash.

 To install Filebeat on your data source machine, download the appropriate package from the Filebeat [product page](https://www.elastic.co/downloads/beats/filebeat). You can also refer to [Filebeat quick start](beats://reference/filebeat/filebeat-installation-configuration.md) for additional installation instructions.

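For context, the Beats listener this tutorial builds on is typically configured like this sketch (5044 is the conventional Beats port):

```
input {
  beats {
    port => 5044  # default port Filebeat ships to
  }
}
```
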
@@ -163,7 +163,7 @@ If your pipeline is working correctly, you should see a series of events like th

 Now you have a working pipeline that reads log lines from Filebeat. However you’ll notice that the format of the log messages is not ideal. You want to parse the log messages to create specific, named fields from the logs. To do this, you’ll use the `grok` filter plugin.

-The [`grok`](/reference/plugins-filters-grok.md) filter plugin is one of several plugins that are available by default in Logstash. For details on how to manage Logstash plugins, see the [reference documentation](/reference/working-with-plugins.md) for the plugin manager.
+The [`grok`](logstash-docs-md://lsr/plugins-filters-grok.md) filter plugin is one of several plugins that are available by default in Logstash. For details on how to manage Logstash plugins, see the [reference documentation](/reference/working-with-plugins.md) for the plugin manager.

 The `grok` filter plugin enables you to parse the unstructured log data into something structured and queryable.

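The grok configuration this passage leads into is, in sketch form (using the stock `COMBINEDAPACHELOG` pattern that fits the tutorial's web-log scenario):

```
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }  # parse Apache-style access logs into named fields
  }
}
```
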
@@ -305,7 +305,7 @@ Notice that the event includes the original message, but the log message is also

 ### Enhancing Your Data with the Geoip Filter Plugin [configuring-geoip-plugin]

-In addition to parsing log data for better searches, filter plugins can derive supplementary information from existing data. As an example, the [`geoip`](/reference/plugins-filters-geoip.md) plugin looks up IP addresses, derives geographic location information from the addresses, and adds that location information to the logs.
+In addition to parsing log data for better searches, filter plugins can derive supplementary information from existing data. As an example, the [`geoip`](logstash-docs-md://lsr/plugins-filters-geoip.md) plugin looks up IP addresses, derives geographic location information from the addresses, and adds that location information to the logs.

 Configure your Logstash instance to use the `geoip` filter plugin by adding the following lines to the `filter` section of the `first-pipeline.conf` file:

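The `filter` addition the sentence above announces looks roughly like this sketch (assuming the `clientip` field produced by the earlier grok step):

```
filter {
  geoip {
    source => "clientip"  # field holding the IP address to look up
  }
}
```
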
@@ -1,67 +0,0 @@
----
-mapped_pages:
-  - https://www.elastic.co/guide/en/logstash/current/codec-plugins.html
----
-
-# Codec plugins [codec-plugins]
-
-A codec plugin changes the data representation of an event. Codecs are essentially stream filters that can operate as part of an input or output.
-
-The following codec plugins are available below. For a list of Elastic supported plugins, please consult the [Support Matrix](https://www.elastic.co/support/matrix#show_logstash_plugins).
-
-| | | |
-| --- | --- | --- |
-| Plugin | Description | Github repository |
-| [avro](/reference/plugins-codecs-avro.md) | Reads serialized Avro records as Logstash events | [logstash-codec-avro](https://github.com/logstash-plugins/logstash-codec-avro) |
-| [cef](/reference/plugins-codecs-cef.md) | Reads the ArcSight Common Event Format (CEF). | [logstash-codec-cef](https://github.com/logstash-plugins/logstash-codec-cef) |
-| [cloudfront](/reference/plugins-codecs-cloudfront.md) | Reads AWS CloudFront reports | [logstash-codec-cloudfront](https://github.com/logstash-plugins/logstash-codec-cloudfront) |
-| [cloudtrail](/reference/plugins-codecs-cloudtrail.md) | Reads AWS CloudTrail log files | [logstash-codec-cloudtrail](https://github.com/logstash-plugins/logstash-codec-cloudtrail) |
-| [collectd](/reference/plugins-codecs-collectd.md) | Reads events from the `collectd` binary protocol using UDP. | [logstash-codec-collectd](https://github.com/logstash-plugins/logstash-codec-collectd) |
-| [csv](/reference/plugins-codecs-csv.md) | Takes CSV data, parses it, and passes it along. | [logstash-codec-csv](https://github.com/logstash-plugins/logstash-codec-csv) |
-| [dots](/reference/plugins-codecs-dots.md) | Sends 1 dot per event to `stdout` for performance tracking | [logstash-codec-dots](https://github.com/logstash-plugins/logstash-codec-dots) |
-| [edn](/reference/plugins-codecs-edn.md) | Reads EDN format data | [logstash-codec-edn](https://github.com/logstash-plugins/logstash-codec-edn) |
-| [edn_lines](/reference/plugins-codecs-edn_lines.md) | Reads newline-delimited EDN format data | [logstash-codec-edn_lines](https://github.com/logstash-plugins/logstash-codec-edn_lines) |
-| [es_bulk](/reference/plugins-codecs-es_bulk.md) | Reads the Elasticsearch bulk format into separate events, along with metadata | [logstash-codec-es_bulk](https://github.com/logstash-plugins/logstash-codec-es_bulk) |
-| [fluent](/reference/plugins-codecs-fluent.md) | Reads the `fluentd` `msgpack` schema | [logstash-codec-fluent](https://github.com/logstash-plugins/logstash-codec-fluent) |
-| [graphite](/reference/plugins-codecs-graphite.md) | Reads `graphite` formatted lines | [logstash-codec-graphite](https://github.com/logstash-plugins/logstash-codec-graphite) |
-| [gzip_lines](/reference/plugins-codecs-gzip_lines.md) | Reads `gzip` encoded content | [logstash-codec-gzip_lines](https://github.com/logstash-plugins/logstash-codec-gzip_lines) |
-| [jdots](/reference/plugins-codecs-jdots.md) | Renders each processed event as a dot | [core plugin](https://github.com/elastic/logstash/blob/master/logstash-core/src/main/java/org/logstash/plugins/codecs/Dots.java) |
-| [java_line](/reference/plugins-codecs-java_line.md) | Encodes and decodes line-oriented text data | [core plugin](https://github.com/elastic/logstash/blob/master/logstash-core/src/main/java/org/logstash/plugins/codecs/Line.java) |
-| [java_plain](/reference/plugins-codecs-java_plain.md) | Processes text data with no delimiters between events | [core plugin](https://github.com/elastic/logstash/blob/master/logstash-core/src/main/java/org/logstash/plugins/codecs/Plain.java) |
-| [json](/reference/plugins-codecs-json.md) | Reads JSON formatted content, creating one event per element in a JSON array | [logstash-codec-json](https://github.com/logstash-plugins/logstash-codec-json) |
-| [json_lines](/reference/plugins-codecs-json_lines.md) | Reads newline-delimited JSON | [logstash-codec-json_lines](https://github.com/logstash-plugins/logstash-codec-json_lines) |
-| [line](/reference/plugins-codecs-line.md) | Reads line-oriented text data | [logstash-codec-line](https://github.com/logstash-plugins/logstash-codec-line) |
-| [msgpack](/reference/plugins-codecs-msgpack.md) | Reads MessagePack encoded content | [logstash-codec-msgpack](https://github.com/logstash-plugins/logstash-codec-msgpack) |
-| [multiline](/reference/plugins-codecs-multiline.md) | Merges multiline messages into a single event | [logstash-codec-multiline](https://github.com/logstash-plugins/logstash-codec-multiline) |
-| [netflow](/reference/plugins-codecs-netflow.md) | Reads Netflow v5 and Netflow v9 data | [logstash-codec-netflow](https://github.com/logstash-plugins/logstash-codec-netflow) |
-| [nmap](/reference/plugins-codecs-nmap.md) | Reads Nmap data in XML format | [logstash-codec-nmap](https://github.com/logstash-plugins/logstash-codec-nmap) |
-| [plain](/reference/plugins-codecs-plain.md) | Reads plaintext with no delimiting between events | [logstash-codec-plain](https://github.com/logstash-plugins/logstash-codec-plain) |
-| [protobuf](/reference/plugins-codecs-protobuf.md) | Reads protobuf messages and converts to Logstash Events | [logstash-codec-protobuf](https://github.com/logstash-plugins/logstash-codec-protobuf) |
-| [rubydebug](/reference/plugins-codecs-rubydebug.md) | Applies the Ruby Awesome Print library to Logstash events | [logstash-codec-rubydebug](https://github.com/logstash-plugins/logstash-codec-rubydebug) |

@@ -50,7 +50,7 @@ input {

 In this example, two settings are configured for each of the file inputs: *port* and *tags*.

-The settings you can configure vary according to the plugin type. For information about each plugin, see [Input Plugins](/reference/input-plugins.md), [Output Plugins](/reference/output-plugins.md), [Filter Plugins](/reference/filter-plugins.md), and [Codec Plugins](/reference/codec-plugins.md).
+The settings you can configure vary according to the plugin type. For information about each plugin, see [Input Plugins](logstash-docs-md://lsr/input-plugins.md), [Output Plugins](logstash-docs-md://lsr/output-plugins.md), [Filter Plugins](logstash-docs-md://lsr/filter-plugins.md), and [Codec Plugins](logstash-docs-md://lsr/codec-plugins.md).


 ## Value types [plugin-value-types]

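As a sketch of plugin-specific versus common settings (values hypothetical):

```
input {
  tcp {
    port => 5000          # setting specific to the tcp input
    tags => ["from-tcp"]  # common setting available to input plugins
  }
}
```
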
@@ -113,7 +113,7 @@ A codec is the name of Logstash codec used to represent the data. Codecs can be

 Input codecs provide a convenient way to decode your data before it enters the input. Output codecs provide a convenient way to encode your data before it leaves the output. Using an input or output codec eliminates the need for a separate filter in your Logstash pipeline.

-A list of available codecs can be found at the [Codec Plugins](/reference/codec-plugins.md) page.
+A list of available codecs can be found at the [Codec Plugins](logstash-docs-md://lsr/codec-plugins.md) page.

 Example:

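A sketch of supplying a codec as a setting value (here the `rubydebug` codec on the stdout output):

```
output {
  stdout { codec => rubydebug }  # pretty-print each event
}
```
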
@@ -5,7 +5,7 @@ mapped_pages:

 # Sending data to Elastic Cloud (hosted Elasticsearch Service) [connecting-to-cloud]

-Our hosted {{ess}} on [Elastic Cloud](https://cloud.elastic.co/) simplifies safe, secure communication between {{ls}} and {{es}}. When you configure the Elasticsearch output plugin to use [`cloud_id`](/reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-cloud_id) with either the [`cloud_auth` option](/reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-cloud_auth) or the [`api_key` option](/reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-api_key), no additional SSL configuration is needed.
+Our hosted {{ess}} on [Elastic Cloud](https://cloud.elastic.co/) simplifies safe, secure communication between {{ls}} and {{es}}. When you configure the Elasticsearch output plugin to use [`cloud_id`](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-cloud_id) with either the [`cloud_auth` option](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-cloud_auth) or the [`api_key` option](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-api_key), no additional SSL configuration is needed.

 Examples:

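The `Examples:` block is cut off at the hunk boundary; a representative sketch with placeholder credentials:

```
output {
  elasticsearch {
    cloud_id => "<cloud id>"            # placeholder: copy from the Elastic Cloud console
    cloud_auth => "<user>:<password>"   # placeholder credentials
  }
}
```
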
@@ -28,9 +28,9 @@ Cloud Auth is optional. Construct this value by following this format "<username

 The Elasticsearch input, output, and filter plugins support cloud_id and cloud_auth in their configurations.

-* [Elasticsearch input plugin](/reference/plugins-inputs-elasticsearch.md#plugins-inputs-elasticsearch-cloud_id)
-* [Elasticsearch filter plugin](/reference/plugins-filters-elasticsearch.md#plugins-filters-elasticsearch-cloud_id)
-* [Elasticsearch output plugin](/reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-cloud_id)
+* [Elasticsearch input plugin](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md#plugins-inputs-elasticsearch-cloud_id)
+* [Elasticsearch filter plugin](logstash-docs-md://lsr/plugins-filters-elasticsearch.md#plugins-filters-elasticsearch-cloud_id)
+* [Elasticsearch output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-cloud_id)


 ## Sending {{ls}} management data to {{es}} Services [cloud-id-mgmt]

@@ -7,7 +7,7 @@ mapped_pages:

 The plugins described in this section are useful for core operations, such as mutating and dropping events.

-[date filter](/reference/plugins-filters-date.md)
+[date filter](logstash-docs-md://lsr/plugins-filters-date.md)
 : Parses dates from fields to use as Logstash timestamps for events.

 The following config parses a field called `logdate` to set the Logstash timestamp:

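The announced config is truncated by the hunk boundary; a sketch of the idea (the date format string is illustrative):

```
filter {
  date {
    match => ["logdate", "MMM dd yyyy HH:mm:ss"]  # parse logdate and set @timestamp
  }
}
```
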
@@ -21,7 +21,7 @@ The plugins described in this section are useful for core operations, such as mu
 ```


-[drop filter](/reference/plugins-filters-drop.md)
+[drop filter](logstash-docs-md://lsr/plugins-filters-drop.md)
 : Drops events. This filter is typically used in combination with conditionals.

 The following config drops `debug` level log messages:

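A sketch of the drop-on-debug config described above (assumes a `loglevel` field):

```
filter {
  if [loglevel] == "debug" {
    drop { }  # discard debug-level events entirely
  }
}
```
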
@@ -35,7 +35,7 @@ The plugins described in this section are useful for core operations, such as mu
 ```


-[fingerprint filter](/reference/plugins-filters-fingerprint.md)
+[fingerprint filter](logstash-docs-md://lsr/plugins-filters-fingerprint.md)
 : Fingerprints fields by applying a consistent hash.

 The following config fingerprints the `IP`, `@timestamp`, and `message` fields and adds the hash to a metadata field called `generated_id`:

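A sketch of the fingerprint config described above (the `method` and `key` values are illustrative):

```
filter {
  fingerprint {
    source => ["IP", "@timestamp", "message"]
    method => "SHA1"                          # illustrative hash method
    key => "0123"                             # illustrative key
    target => "[@metadata][generated_id]"     # store the hash in event metadata
  }
}
```
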
@@ -52,7 +52,7 @@ The plugins described in this section are useful for core operations, such as mu
 ```


-[mutate filter](/reference/plugins-filters-mutate.md)
+[mutate filter](logstash-docs-md://lsr/plugins-filters-mutate.md)
 : Performs general mutations on fields. You can rename, remove, replace, and modify fields in your events.

 The following config renames the `HOSTORIP` field to `client_ip`:

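A sketch of the rename described above:

```
filter {
  mutate {
    rename => { "HOSTORIP" => "client_ip" }  # rename the field in place
  }
}
```
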
@@ -76,7 +76,7 @@ The plugins described in this section are useful for core operations, such as mu
 ```


-[ruby filter](/reference/plugins-filters-ruby.md)
+[ruby filter](logstash-docs-md://lsr/plugins-filters-ruby.md)
 : Executes Ruby code.

 The following config executes Ruby code that cancels 90% of the events:

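A sketch of the 90% cancellation example announced above:

```
filter {
  ruby {
    code => "event.cancel if rand <= 0.90"  # randomly drop ~90% of events
  }
}
```
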
@@ -5,7 +5,7 @@ mapped_pages:

 # Creating a Logstash Pipeline [configuration]

-You can create a pipeline by stringing together plugins--[inputs](/reference/input-plugins.md), [outputs](/reference/output-plugins.md), [filters](/reference/filter-plugins.md), and sometimes [codecs](/reference/codec-plugins.md)--in order to process data. To build a Logstash pipeline, create a config file to specify which plugins you want to use and the settings for each plugin.
+You can create a pipeline by stringing together plugins--[inputs](logstash-docs-md://lsr/input-plugins.md), [outputs](logstash-docs-md://lsr/output-plugins.md), [filters](logstash-docs-md://lsr/filter-plugins.md), and sometimes [codecs](logstash-docs-md://lsr/codec-plugins.md)--in order to process data. To build a Logstash pipeline, create a config file to specify which plugins you want to use and the settings for each plugin.

 A very basic pipeline might contain only an input and an output. Most pipelines include at least one filter plugin because that’s where the "transform" part of the ETL (extract, transform, load) magic happens. You can reference event fields in a pipeline and use conditionals to process events when they meet certain criteria.

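A very basic pipeline of the kind this paragraph mentions, as a sketch:

```
input { stdin { } }    # read lines typed on the console
output { stdout { } }  # echo each event back out
```
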
@@ -7,7 +7,7 @@ mapped_pages:

 The plugins described in this section are useful for deserializing data into Logstash events.

-[avro codec](/reference/plugins-codecs-avro.md)
+[avro codec](logstash-docs-md://lsr/plugins-codecs-avro.md)
 : Reads serialized Avro records as Logstash events. This plugin deserializes individual Avro records. It is not for reading Avro files. Avro files have a unique format that must be handled upon input.

 The following config deserializes input from Kafka:

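The Kafka/Avro example is truncated by the hunk boundary; a sketch (topic name and schema path hypothetical):

```
input {
  kafka {
    topics => ["events"]                 # hypothetical topic
    codec => avro {
      schema_uri => "/tmp/schema.avsc"   # hypothetical Avro schema file
    }
  }
}
```
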
@@ -26,7 +26,7 @@ The plugins described in this section are useful for deserializing data into Log
 ```


-[csv filter](/reference/plugins-filters-csv.md)
+[csv filter](logstash-docs-md://lsr/plugins-filters-csv.md)
 : Parses comma-separated value data into individual fields. By default, the filter autogenerates field names (column1, column2, and so on), or you can specify a list of names. You can also change the column separator.

 The following config parses CSV data into the field names specified in the `columns` field:

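A sketch of the `columns`-based CSV parse described above (column names hypothetical):

```
filter {
  csv {
    separator => ","
    columns => ["id", "timestamp", "message"]  # hypothetical column names
  }
}
```
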
@@ -41,7 +41,7 @@ The plugins described in this section are useful for deserializing data into Log
 ```


-[fluent codec](/reference/plugins-codecs-fluent.md)
+[fluent codec](logstash-docs-md://lsr/plugins-codecs-fluent.md)
 : Reads the Fluentd `msgpack` schema.

 The following config decodes logs received from `fluent-logger-ruby`:

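A sketch of receiving fluent-encoded logs over TCP (port illustrative):

```
input {
  tcp {
    port => 4000      # illustrative listening port
    codec => fluent   # decode the Fluentd msgpack schema
  }
}
```
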
@@ -56,7 +56,7 @@ The plugins described in this section are useful for deserializing data into Log
 ```


-[json codec](/reference/plugins-codecs-json.md)
+[json codec](logstash-docs-md://lsr/plugins-codecs-json.md)
 : Decodes (via inputs) and encodes (via outputs) JSON formatted content, creating one event per element in a JSON array.

 The following config decodes the JSON formatted content in a file:

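A sketch of the file/JSON decode described above (path hypothetical):

```
input {
  file {
    path => "/path/to/myfile.json"  # hypothetical path
    codec => json                   # decode JSON content into events
  }
}
```
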
@@ -70,7 +70,7 @@ The plugins described in this section are useful for deserializing data into Log
 ```


-[protobuf codec](/reference/plugins-codecs-protobuf.md)
+[protobuf codec](logstash-docs-md://lsr/plugins-codecs-protobuf.md)
 : Reads protobuf encoded messages and converts them to Logstash events. Requires the protobuf definitions to be compiled as Ruby files. You can compile them by using the [ruby-protoc compiler](https://github.com/codekitchen/ruby-protocol-buffers).

 The following config decodes events from a Kafka stream:

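A sketch of the Kafka/protobuf decode (class name, topic, and include path are hypothetical; the codec needs the compiled Ruby definitions noted above):

```
input {
  kafka {
    topics => ["animals"]                                       # hypothetical topic
    codec => protobuf {
      class_name => "Animals::Unicorn"                          # hypothetical message class
      include_path => ["/path/to/pb_definitions/animals.pb.rb"] # hypothetical compiled definition
    }
  }
}
```
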
@@ -89,7 +89,7 @@ The plugins described in this section are useful for deserializing data into Log
 ```


-[xml filter](/reference/plugins-filters-xml.md)
+[xml filter](logstash-docs-md://lsr/plugins-filters-xml.md)
 : Parses XML into fields.

 The following config parses the whole XML document stored in the `message` field:

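A sketch of parsing the XML held in `message` (the `target` field name is hypothetical):

```
filter {
  xml {
    source => "message"  # field containing the XML document
    target => "parsed"   # hypothetical field to hold the parsed structure
  }
}
```
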
@@ -7,7 +7,7 @@ mapped_pages:

 The dead letter queue (DLQ) is designed as a place to temporarily write events that cannot be processed. The DLQ gives you flexibility to investigate problematic events without blocking the pipeline or losing the events. Your pipeline keeps flowing, and the immediate problem is averted. But those events still need to be addressed.

-You can [process events from the DLQ](#es-proc-dlq) with the [`dead_letter_queue` input plugin](/reference/plugins-inputs-dead_letter_queue.md) .
+You can [process events from the DLQ](#es-proc-dlq) with the [`dead_letter_queue` input plugin](logstash-docs-md://lsr/plugins-inputs-dead_letter_queue.md) .

 Processing events does not delete items from the queue, and the DLQ sometimes needs attention. See [Track dead letter queue size](#dlq-size) and [Clear the dead letter queue](#dlq-clear) for more info.

@@ -16,13 +16,13 @@ Processing events does not delete items from the queue, and the DLQ sometimes ne
 By default, when Logstash encounters an event that it cannot process because the data contains a mapping error or some other issue, the Logstash pipeline either hangs or drops the unsuccessful event. In order to protect against data loss in this situation, you can [configure Logstash](#configuring-dlq) to write unsuccessful events to a dead letter queue instead of dropping them.

 ::::{note}
-The dead letter queue is currently supported only for the [{{es}} output](/reference/plugins-outputs-elasticsearch.md) and [conditional statements evaluation](/reference/event-dependent-configuration.md#conditionals). The dead letter queue is used for documents with response codes of 400 or 404, both of which indicate an event that cannot be retried. It’s also used when a conditional evaluation encounter an error.
+The dead letter queue is currently supported only for the [{{es}} output](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) and [conditional statements evaluation](/reference/event-dependent-configuration.md#conditionals). The dead letter queue is used for documents with response codes of 400 or 404, both of which indicate an event that cannot be retried. It’s also used when a conditional evaluation encounter an error.
 ::::


 Each event written to the dead letter queue includes the original event, metadata that describes the reason the event could not be processed, information about the plugin that wrote the event, and the timestamp when the event entered the dead letter queue.

-To process events in the dead letter queue, create a Logstash pipeline configuration that uses the [`dead_letter_queue` input plugin](/reference/plugins-inputs-dead_letter_queue.md) to read from the queue. See [Processing events in the dead letter queue](#processing-dlq-events) for more information.
+To process events in the dead letter queue, create a Logstash pipeline configuration that uses the [`dead_letter_queue` input plugin](logstash-docs-md://lsr/plugins-inputs-dead_letter_queue.md) to read from the queue. See [Processing events in the dead letter queue](#processing-dlq-events) for more information.

 :::{image} images/dead_letter_queue.png
 :alt: Diagram showing pipeline reading from the dead letter queue

@@ -121,7 +121,7 @@ input {

 ## Processing events in the dead letter queue [processing-dlq-events]

-When you are ready to process events in the dead letter queue, you create a pipeline that uses the [`dead_letter_queue` input plugin](/reference/plugins-inputs-dead_letter_queue.md) to read from the dead letter queue. The pipeline configuration that you use depends, of course, on what you need to do. For example, if the dead letter queue contains events that resulted from a mapping error in Elasticsearch, you can create a pipeline that reads the "dead" events, removes the field that caused the mapping issue, and re-indexes the clean events into Elasticsearch.
+When you are ready to process events in the dead letter queue, you create a pipeline that uses the [`dead_letter_queue` input plugin](logstash-docs-md://lsr/plugins-inputs-dead_letter_queue.md) to read from the dead letter queue. The pipeline configuration that you use depends, of course, on what you need to do. For example, if the dead letter queue contains events that resulted from a mapping error in Elasticsearch, you can create a pipeline that reads the "dead" events, removes the field that caused the mapping issue, and re-indexes the clean events into Elasticsearch.

 The following example shows a simple pipeline that reads events from the dead letter queue and writes the events, including metadata, to standard output:

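The example pipeline announced above is truncated by the hunk boundary; its canonical shape is roughly (path illustrative):

```
input {
  dead_letter_queue {
    path => "/path/to/data/dead_letter_queue"  # illustrative DLQ directory
    commit_offsets => true                     # remember position between restarts
  }
}
output {
  stdout {
    codec => rubydebug { metadata => true }    # include the DLQ metadata in the output
  }
}
```
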
@@ -151,7 +151,7 @@ For another example, see [Example: Processing data that has mapping errors](#dlq
 When the pipeline has finished processing all the events in the dead letter queue, it will continue to run and process new events as they stream into the queue. This means that you do not need to stop your production system to handle events in the dead letter queue.

 ::::{note}
-Events emitted from the [`dead_letter_queue` input plugin](/reference/plugins-inputs-dead_letter_queue.md) plugin will not be resubmitted to the dead letter queue if they cannot be processed correctly.
+Events emitted from the [`dead_letter_queue` input plugin](logstash-docs-md://lsr/plugins-inputs-dead_letter_queue.md) plugin will not be resubmitted to the dead letter queue if they cannot be processed correctly.
 ::::

@@ -223,7 +223,7 @@ output {
 }
 ```

-1. The [`dead_letter_queue` input](/reference/plugins-inputs-dead_letter_queue.md) reads from the dead letter queue.
+1. The [`dead_letter_queue` input](logstash-docs-md://lsr/plugins-inputs-dead_letter_queue.md) reads from the dead letter queue.
 2. The `mutate` filter removes the problem field called `location`.
 3. The clean event is sent to Elasticsearch, where it can be indexed because the mapping issue is resolved.

@@ -40,7 +40,7 @@ Beats and Logstash make ingest awesome. Together, they provide a comprehensive s

 ### Beats and Logstash [_beats_and_logstash]

-Beats run across thousands of edge host servers, collecting, tailing, and shipping logs to Logstash. Logstash serves as the centralized streaming engine for data unification and enrichment. The [Beats input plugin](/reference/plugins-inputs-beats.md) exposes a secure, acknowledgement-based endpoint for Beats to send data to Logstash.
+Beats run across thousands of edge host servers, collecting, tailing, and shipping logs to Logstash. Logstash serves as the centralized streaming engine for data unification and enrichment. The [Beats input plugin](logstash-docs-md://lsr/plugins-inputs-beats.md) exposes a secure, acknowledgement-based endpoint for Beats to send data to Logstash.

 :::{image} images/deploy2.png
 :alt: deploy2

@@ -75,7 +75,7 @@ Make sure `queue.checkpoint.writes: 1` is set for at-least-once guarantees. For

 ### Processing [_processing]

-Logstash will commonly extract fields with [grok](/reference/plugins-filters-grok.md) or [dissect](/reference/plugins-filters-dissect.md), augment [geographical](/reference/plugins-filters-geoip.md) info, and can further enrich events with [file](/reference/plugins-filters-translate.md), [database](/reference/plugins-filters-jdbc_streaming.md), or [Elasticsearch](/reference/plugins-filters-elasticsearch.md) lookup datasets. Be aware that processing complexity can affect overall throughput and CPU utilization. Make sure to check out the other [available filter plugins](/reference/filter-plugins.md).
+Logstash will commonly extract fields with [grok](logstash-docs-md://lsr/plugins-filters-grok.md) or [dissect](logstash-docs-md://lsr/plugins-filters-dissect.md), augment [geographical](logstash-docs-md://lsr/plugins-filters-geoip.md) info, and can further enrich events with [file](logstash-docs-md://lsr/plugins-filters-translate.md), [database](logstash-docs-md://lsr/plugins-filters-jdbc_streaming.md), or [Elasticsearch](logstash-docs-md://lsr/plugins-filters-elasticsearch.md) lookup datasets. Be aware that processing complexity can affect overall throughput and CPU utilization. Make sure to check out the other [available filter plugins](logstash-docs-md://lsr/filter-plugins.md).


 ### Secure Transport [_secure_transport]

@@ -104,7 +104,7 @@ Users may have other mechanisms of collecting logging data, and it’s easy to i

 ### TCP, UDP, and HTTP Protocols [_tcp_udp_and_http_protocols]

-The TCP, UDP, and HTTP protocols are common ways to feed data into Logstash. Logstash can expose endpoint listeners with the respective [TCP](/reference/plugins-inputs-tcp.md), [UDP](/reference/plugins-inputs-udp.md), and [HTTP](/reference/plugins-inputs-http.md) input plugins. The data sources enumerated below are typically ingested through one of these three protocols.
+The TCP, UDP, and HTTP protocols are common ways to feed data into Logstash. Logstash can expose endpoint listeners with the respective [TCP](logstash-docs-md://lsr/plugins-inputs-tcp.md), [UDP](logstash-docs-md://lsr/plugins-inputs-udp.md), and [HTTP](logstash-docs-md://lsr/plugins-inputs-http.md) input plugins. The data sources enumerated below are typically ingested through one of these three protocols.

 ::::{note}
 The TCP and UDP protocols do not support application-level acknowledgements, so connectivity issues may result in data loss.

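A sketch of exposing one such listener, here with the HTTP input (port illustrative):

```
input {
  http {
    port => 8080  # illustrative port; events arrive as HTTP request bodies
  }
}
```
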
@@ -119,20 +119,20 @@ For high availability scenarios, a third-party hardware or software load balance
 Although Beats may already satisfy your data ingest use case, network and security datasets come in a variety of forms. Let’s touch on a few other ingestion points.

 * Network wire data - collect and analyze network traffic with [Packetbeat](https://www.elastic.co/products/beats/packetbeat).
-* Netflow v5/v9/v10 - Logstash understands data from Netflow/IPFIX exporters with the [Netflow codec](/reference/plugins-codecs-netflow.md).
-* Nmap - Logstash accepts and parses Nmap XML data with the [Nmap codec](/reference/plugins-codecs-nmap.md).
-* SNMP trap - Logstash has a native [SNMP trap input](/reference/plugins-inputs-snmptrap.md).
-* CEF - Logstash accepts and parses CEF data from systems like Arcsight SmartConnectors with the [CEF codec](/reference/plugins-codecs-cef.md).
+* Netflow v5/v9/v10 - Logstash understands data from Netflow/IPFIX exporters with the [Netflow codec](logstash-docs-md://lsr/plugins-codecs-netflow.md).
+* Nmap - Logstash accepts and parses Nmap XML data with the [Nmap codec](logstash-docs-md://lsr/plugins-codecs-nmap.md).
+* SNMP trap - Logstash has a native [SNMP trap input](logstash-docs-md://lsr/plugins-inputs-snmptrap.md).
+* CEF - Logstash accepts and parses CEF data from systems like Arcsight SmartConnectors with the [CEF codec](logstash-docs-md://lsr/plugins-codecs-cef.md).


 ### Centralized Syslog Servers [_centralized_syslog_servers]

-Existing syslog server technologies like rsyslog and syslog-ng generally send syslog over to Logstash TCP or UDP endpoints for extraction, processing, and persistence. If the data format conforms to RFC3164, it can be fed directly to the [Logstash syslog input](/reference/plugins-inputs-syslog.md).
+Existing syslog server technologies like rsyslog and syslog-ng generally send syslog over to Logstash TCP or UDP endpoints for extraction, processing, and persistence. If the data format conforms to RFC3164, it can be fed directly to the [Logstash syslog input](logstash-docs-md://lsr/plugins-inputs-syslog.md).


 ### Infrastructure & Application Data and IoT [_infrastructure_application_data_and_iot]

-Infrastructure and application metrics can be collected with [Metricbeat](https://www.elastic.co/products/beats/metricbeat), but applications can also send webhooks to a Logstash HTTP input or have metrics polled from an HTTP endpoint with the [HTTP poller input plugin](/reference/plugins-inputs-http_poller.md).
+Infrastructure and application metrics can be collected with [Metricbeat](https://www.elastic.co/products/beats/metricbeat), but applications can also send webhooks to a Logstash HTTP input or have metrics polled from an HTTP endpoint with the [HTTP poller input plugin](logstash-docs-md://lsr/plugins-inputs-http_poller.md).

 For applications that log with log4j2, it’s recommended to use the SocketAppender to send JSON to the Logstash TCP input. Alternatively, log4j2 can also log to a file for collection with FIlebeat. Usage of the log4j1 SocketAppender is not recommended.

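A sketch of the syslog endpoint mentioned above (a high port is used since 514 usually needs elevated privileges):

```
input {
  syslog {
    port => 5514  # illustrative port; parses RFC3164-formatted messages
  }
}
```
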
@@ -149,7 +149,7 @@ For users who want to integrate data from existing Kafka deployments or require
 :alt: deploy4
 :::

-The other TCP, UDP, and HTTP sources can persist to Kafka with Logstash as a conduit to achieve high availability in lieu of a load balancer. A group of Logstash nodes can then consume from topics with the [Kafka input](/reference/plugins-inputs-kafka.md) to further transform and enrich the data in transit.
+The other TCP, UDP, and HTTP sources can persist to Kafka with Logstash as a conduit to achieve high availability in lieu of a load balancer. A group of Logstash nodes can then consume from topics with the [Kafka input](logstash-docs-md://lsr/plugins-inputs-kafka.md) to further transform and enrich the data in transit.


 ### Resiliency and Recovery [_resiliency_and_recovery]

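A sketch of consuming from Kafka topics as described (broker, topic, and group names hypothetical):

```
input {
  kafka {
    bootstrap_servers => "kafka:9092"   # hypothetical broker address
    topics => ["logstash-events"]       # hypothetical topic
    group_id => "logstash-consumers"    # shared group lets multiple nodes split the load
  }
}
```
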
@@ -161,5 +161,5 @@ If Kafka is configured to retain data for an extended period of time, data can b

 ### Other Messaging Queue Integrations [_other_messaging_queue_integrations]

-Although an additional queuing layer is not required, Logstash can consume from a myriad of other message queuing technologies like [RabbitMQ](/reference/plugins-inputs-rabbitmq.md) and [Redis](/reference/plugins-inputs-redis.md). It also supports ingestion from hosted queuing services like [Pub/Sub](/reference/plugins-inputs-google_pubsub.md), [Kinesis](/reference/plugins-inputs-kinesis.md), and [SQS](/reference/plugins-inputs-sqs.md).
+Although an additional queuing layer is not required, Logstash can consume from a myriad of other message queuing technologies like [RabbitMQ](logstash-docs-md://lsr/plugins-inputs-rabbitmq.md) and [Redis](logstash-docs-md://lsr/plugins-inputs-redis.md). It also supports ingestion from hosted queuing services like [Pub/Sub](logstash-docs-md://lsr/plugins-inputs-google_pubsub.md), [Kinesis](logstash-docs-md://lsr/plugins-inputs-kinesis.md), and [SQS](logstash-docs-md://lsr/plugins-inputs-sqs.md).

@@ -19,7 +19,7 @@ docker run --rm -it -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.

 Every file in the host directory `~/pipeline/` will then be parsed by Logstash as pipeline configuration.

-If you don’t provide configuration to Logstash, it will run with a minimal config that listens for messages from the [Beats input plugin](/reference/plugins-inputs-beats.md) and echoes any that are received to `stdout`. In this case, the startup logs will be similar to the following:
+If you don’t provide configuration to Logstash, it will run with a minimal config that listens for messages from the [Beats input plugin](logstash-docs-md://lsr/plugins-inputs-beats.md) and echoes any that are received to `stdout`. In this case, the startup logs will be similar to the following:

 ```text
 Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties.

@@ -92,7 +92,7 @@ event.set("[foo][bar][c]", [3, 4])

 ## Ruby Filter [_ruby_filter]

-The [Ruby Filter](/reference/plugins-filters-ruby.md) can be used to execute any ruby code and manipulate event data using the API described above. For example, using the new API:
+The [Ruby Filter](logstash-docs-md://lsr/plugins-filters-ruby.md) can be used to execute any ruby code and manipulate event data using the API described above. For example, using the new API:

 ```ruby
 filter {

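This hunk ends inside the `ruby` example; the new-API usage it refers to continues along these lines (a sketch; field names illustrative):

```
filter {
  ruby {
    code => 'event.set("lowercase_field", event.get("message").to_s.downcase)'  # read with event.get, write with event.set
  }
}
```
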
@@ -7,7 +7,7 @@ mapped_pages:

 The plugins described in this section are useful for extracting fields and parsing unstructured data into fields.

-[dissect filter](/reference/plugins-filters-dissect.md)
+[dissect filter](logstash-docs-md://lsr/plugins-filters-dissect.md)
 : Extracts unstructured event data into fields by using delimiters. The dissect filter does not use regular expressions and is very fast. However, if the structure of the data varies from line to line, the grok filter is more suitable.

 For example, let’s say you have a log that contains the following message:

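A sketch of a dissect mapping of the sort described (`%{+ts}` appends to the `ts` field; the layout assumed here is a space-delimited syslog-like line):

```
filter {
  dissect {
    mapping => {
      "message" => "%{ts} %{+ts} %{+ts} %{src} %{msg}"  # illustrative layout: date parts, source, rest of line
    }
  }
}
```
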
@@ -44,7 +44,7 @@ The plugins described in this section are useful for extracting fields and parsi
 ```


-[kv filter](/reference/plugins-filters-kv.md)
+[kv filter](logstash-docs-md://lsr/plugins-filters-kv.md)
 : Parses key-value pairs.

 For example, let’s say you have a log message that contains the following key-value pairs:

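A sketch of the kv parse described above (the defaults split fields on whitespace and key/value on `=`):

```
filter {
  kv { }  # e.g. "ip=1.2.3.4 error=REFUSED" becomes the fields ip and error
}
```
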
@@ -67,7 +67,7 @@ The plugins described in this section are useful for extracting fields and parsi
 * `error: REFUSED`


-[grok filter](/reference/plugins-filters-grok.md)
+[grok filter](logstash-docs-md://lsr/plugins-filters-grok.md)
 : Parses unstructured event data into fields. This tool is perfect for syslog logs, Apache and other webserver logs, MySQL logs, and in general, any log format that is generally written for humans and not computer consumption. Grok works by combining text patterns into something that matches your logs.

 For example, let’s say you have an HTTP request log that contains the following message:

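The truncated walkthrough here is the canonical grok example; its config is roughly:

```
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
```
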
@ -1,113 +0,0 @@
|
|||
---
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
|
||||
---
|
||||
|
||||
# Filter plugins [filter-plugins]
|
||||
|
||||
A filter plugin performs intermediary processing on an event. Filters are often applied conditionally depending on the characteristics of the event.
|
||||
|
||||
The following filter plugins are available below. For a list of Elastic supported plugins, please consult the [Support Matrix](https://www.elastic.co/support/matrix#show_logstash_plugins).
|
||||
|
||||
| | | |
|
||||
| --- | --- | --- |
|
||||
| Plugin | Description | Github repository |
|
||||
| [age](/reference/plugins-filters-age.md) | Calculates the age of an event by subtracting the event timestamp from the current timestamp | [logstash-filter-age](https://github.com/logstash-plugins/logstash-filter-age) |
|
||||
| [aggregate](/reference/plugins-filters-aggregate.md) | Aggregates information from several events originating with a single task | [logstash-filter-aggregate](https://github.com/logstash-plugins/logstash-filter-aggregate) |
|
||||
| [alter](/reference/plugins-filters-alter.md) | Performs general alterations to fields that the `mutate` filter does not handle | [logstash-filter-alter](https://github.com/logstash-plugins/logstash-filter-alter) |
|
||||
| [bytes](/reference/plugins-filters-bytes.md) | Parses string representations of computer storage sizes, such as "123 MB" or "5.6gb", into their numeric value in bytes | [logstash-filter-bytes](https://github.com/logstash-plugins/logstash-filter-bytes) |
|
||||
| [cidr](/reference/plugins-filters-cidr.md) | Checks IP addresses against a list of network blocks | [logstash-filter-cidr](https://github.com/logstash-plugins/logstash-filter-cidr) |
|
||||
| [cipher](/reference/plugins-filters-cipher.md) | Applies or removes a cipher to an event | [logstash-filter-cipher](https://github.com/logstash-plugins/logstash-filter-cipher) |
|
||||
| [clone](/reference/plugins-filters-clone.md) | Duplicates events | [logstash-filter-clone](https://github.com/logstash-plugins/logstash-filter-clone) |
|
||||
| [csv](/reference/plugins-filters-csv.md) | Parses comma-separated value data into individual fields | [logstash-filter-csv](https://github.com/logstash-plugins/logstash-filter-csv) |
|
||||
| [date](/reference/plugins-filters-date.md) | Parses dates from fields to use as the Logstash timestamp for an event | [logstash-filter-date](https://github.com/logstash-plugins/logstash-filter-date) |
|
||||
| [de_dot](/reference/plugins-filters-de_dot.md) | Computationally expensive filter that removes dots from a field name | [logstash-filter-de_dot](https://github.com/logstash-plugins/logstash-filter-de_dot) |
|
||||
| [dissect](/reference/plugins-filters-dissect.md) | Extracts unstructured event data into fields using delimiters | [logstash-filter-dissect](https://github.com/logstash-plugins/logstash-filter-dissect) |
|
||||
| [dns](/reference/plugins-filters-dns.md) | Performs a standard or reverse DNS lookup | [logstash-filter-dns](https://github.com/logstash-plugins/logstash-filter-dns) |
|
||||
| [drop](/reference/plugins-filters-drop.md) | Drops all events | [logstash-filter-drop](https://github.com/logstash-plugins/logstash-filter-drop) |
|
||||
| [elapsed](/reference/plugins-filters-elapsed.md) | Calculates the elapsed time between a pair of events | [logstash-filter-elapsed](https://github.com/logstash-plugins/logstash-filter-elapsed) |
|
||||
| [elastic_integration](/reference/plugins-filters-elastic_integration.md) | Provides additional {{ls}} processing on data from Elastic integrations | [logstash-filter-elastic_integration](https://github.com/elastic/logstash-filter-elastic_integration) |
|
||||
| [elasticsearch](/reference/plugins-filters-elasticsearch.md) | Copies fields from previous log events in Elasticsearch to current events | [logstash-filter-elasticsearch](https://github.com/logstash-plugins/logstash-filter-elasticsearch) |
|
||||
| [environment](/reference/plugins-filters-environment.md) | Stores environment variables as metadata sub-fields | [logstash-filter-environment](https://github.com/logstash-plugins/logstash-filter-environment) |
|
||||
| [extractnumbers](/reference/plugins-filters-extractnumbers.md) | Extracts numbers from a string | [logstash-filter-extractnumbers](https://github.com/logstash-plugins/logstash-filter-extractnumbers) |
|
||||
| [fingerprint](/reference/plugins-filters-fingerprint.md) | Fingerprints fields by replacing values with a consistent hash | [logstash-filter-fingerprint](https://github.com/logstash-plugins/logstash-filter-fingerprint) |
|
||||
| [geoip](/reference/plugins-filters-geoip.md) | Adds geographical information about an IP address | [logstash-filter-geoip](https://github.com/logstash-plugins/logstash-filter-geoip) |
|
||||
| [grok](/reference/plugins-filters-grok.md) | Parses unstructured event data into fields | [logstash-filter-grok](https://github.com/logstash-plugins/logstash-filter-grok) |
| [http](/reference/plugins-filters-http.md) | Provides integration with external web services/REST APIs | [logstash-filter-http](https://github.com/logstash-plugins/logstash-filter-http) |
| [i18n](/reference/plugins-filters-i18n.md) | Removes special characters from a field | [logstash-filter-i18n](https://github.com/logstash-plugins/logstash-filter-i18n) |
| [java_uuid](/reference/plugins-filters-java_uuid.md) | Generates a UUID and adds it to each processed event | [core plugin](https://github.com/elastic/logstash/blob/master/logstash-core/src/main/java/org/logstash/plugins/filters/Uuid.java) |
| [jdbc_static](/reference/plugins-filters-jdbc_static.md) | Enriches events with data pre-loaded from a remote database | [logstash-integration-jdbc](https://github.com/logstash-plugins/logstash-integration-jdbc) |
| [jdbc_streaming](/reference/plugins-filters-jdbc_streaming.md) | Enrich events with your database data | [logstash-integration-jdbc](https://github.com/logstash-plugins/logstash-integration-jdbc) |
| [json](/reference/plugins-filters-json.md) | Parses JSON events | [logstash-filter-json](https://github.com/logstash-plugins/logstash-filter-json) |
| [json_encode](/reference/plugins-filters-json_encode.md) | Serializes a field to JSON | [logstash-filter-json_encode](https://github.com/logstash-plugins/logstash-filter-json_encode) |
| [kv](/reference/plugins-filters-kv.md) | Parses key-value pairs | [logstash-filter-kv](https://github.com/logstash-plugins/logstash-filter-kv) |
| [memcached](/reference/plugins-filters-memcached.md) | Provides integration with external data in Memcached | [logstash-filter-memcached](https://github.com/logstash-plugins/logstash-filter-memcached) |
| [metricize](/reference/plugins-filters-metricize.md) | Takes complex events containing a number of metrics and splits these up into multiple events, each holding a single metric | [logstash-filter-metricize](https://github.com/logstash-plugins/logstash-filter-metricize) |
| [metrics](/reference/plugins-filters-metrics.md) | Aggregates metrics | [logstash-filter-metrics](https://github.com/logstash-plugins/logstash-filter-metrics) |
| [mutate](/reference/plugins-filters-mutate.md) | Performs mutations on fields | [logstash-filter-mutate](https://github.com/logstash-plugins/logstash-filter-mutate) |
| [prune](/reference/plugins-filters-prune.md) | Prunes event data based on a list of fields to blacklist or whitelist | [logstash-filter-prune](https://github.com/logstash-plugins/logstash-filter-prune) |
| [range](/reference/plugins-filters-range.md) | Checks that specified fields stay within given size or length limits | [logstash-filter-range](https://github.com/logstash-plugins/logstash-filter-range) |
| [ruby](/reference/plugins-filters-ruby.md) | Executes arbitrary Ruby code | [logstash-filter-ruby](https://github.com/logstash-plugins/logstash-filter-ruby) |
| [sleep](/reference/plugins-filters-sleep.md) | Sleeps for a specified time span | [logstash-filter-sleep](https://github.com/logstash-plugins/logstash-filter-sleep) |
| [split](/reference/plugins-filters-split.md) | Splits multi-line messages, strings, or arrays into distinct events | [logstash-filter-split](https://github.com/logstash-plugins/logstash-filter-split) |
| [syslog_pri](/reference/plugins-filters-syslog_pri.md) | Parses the `PRI` (priority) field of a `syslog` message | [logstash-filter-syslog_pri](https://github.com/logstash-plugins/logstash-filter-syslog_pri) |
| [threats_classifier](/reference/plugins-filters-threats_classifier.md) | Enriches security logs with information about the attacker’s intent | [logstash-filter-threats_classifier](https://github.com/empow/logstash-filter-threats_classifier) |
| [throttle](/reference/plugins-filters-throttle.md) | Throttles the number of events | [logstash-filter-throttle](https://github.com/logstash-plugins/logstash-filter-throttle) |
| [tld](/reference/plugins-filters-tld.md) | Replaces the contents of the default message field with whatever you specify in the configuration | [logstash-filter-tld](https://github.com/logstash-plugins/logstash-filter-tld) |
| [translate](/reference/plugins-filters-translate.md) | Replaces field contents based on a hash or YAML file | [logstash-filter-translate](https://github.com/logstash-plugins/logstash-filter-translate) |
| [truncate](/reference/plugins-filters-truncate.md) | Truncates fields longer than a given length | [logstash-filter-truncate](https://github.com/logstash-plugins/logstash-filter-truncate) |
| [urldecode](/reference/plugins-filters-urldecode.md) | Decodes URL-encoded fields | [logstash-filter-urldecode](https://github.com/logstash-plugins/logstash-filter-urldecode) |
| [useragent](/reference/plugins-filters-useragent.md) | Parses user agent strings into fields | [logstash-filter-useragent](https://github.com/logstash-plugins/logstash-filter-useragent) |
| [uuid](/reference/plugins-filters-uuid.md) | Adds a UUID to events | [logstash-filter-uuid](https://github.com/logstash-plugins/logstash-filter-uuid) |
| [wurfl_device_detection](/reference/plugins-filters-wurfl_device_detection.md) | Enriches logs with device information such as brand, model, OS | [logstash-filter-wurfl_device_detection](https://github.com/WURFL/logstash-filter-wurfl_device_detection) |
| [xml](/reference/plugins-filters-xml.md) | Parses XML into fields | [logstash-filter-xml](https://github.com/logstash-plugins/logstash-filter-xml) |
@@ -17,7 +17,7 @@ You use inputs to get data into Logstash. Some of the more commonly-used inputs
* **redis**: reads from a redis server, using both redis channels and redis lists. Redis is often used as a "broker" in a centralized Logstash installation, which queues Logstash events from remote Logstash "shippers".
* **beats**: processes events sent by [Beats](https://www.elastic.co/downloads/beats).

For more information about the available inputs, see [Input Plugins](/reference/input-plugins.md).
For more information about the available inputs, see [Input Plugins](logstash-docs-md://lsr/input-plugins.md).

## Filters [_filters]
@@ -30,7 +30,7 @@ Filters are intermediary processing devices in the Logstash pipeline. You can co
* **clone**: make a copy of an event, possibly adding or removing fields.
* **geoip**: add information about geographical location of IP addresses (also displays amazing charts in Kibana!)

For more information about the available filters, see [Filter Plugins](/reference/filter-plugins.md).
For more information about the available filters, see [Filter Plugins](logstash-docs-md://lsr/filter-plugins.md).

## Outputs [_outputs]
@@ -42,7 +42,7 @@ Outputs are the final phase of the Logstash pipeline. An event can pass through
* **graphite**: send event data to graphite, a popular open source tool for storing and graphing metrics. [http://graphite.readthedocs.io/en/latest/](http://graphite.readthedocs.io/en/latest/)
* **statsd**: send event data to statsd, a service that "listens for statistics, like counters and timers, sent over UDP and sends aggregates to one or more pluggable backend services". If you’re already using statsd, this could be useful for you!

For more information about the available outputs, see [Output Plugins](/reference/output-plugins.md).
For more information about the available outputs, see [Output Plugins](logstash-docs-md://lsr/output-plugins.md).

## Codecs [_codecs]
@@ -52,7 +52,7 @@ Codecs are basically stream filters that can operate as part of an input or outp
* **json**: encode or decode data in the JSON format.
* **multiline**: merge multiple-line text events such as java exception and stacktrace messages into a single event.

For more information about the available codecs, see [Codec Plugins](/reference/codec-plugins.md).
For more information about the available codecs, see [Codec Plugins](logstash-docs-md://lsr/codec-plugins.md).
@@ -16,11 +16,11 @@ Any type of event can be enriched and transformed with a broad array of input, f
Logstash accelerates your insights by harnessing a greater volume and variety of data.

::::{admonition} {{ls}} to {{serverless-full}}
You’ll use the {{ls}} [{{es}} output plugin](/reference/plugins-outputs-elasticsearch.md) to send data to {{serverless-full}}.
You’ll use the {{ls}} [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) to send data to {{serverless-full}}.
Note these differences between {{es-serverless}} and both {{ess}} and self-managed {{es}}:

* Use **API keys** to access {{serverless-full}} from {{ls}}. Any user-based security settings in your [{{es}} output plugin](/reference/plugins-outputs-elasticsearch.md) configuration are ignored and may cause errors.
* {{serverless-full}} uses **data streams** and [{{dlm}} ({{dlm-init}})](docs-content://manage-data/lifecycle/data-stream.md) instead of {{ilm}} ({{ilm-init}}). Any {{ilm-init}} settings in your [{{es}} output plugin](/reference/plugins-outputs-elasticsearch.md) configuration are ignored and may cause errors.
* Use **API keys** to access {{serverless-full}} from {{ls}}. Any user-based security settings in your [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) configuration are ignored and may cause errors.
* {{serverless-full}} uses **data streams** and [{{dlm}} ({{dlm-init}})](docs-content://manage-data/lifecycle/data-stream.md) instead of {{ilm}} ({{ilm-init}}). Any {{ilm-init}} settings in your [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) configuration are ignored and may cause errors.
* **{{ls}} monitoring** is available through the [{{ls}} Integration](https://github.com/elastic/integrations/blob/main/packages/logstash/_dev/build/docs/README.md) in [Elastic Observability](docs-content://solutions/observability.md) on {{serverless-full}}.
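
Taken together, an output stanza aimed at {{serverless-full}} would look roughly like the following sketch. All values are placeholders, and `api_key` and `data_stream` are the plugin's standard options; this is an illustration, not a verified serverless configuration.

```ruby
output {
  elasticsearch {
    hosts => ["<serverless project endpoint URL>"]  # placeholder
    api_key => "<id:api_key>"                       # API-key auth, not user/password
    data_stream => true                             # serverless expects data streams
  }
}
```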

**Known issue for Logstash to Elasticsearch Serverless.**
@@ -1,129 +0,0 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/input-plugins.html
---

# Input plugins [input-plugins]

An input plugin enables a specific source of events to be read by Logstash.

The following input plugins are available. For a list of Elastic supported plugins, please consult the [Support Matrix](https://www.elastic.co/support/matrix#show_logstash_plugins).

| | | |
| --- | --- | --- |
| Plugin | Description | Github repository |
| [azure_event_hubs](/reference/plugins-inputs-azure_event_hubs.md) | Receives events from Azure Event Hubs | [azure_event_hubs](https://github.com/logstash-plugins/logstash-input-azure_event_hubs) |
| [beats](/reference/plugins-inputs-beats.md) | Receives events from the Elastic Beats framework | [logstash-input-beats](https://github.com/logstash-plugins/logstash-input-beats) |
| [cloudwatch](/reference/plugins-inputs-cloudwatch.md) | Pulls events from the Amazon Web Services CloudWatch API | [logstash-input-cloudwatch](https://github.com/logstash-plugins/logstash-input-cloudwatch) |
| [couchdb_changes](/reference/plugins-inputs-couchdb_changes.md) | Streams events from CouchDB’s `_changes` URI | [logstash-input-couchdb_changes](https://github.com/logstash-plugins/logstash-input-couchdb_changes) |
| [dead_letter_queue](/reference/plugins-inputs-dead_letter_queue.md) | Reads events from Logstash’s dead letter queue | [logstash-input-dead_letter_queue](https://github.com/logstash-plugins/logstash-input-dead_letter_queue) |
| [elastic_agent](/reference/plugins-inputs-elastic_agent.md) | Receives events from the Elastic Agent framework | [logstash-input-beats](https://github.com/logstash-plugins/logstash-input-beats) (shared) |
| [elastic_serverless_forwarder](/reference/plugins-inputs-elastic_serverless_forwarder.md) | Accepts events from Elastic Serverless Forwarder | [logstash-input-elastic_serverless_forwarder](https://github.com/logstash-plugins/logstash-input-elastic_serverless_forwarder) |
| [elasticsearch](/reference/plugins-inputs-elasticsearch.md) | Reads query results from an Elasticsearch cluster | [logstash-input-elasticsearch](https://github.com/logstash-plugins/logstash-input-elasticsearch) |
| [exec](/reference/plugins-inputs-exec.md) | Captures the output of a shell command as an event | [logstash-input-exec](https://github.com/logstash-plugins/logstash-input-exec) |
| [file](/reference/plugins-inputs-file.md) | Streams events from files | [logstash-input-file](https://github.com/logstash-plugins/logstash-input-file) |
| [ganglia](/reference/plugins-inputs-ganglia.md) | Reads Ganglia packets over UDP | [logstash-input-ganglia](https://github.com/logstash-plugins/logstash-input-ganglia) |
| [gelf](/reference/plugins-inputs-gelf.md) | Reads GELF-format messages from Graylog2 as events | [logstash-input-gelf](https://github.com/logstash-plugins/logstash-input-gelf) |
| [generator](/reference/plugins-inputs-generator.md) | Generates random log events for test purposes | [logstash-input-generator](https://github.com/logstash-plugins/logstash-input-generator) |
| [github](/reference/plugins-inputs-github.md) | Reads events from a GitHub webhook | [logstash-input-github](https://github.com/logstash-plugins/logstash-input-github) |
| [google_cloud_storage](/reference/plugins-inputs-google_cloud_storage.md) | Extracts events from files in a Google Cloud Storage bucket | [logstash-input-google_cloud_storage](https://github.com/logstash-plugins/logstash-input-google_cloud_storage) |
| [google_pubsub](/reference/plugins-inputs-google_pubsub.md) | Consumes events from a Google Cloud PubSub service | [logstash-input-google_pubsub](https://github.com/logstash-plugins/logstash-input-google_pubsub) |
| [graphite](/reference/plugins-inputs-graphite.md) | Reads metrics from the `graphite` tool | [logstash-input-graphite](https://github.com/logstash-plugins/logstash-input-graphite) |
| [heartbeat](/reference/plugins-inputs-heartbeat.md) | Generates heartbeat events for testing | [logstash-input-heartbeat](https://github.com/logstash-plugins/logstash-input-heartbeat) |
| [http](/reference/plugins-inputs-http.md) | Receives events over HTTP or HTTPS | [logstash-input-http](https://github.com/logstash-plugins/logstash-input-http) |
| [http_poller](/reference/plugins-inputs-http_poller.md) | Decodes the output of an HTTP API into events | [logstash-input-http_poller](https://github.com/logstash-plugins/logstash-input-http_poller) |
| [imap](/reference/plugins-inputs-imap.md) | Reads mail from an IMAP server | [logstash-input-imap](https://github.com/logstash-plugins/logstash-input-imap) |
| [irc](/reference/plugins-inputs-irc.md) | Reads events from an IRC server | [logstash-input-irc](https://github.com/logstash-plugins/logstash-input-irc) |
| [java_generator](/reference/plugins-inputs-java_generator.md) | Generates synthetic log events | [core plugin](https://github.com/elastic/logstash/blob/master/logstash-core/src/main/java/org/logstash/plugins/inputs/Generator.java) |
| [java_stdin](/reference/plugins-inputs-java_stdin.md) | Reads events from standard input | [core plugin](https://github.com/elastic/logstash/blob/master/logstash-core/src/main/java/org/logstash/plugins/inputs/Stdin.java) |
| [jdbc](/reference/plugins-inputs-jdbc.md) | Creates events from JDBC data | [logstash-integration-jdbc](https://github.com/logstash-plugins/logstash-integration-jdbc) |
| [jms](/reference/plugins-inputs-jms.md) | Reads events from a JMS broker | [logstash-input-jms](https://github.com/logstash-plugins/logstash-input-jms) |
| [jmx](/reference/plugins-inputs-jmx.md) | Retrieves metrics from remote Java applications over JMX | [logstash-input-jmx](https://github.com/logstash-plugins/logstash-input-jmx) |
| [kafka](/reference/plugins-inputs-kafka.md) | Reads events from a Kafka topic | [logstash-integration-kafka](https://github.com/logstash-plugins/logstash-integration-kafka) |
| [kinesis](/reference/plugins-inputs-kinesis.md) | Receives events through an AWS Kinesis stream | [logstash-input-kinesis](https://github.com/logstash-plugins/logstash-input-kinesis) |
| [logstash](/reference/plugins-inputs-logstash.md) | Reads from {{ls}} output of another {{ls}} instance | [logstash-integration-logstash](https://github.com/logstash-plugins/logstash-integration-logstash) |
| [log4j](/reference/plugins-inputs-log4j.md) | Reads events over a TCP socket from a Log4j `SocketAppender` object | [logstash-input-log4j](https://github.com/logstash-plugins/logstash-input-log4j) |
| [lumberjack](/reference/plugins-inputs-lumberjack.md) | Receives events using the Lumberjack protocol | [logstash-input-lumberjack](https://github.com/logstash-plugins/logstash-input-lumberjack) |
| [meetup](/reference/plugins-inputs-meetup.md) | Periodically polls meetup.com for upcoming events | [logstash-input-meetup](https://github.com/logstash-plugins/logstash-input-meetup) |
| [pipe](/reference/plugins-inputs-pipe.md) | Streams events from a long-running command pipe | [logstash-input-pipe](https://github.com/logstash-plugins/logstash-input-pipe) |
| [puppet_facter](/reference/plugins-inputs-puppet_facter.md) | Receives facts from a Puppet server | [logstash-input-puppet_facter](https://github.com/logstash-plugins/logstash-input-puppet_facter) |
| [rabbitmq](/reference/plugins-inputs-rabbitmq.md) | Pulls events from a RabbitMQ exchange | [logstash-integration-rabbitmq](https://github.com/logstash-plugins/logstash-integration-rabbitmq) |
| [redis](/reference/plugins-inputs-redis.md) | Reads events from a Redis instance | [logstash-input-redis](https://github.com/logstash-plugins/logstash-input-redis) |
| [relp](/reference/plugins-inputs-relp.md) | Receives RELP events over a TCP socket | [logstash-input-relp](https://github.com/logstash-plugins/logstash-input-relp) |
| [rss](/reference/plugins-inputs-rss.md) | Reads RSS/Atom feed entries as events | [logstash-input-rss](https://github.com/logstash-plugins/logstash-input-rss) |
| [s3](/reference/plugins-inputs-s3.md) | Streams events from files in an S3 bucket | [logstash-input-s3](https://github.com/logstash-plugins/logstash-input-s3) |
| [s3-sns-sqs](/reference/plugins-inputs-s3-sns-sqs.md) | Reads logs from AWS S3 buckets using SQS | [logstash-input-s3-sns-sqs](https://github.com/cherweg/logstash-input-s3-sns-sqs) |
| [salesforce](/reference/plugins-inputs-salesforce.md) | Creates events based on a Salesforce SOQL query | [logstash-input-salesforce](https://github.com/logstash-plugins/logstash-input-salesforce) |
| [snmp](/reference/plugins-inputs-snmp.md) | Polls network devices using Simple Network Management Protocol (SNMP) | [logstash-integration-snmp](https://github.com/logstash-plugins/logstash-integration-snmp) |
| [snmptrap](/reference/plugins-inputs-snmptrap.md) | Creates events based on SNMP trap messages | [logstash-integration-snmp](https://github.com/logstash-plugins/logstash-integration-snmp) |
| [sqlite](/reference/plugins-inputs-sqlite.md) | Creates events based on rows in an SQLite database | [logstash-input-sqlite](https://github.com/logstash-plugins/logstash-input-sqlite) |
| [sqs](/reference/plugins-inputs-sqs.md) | Pulls events from an Amazon Web Services Simple Queue Service queue | [logstash-input-sqs](https://github.com/logstash-plugins/logstash-input-sqs) |
| [stdin](/reference/plugins-inputs-stdin.md) | Reads events from standard input | [logstash-input-stdin](https://github.com/logstash-plugins/logstash-input-stdin) |
| [stomp](/reference/plugins-inputs-stomp.md) | Creates events received with the STOMP protocol | [logstash-input-stomp](https://github.com/logstash-plugins/logstash-input-stomp) |
| [syslog](/reference/plugins-inputs-syslog.md) | Reads syslog messages as events | [logstash-input-syslog](https://github.com/logstash-plugins/logstash-input-syslog) |
| [tcp](/reference/plugins-inputs-tcp.md) | Reads events from a TCP socket | [logstash-input-tcp](https://github.com/logstash-plugins/logstash-input-tcp) |
| [twitter](/reference/plugins-inputs-twitter.md) | Reads events from the Twitter Streaming API | [logstash-input-twitter](https://github.com/logstash-plugins/logstash-input-twitter) |
| [udp](/reference/plugins-inputs-udp.md) | Reads events over UDP | [logstash-input-udp](https://github.com/logstash-plugins/logstash-input-udp) |
| [unix](/reference/plugins-inputs-unix.md) | Reads events over a UNIX socket | [logstash-input-unix](https://github.com/logstash-plugins/logstash-input-unix) |
| [varnishlog](/reference/plugins-inputs-varnishlog.md) | Reads from the `varnish` cache shared memory log | [logstash-input-varnishlog](https://github.com/logstash-plugins/logstash-input-varnishlog) |
| [websocket](/reference/plugins-inputs-websocket.md) | Reads events from a websocket | [logstash-input-websocket](https://github.com/logstash-plugins/logstash-input-websocket) |
| [wmi](/reference/plugins-inputs-wmi.md) | Creates events based on the results of a WMI query | [logstash-input-wmi](https://github.com/logstash-plugins/logstash-input-wmi) |
| [xmpp](/reference/plugins-inputs-xmpp.md) | Receives events over the XMPP/Jabber protocol | [logstash-input-xmpp](https://github.com/logstash-plugins/logstash-input-xmpp) |
@@ -1,27 +0,0 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugin-integrations.html
---

# Integration plugins [plugin-integrations]

Integration plugins combine related plugins—inputs, outputs, and sometimes filters and codecs—into one package.

| | | |
| --- | --- | --- |
| Integration Plugin | Description | Github repository |
| [aws](/reference/plugins-integrations-aws.md) | Plugins for use with Amazon Web Services (AWS). | [logstash-integration-aws](https://github.com/logstash-plugins/logstash-integration-aws) |
| [elastic_enterprise_search (deprecated)](/reference/plugins-integrations-elastic_enterprise_search.md) | [deprecated at {{stack}} version 9.0.0 and plugin version 3.0.1] Plugins for use with Elastic Enterprise Search. | [logstash-integration-elastic_enterprise_search](https://github.com/logstash-plugins/logstash-integration-elastic_enterprise_search) |
| [jdbc](/reference/plugins-integrations-jdbc.md) | Plugins for use with databases that provide JDBC drivers. | [logstash-integration-jdbc](https://github.com/logstash-plugins/logstash-integration-jdbc) |
| [kafka](/reference/plugins-integrations-kafka.md) | Plugins for use with the Kafka distributed streaming platform. | [logstash-integration-kafka](https://github.com/logstash-plugins/logstash-integration-kafka) |
| [logstash](/reference/plugins-integrations-logstash.md) | Plugins to enable {{ls}}-to-{{ls}} communication. | [logstash-integration-logstash](https://github.com/logstash-plugins/logstash-integration-logstash) |
| [rabbitmq](/reference/plugins-integrations-rabbitmq.md) | Plugins for processing events to or from a RabbitMQ broker. | [logstash-integration-rabbitmq](https://github.com/logstash-plugins/logstash-integration-rabbitmq) |
| [snmp](/reference/plugins-integrations-snmp.md) | Plugins for polling devices using Simple Network Management Protocol (SNMP) or creating events from SNMPtrap messages. | [logstash-integration-snmp](https://github.com/logstash-plugins/logstash-integration-snmp) |
@@ -87,7 +87,7 @@ Performance should not be noticeably affected if you switch between `direct` and

### Memory sizing [memory-size-calculation]

Total JVM memory allocation must be estimated and is controlled indirectly using Java heap and direct memory settings. By default, a JVM’s off-heap direct memory limit is the same as the heap size. Check out [beats input memory usage](/reference/plugins-inputs-beats.md#plugins-inputs-beats-memory). Consider setting `-XX:MaxDirectMemorySize` to half of the heap size or any value that can accommodate the load you expect these plugins to handle.
Total JVM memory allocation must be estimated and is controlled indirectly using Java heap and direct memory settings. By default, a JVM’s off-heap direct memory limit is the same as the heap size. Check out [beats input memory usage](logstash-docs-md://lsr/plugins-inputs-beats.md#plugins-inputs-beats-memory). Consider setting `-XX:MaxDirectMemorySize` to half of the heap size or any value that can accommodate the load you expect these plugins to handle.

As you make your capacity calculations, keep in mind that the JVM can’t consume the total amount of the host’s memory available, as the Operating System and other processes will require memory too.
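
As an illustration of that guidance, a host that gives {{ls}} a 4g heap might cap direct memory at half that in `config/jvm.options`. The sizes here are placeholder values, not recommendations:

```
-Xms4g
-Xmx4g
-XX:MaxDirectMemorySize=2g
```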
@@ -21,7 +21,7 @@ These plugins can help you enrich data with additional info, such as GeoIP and u
## Lookup plugins [lookup-plugins]

$$$dns-def$$$dns filter
: The [dns filter plugin](/reference/plugins-filters-dns.md) performs a standard or reverse DNS lookup.
: The [dns filter plugin](logstash-docs-md://lsr/plugins-filters-dns.md) performs a standard or reverse DNS lookup.

The following config performs a reverse lookup on the address in the `source_host` field and replaces it with the domain name:
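
A minimal sketch of such a config, using the filter's `reverse` and `action` options:

```ruby
filter {
  dns {
    # Reverse-resolve the IP in source_host and replace it with the hostname
    reverse => [ "source_host" ]
    action => "replace"
  }
}
```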
@@ -36,7 +36,7 @@ $$$dns-def$$$dns filter

$$$es-def$$$elasticsearch filter
: The [elasticsearch filter](/reference/plugins-filters-elasticsearch.md) copies fields from previous log events in Elasticsearch to current events.
: The [elasticsearch filter](logstash-docs-md://lsr/plugins-filters-elasticsearch.md) copies fields from previous log events in Elasticsearch to current events.

The following config shows a complete example of how this filter might be used. Whenever Logstash receives an "end" event, it uses this Elasticsearch filter to find the matching "start" event based on some operation identifier. Then it copies the `@timestamp` field from the "start" event into a new field on the "end" event. Finally, using a combination of the date filter and the ruby filter, the code in the example calculates the time duration in hours between the two events.
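
A condensed sketch of that pattern follows. The event fields (`type`, `opid`) and host name are illustrative, and the `query` and `fields` options are the plugin's standard lookup controls:

```ruby
filter {
  if [type] == "end" {
    # Look up the matching "start" event and copy its @timestamp into [started]
    elasticsearch {
      hosts => ["es-server"]
      query => "type:start AND operation:%{[opid]}"
      fields => { "@timestamp" => "started" }
    }
    # Parse the copied timestamp, then compute the duration in hours
    date {
      match => ["[started]", "ISO8601"]
      target => "[started]"
    }
    ruby {
      code => 'event.set("duration_hrs", (event.get("@timestamp") - event.get("started")) / 3600) rescue nil'
    }
  }
}
```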
@@ -59,7 +59,7 @@ $$$es-def$$$elasticsearch filter

$$$geoip-def$$$geoip filter
: The [geoip filter](/reference/plugins-filters-geoip.md) adds geographical information about the location of IP addresses. For example:
: The [geoip filter](logstash-docs-md://lsr/plugins-filters-geoip.md) adds geographical information about the location of IP addresses. For example:

```json
filter {
@@ -81,10 +81,10 @@ $$$geoip-def$$$geoip filter

$$$http-def$$$http filter
: The [http filter](/reference/plugins-filters-http.md) integrates with external web services/REST APIs, and enables lookup enrichment against any HTTP service or endpoint. This plugin is well suited for many enrichment use cases, such as social APIs, sentiment APIs, security feed APIs, and business service APIs.
: The [http filter](logstash-docs-md://lsr/plugins-filters-http.md) integrates with external web services/REST APIs, and enables lookup enrichment against any HTTP service or endpoint. This plugin is well suited for many enrichment use cases, such as social APIs, sentiment APIs, security feed APIs, and business service APIs.
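
A minimal sketch of such a lookup. The endpoint URL is hypothetical, and `target_body` names the field that receives the response:

```ruby
filter {
  http {
    url => "https://api.example.com/enrich"  # hypothetical enrichment endpoint
    verb => "GET"
    target_body => "[enrichment]"
  }
}
```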

$$$jdbc-static-def$$$jdbc_static filter
: The [jdbc_static filter](/reference/plugins-filters-jdbc_static.md) enriches events with data pre-loaded from a remote database.
: The [jdbc_static filter](logstash-docs-md://lsr/plugins-filters-jdbc_static.md) enriches events with data pre-loaded from a remote database.

The following example fetches data from a remote database, caches it in a local database, and uses lookups to enrich events with data cached in the local database.
@@ -158,7 +158,7 @@ $$$jdbc-static-def$$$jdbc_static filter

$$$jdbc-stream-def$$$jdbc_streaming filter
: The [jdbc_streaming filter](/reference/plugins-filters-jdbc_streaming.md) enriches events with database data.
: The [jdbc_streaming filter](logstash-docs-md://lsr/plugins-filters-jdbc_streaming.md) enriches events with database data.

The following example executes a SQL query and stores the result set in a field called `country_details`:
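
A sketch of that lookup; the driver path, connection string, credentials, and table are placeholders:

```ruby
filter {
  jdbc_streaming {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"  # placeholder path
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_user => "me"
    jdbc_password => "secret"
    # :code is bound from the event's country_code field at lookup time
    statement => "select * from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
  }
}
```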
@@ -179,10 +179,10 @@ $$$jdbc-stream-def$$$jdbc_streaming filter

$$$memcached-def$$$memcached filter
: The [memcached filter](/reference/plugins-filters-memcached.md) enables key/value lookup enrichment against a Memcached object caching system. It supports both read (GET) and write (SET) operations. It is a notable addition for security analytics use cases.
: The [memcached filter](logstash-docs-md://lsr/plugins-filters-memcached.md) enables key/value lookup enrichment against a Memcached object caching system. It supports both read (GET) and write (SET) operations. It is a notable addition for security analytics use cases.

$$$translate-def$$$translate filter
: The [translate filter](/reference/plugins-filters-translate.md) replaces field contents based on replacement values specified in a hash or file. Currently supports these file types: YAML, JSON, and CSV.
: The [translate filter](logstash-docs-md://lsr/plugins-filters-translate.md) replaces field contents based on replacement values specified in a hash or file. Currently supports these file types: YAML, JSON, and CSV.

The following example takes the value of the `response_code` field, translates it to a description based on the values specified in the dictionary, and then removes the `response_code` field from the event:
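
A sketch of that translation; the dictionary entries and target field name are illustrative:

```ruby
filter {
  translate {
    source => "response_code"
    target => "http_response"
    dictionary => {
      "200" => "OK"
      "403" => "Forbidden"
      "404" => "Not Found"
    }
    remove_field => "response_code"  # drop the original field once translated
  }
}
```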
@@ -204,7 +204,7 @@ $$$translate-def$$$translate filter

$$$useragent-def$$$useragent filter
: The [useragent filter](/reference/plugins-filters-useragent.md) parses user agent strings into fields.
: The [useragent filter](logstash-docs-md://lsr/plugins-filters-useragent.md) parses user agent strings into fields.

The following example takes the user agent string in the `agent` field, parses it into user agent fields, and adds the user agent fields to a new field called `user_agent`. It also removes the original `agent` field:
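
A sketch of that config:

```ruby
filter {
  useragent {
    source => "agent"
    target => "user_agent"
    remove_field => "agent"  # applied only after a successful parse
  }
}
```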
@@ -5,7 +5,7 @@ mapped_pages:

# Logstash-to-Logstash: HTTP output to HTTP input [ls-to-ls-http]

HTTP output to HTTP input is an alternative to the Lumberjack output to Beats input approach for Logstash-to-Logstash communication. This approach relies on the use of [http output](/reference/plugins-outputs-http.md) to [http input](/reference/plugins-inputs-http.md) plugins.
HTTP output to HTTP input is an alternative to the Lumberjack output to Beats input approach for Logstash-to-Logstash communication. This approach relies on the use of [http output](logstash-docs-md://lsr/plugins-outputs-http.md) to [http input](logstash-docs-md://lsr/plugins-inputs-http.md) plugins.

::::{note}
{{ls}}-to-{{ls}} using HTTP input/output plugins is now being deprecated in favor of [Logstash-to-Logstash: Output to Input](/reference/ls-to-ls-native.md).
@@ -26,7 +26,7 @@ Monitoring {{ls}} with legacy collection uses these components:
* [Collectors](#logstash-monitoring-collectors-legacy)
* [Output](#logstash-monitoring-output-legacy)

These pieces live outside of the default Logstash pipeline in a dedicated monitoring pipeline. This configuration ensures that all data and processing has a minimal impact on ordinary Logstash processing. Existing Logstash features, such as the [`elasticsearch` output](/reference/plugins-outputs-elasticsearch.md), can be reused to benefit from its retry policies.
These pieces live outside of the default Logstash pipeline in a dedicated monitoring pipeline. This configuration ensures that all data and processing has a minimal impact on ordinary Logstash processing. Existing Logstash features, such as the [`elasticsearch` output](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md), can be reused to benefit from its retry policies.

::::{note}
The `elasticsearch` output that is used for monitoring {{ls}} is configured exclusively through settings found in `logstash.yml`. It is not configured by using anything from the Logstash configurations that might also be using their own separate `elasticsearch` outputs.
@@ -31,9 +31,9 @@ monitoring.enabled: false

## Determine target Elasticsearch cluster [define-cluster__uuid]

You will need to determine which Elasticsearch cluster {{ls}} will bind metrics to in the Stack Monitoring UI by specifying the `cluster_uuid`. When pipelines contain [{{es}} output plugins](/reference/plugins-outputs-elasticsearch.md), the `cluster_uuid` is automatically calculated, and the metrics should be bound without any additional settings.
You will need to determine which Elasticsearch cluster {{ls}} will bind metrics to in the Stack Monitoring UI by specifying the `cluster_uuid`. When pipelines contain [{{es}} output plugins](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md), the `cluster_uuid` is automatically calculated, and the metrics should be bound without any additional settings.

To override automatic values, or if your pipeline does not contain any [{{es}} output plugins](/reference/plugins-outputs-elasticsearch.md), you can bind the metrics of {{ls}} to a specific cluster by defining the target cluster in the `monitoring.cluster_uuid` setting in the configuration file (`logstash.yml`):
To override automatic values, or if your pipeline does not contain any [{{es}} output plugins](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md), you can bind the metrics of {{ls}} to a specific cluster by defining the target cluster in the `monitoring.cluster_uuid` setting in the configuration file (`logstash.yml`):

```yaml
monitoring.cluster_uuid: PRODUCTION_ES_CLUSTER_UUID
@@ -9,19 +9,19 @@ Several use cases generate events that span multiple lines of text. In order to

Multiline event processing is complex and relies on proper event ordering. The best way to guarantee ordered log processing is to implement the processing as early in the pipeline as possible.

The [multiline](/reference/plugins-codecs-multiline.md) codec is the preferred tool for handling multiline events in the Logstash pipeline. The multiline codec merges lines from a single input using a simple set of rules.
The [multiline](logstash-docs-md://lsr/plugins-codecs-multiline.md) codec is the preferred tool for handling multiline events in the Logstash pipeline. The multiline codec merges lines from a single input using a simple set of rules.

::::{important}
If you are using a Logstash input plugin that supports multiple hosts, such as the [beats](/reference/plugins-inputs-beats.md) input plugin, you should not use the [multiline](/reference/plugins-codecs-multiline.md) codec to handle multiline events. Doing so may result in the mixing of streams and corrupted event data. In this situation, you need to handle multiline events before sending the event data to Logstash.
If you are using a Logstash input plugin that supports multiple hosts, such as the [beats](logstash-docs-md://lsr/plugins-inputs-beats.md) input plugin, you should not use the [multiline](logstash-docs-md://lsr/plugins-codecs-multiline.md) codec to handle multiline events. Doing so may result in the mixing of streams and corrupted event data. In this situation, you need to handle multiline events before sending the event data to Logstash.
::::

The most important aspects of configuring the multiline codec are the following:

* The `pattern` option specifies a regular expression. Lines that match the specified regular expression are considered either continuations of a previous line or the start of a new multiline event. You can use [grok](/reference/plugins-filters-grok.md) regular expression templates with this configuration option.
* The `pattern` option specifies a regular expression. Lines that match the specified regular expression are considered either continuations of a previous line or the start of a new multiline event. You can use [grok](logstash-docs-md://lsr/plugins-filters-grok.md) regular expression templates with this configuration option.
* The `what` option takes two values: `previous` or `next`. The `previous` value specifies that lines that match the value in the `pattern` option are part of the previous line. The `next` value specifies that lines that match the value in the `pattern` option are part of the following line.
* The `negate` option applies the multiline codec to lines that *do not* match the regular expression specified in the `pattern` option.
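
For instance, these options can collapse a java-style stack trace by folding any line that begins with whitespace into the line before it, as in this minimal sketch on the stdin input:

```ruby
input {
  stdin {
    codec => multiline {
      pattern => "^\s"   # lines starting with whitespace...
      what => "previous" # ...belong to the previous line
    }
  }
}
```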

See the full documentation for the [multiline](/reference/plugins-codecs-multiline.md) codec plugin for more information on configuration options.
See the full documentation for the [multiline](logstash-docs-md://lsr/plugins-codecs-multiline.md) codec plugin for more information on configuration options.

## Examples of Multiline Codec Configuration [_examples_of_multiline_codec_configuration]
@@ -12,7 +12,7 @@ In this section, you create a Logstash pipeline that takes input from a Twitter

## Reading from a Twitter Feed [twitter-configuration]

To add a Twitter feed, you use the [`twitter`](/reference/plugins-inputs-twitter.md) input plugin. To configure the plugin, you need several pieces of information:
To add a Twitter feed, you use the [`twitter`](logstash-docs-md://lsr/plugins-inputs-twitter.md) input plugin. To configure the plugin, you need several pieces of information:

* A *consumer key*, which uniquely identifies your Twitter app.
* A *consumer secret*, which serves as the password for your Twitter app.
@@ -20,7 +20,7 @@ To add a Twitter feed, you use the [`twitter`](/reference/plugins-inputs-twitter
* An *oauth token*, which identifies the Twitter account using this app.
* An *oauth token secret*, which serves as the password of the Twitter account.

Visit [https://dev.twitter.com/apps](https://dev.twitter.com/apps) to set up a Twitter account and generate your consumer key and secret, as well as your access token and secret. See the docs for the [`twitter`](/reference/plugins-inputs-twitter.md) input plugin if you’re not sure how to generate these keys.
Visit [https://dev.twitter.com/apps](https://dev.twitter.com/apps) to set up a Twitter account and generate your consumer key and secret, as well as your access token and secret. See the docs for the [`twitter`](logstash-docs-md://lsr/plugins-inputs-twitter.md) input plugin if you’re not sure how to generate these keys.

Like you did earlier when you worked on [Parsing Logs with Logstash](/reference/advanced-pipeline.md), create a config file (called `second-pipeline.conf`) that contains the skeleton of a configuration pipeline. If you want, you can reuse the file you created earlier, but make sure you pass in the correct config file name when you run Logstash.
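
With those four credentials in hand, the `input` section of that skeleton would look something like the following sketch. All values are placeholders, and the `keywords` search term is illustrative:

```ruby
input {
  twitter {
    consumer_key => "enter_your_consumer_key"
    consumer_secret => "enter_your_secret"
    keywords => ["cloud"]
    oauth_token => "enter_your_access_token"
    oauth_token_secret => "enter_your_access_token_secret"
  }
}
```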
@@ -73,7 +73,7 @@ Configure your Logstash instance to use the Filebeat input plugin by adding the

## Writing Logstash Data to a File [logstash-file-output]

You can configure your Logstash pipeline to write data directly to a file with the [`file`](/reference/plugins-outputs-file.md) output plugin.
You can configure your Logstash pipeline to write data directly to a file with the [`file`](logstash-docs-md://lsr/plugins-outputs-file.md) output plugin.

Configure your Logstash instance to use the `file` output plugin by adding the following lines to the `output` section of the `second-pipeline.conf` file:
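
Those lines would look something like this sketch, where the path is a placeholder:

```ruby
file {
  path => "/path/to/target/file"
}
```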
@@ -1,133 +0,0 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/output-plugins.html
---

# Output plugins [output-plugins]

An output plugin sends event data to a particular destination. Outputs are the final stage in the event pipeline.

The following output plugins are available. For a list of Elastic supported plugins, please consult the [Support Matrix](https://www.elastic.co/support/matrix#show_logstash_plugins).

| | | |
| --- | --- | --- |
| Plugin | Description | Github repository |
| [app_search (deprecated)](/reference/plugins-outputs-elastic_app_search.md) | [deprecated at {{stack}} version 9.0.0 and plugin version 3.0.1] Sends events to Elastic App Search | [logstash-integration-elastic_enterprise_search](https://github.com/logstash-plugins/logstash-output-elastic_app_search) |
| [boundary](/reference/plugins-outputs-boundary.md) | Sends annotations to Boundary based on Logstash events | [logstash-output-boundary](https://github.com/logstash-plugins/logstash-output-boundary) |
| [circonus](/reference/plugins-outputs-circonus.md) | Sends annotations to Circonus based on Logstash events | [logstash-output-circonus](https://github.com/logstash-plugins/logstash-output-circonus) |
| [cloudwatch](/reference/plugins-outputs-cloudwatch.md) | Aggregates and sends metric data to AWS CloudWatch | [logstash-output-cloudwatch](https://github.com/logstash-plugins/logstash-output-cloudwatch) |
| [csv](/reference/plugins-outputs-csv.md) | Writes events to disk in a delimited format | [logstash-output-csv](https://github.com/logstash-plugins/logstash-output-csv) |
| [datadog](/reference/plugins-outputs-datadog.md) | Sends events to DataDogHQ based on Logstash events | [logstash-output-datadog](https://github.com/logstash-plugins/logstash-output-datadog) |
| [datadog_metrics](/reference/plugins-outputs-datadog_metrics.md) | Sends metrics to DataDogHQ based on Logstash events | [logstash-output-datadog_metrics](https://github.com/logstash-plugins/logstash-output-datadog_metrics) |
| [dynatrace](/reference/plugins-outputs-dynatrace.md) | Sends events to Dynatrace based on Logstash events | [logstash-output-dynatrace](https://github.com/dynatrace-oss/logstash-output-dynatrace) |
| [elastic_app_search (deprecated)](/reference/plugins-outputs-elastic_app_search.md) | [deprecated at {{stack}} version 9.0.0 and plugin version 3.0.1] Sends events to the [Elastic App Search](https://www.elastic.co/app-search/) solution | [logstash-integration-elastic_enterprise_search](https://github.com/logstash-plugins/logstash-output-elastic_app_search) |
| [elastic_workplace_search](/reference/plugins-outputs-elastic_workplace_search.md) | Sends events to the [Elastic Workplace Search](https://www.elastic.co/enterprise-search) solution | [logstash-integration-elastic_enterprise_search](https://github.com/logstash-plugins/logstash-output-elastic_app_search) |
| [elasticsearch](/reference/plugins-outputs-elasticsearch.md) | Stores logs in Elasticsearch | [logstash-output-elasticsearch](https://github.com/logstash-plugins/logstash-output-elasticsearch) |
| [email](/reference/plugins-outputs-email.md) | Sends email to a specified address when output is received | [logstash-output-email](https://github.com/logstash-plugins/logstash-output-email) |
| [exec](/reference/plugins-outputs-exec.md) | Runs a command for a matching event | [logstash-output-exec](https://github.com/logstash-plugins/logstash-output-exec) |
| [file](/reference/plugins-outputs-file.md) | Writes events to files on disk | [logstash-output-file](https://github.com/logstash-plugins/logstash-output-file) |
| [ganglia](/reference/plugins-outputs-ganglia.md) | Writes metrics to Ganglia’s `gmond` | [logstash-output-ganglia](https://github.com/logstash-plugins/logstash-output-ganglia) |
| [gelf](/reference/plugins-outputs-gelf.md) | Generates GELF formatted output for Graylog2 | [logstash-output-gelf](https://github.com/logstash-plugins/logstash-output-gelf) |
| [google_bigquery](/reference/plugins-outputs-google_bigquery.md) | Writes events to Google BigQuery | [logstash-output-google_bigquery](https://github.com/logstash-plugins/logstash-output-google_bigquery) |
| [google_cloud_storage](/reference/plugins-outputs-google_cloud_storage.md) | Uploads log events to Google Cloud Storage | [logstash-output-google_cloud_storage](https://github.com/logstash-plugins/logstash-output-google_cloud_storage) |
| [google_pubsub](/reference/plugins-outputs-google_pubsub.md) | Uploads log events to Google Cloud Pubsub | [logstash-output-google_pubsub](https://github.com/logstash-plugins/logstash-output-google_pubsub) |
| [graphite](/reference/plugins-outputs-graphite.md) | Writes metrics to Graphite | [logstash-output-graphite](https://github.com/logstash-plugins/logstash-output-graphite) |
| [graphtastic](/reference/plugins-outputs-graphtastic.md) | Sends metric data on Windows | [logstash-output-graphtastic](https://github.com/logstash-plugins/logstash-output-graphtastic) |
| [http](/reference/plugins-outputs-http.md) | Sends events to a generic HTTP or HTTPS endpoint | [logstash-output-http](https://github.com/logstash-plugins/logstash-output-http) |
| [influxdb](/reference/plugins-outputs-influxdb.md) | Writes metrics to InfluxDB | [logstash-output-influxdb](https://github.com/logstash-plugins/logstash-output-influxdb) |
| [irc](/reference/plugins-outputs-irc.md) | Writes events to IRC | [logstash-output-irc](https://github.com/logstash-plugins/logstash-output-irc) |
| [java_stdout](/reference/plugins-outputs-java_stdout.md) | Prints events to the STDOUT of the shell | [core plugin](https://github.com/elastic/logstash/blob/master/logstash-core/src/main/java/org/logstash/plugins/outputs/Stdout.java) |
| [juggernaut](/reference/plugins-outputs-juggernaut.md) | Pushes messages to the Juggernaut websockets server | [logstash-output-juggernaut](https://github.com/logstash-plugins/logstash-output-juggernaut) |
| [kafka](/reference/plugins-outputs-kafka.md) | Writes events to a Kafka topic | [logstash-integration-kafka](https://github.com/logstash-plugins/logstash-integration-kafka) |
| [librato](/reference/plugins-outputs-librato.md) | Sends metrics, annotations, and alerts to Librato based on Logstash events | [logstash-output-librato](https://github.com/logstash-plugins/logstash-output-librato) |
| [loggly](/reference/plugins-outputs-loggly.md) | Ships logs to Loggly | [logstash-output-loggly](https://github.com/logstash-plugins/logstash-output-loggly) |
| [logstash](/reference/plugins-outputs-logstash.md) | Ships data to {{ls}} input on another {{ls}} instance | [logstash-integration-logstash](https://github.com/logstash-plugins/logstash-integration-logstash) |
| [lumberjack](/reference/plugins-outputs-lumberjack.md) | Sends events using the `lumberjack` protocol | [logstash-output-lumberjack](https://github.com/logstash-plugins/logstash-output-lumberjack) |
| [metriccatcher](/reference/plugins-outputs-metriccatcher.md) | Writes metrics to MetricCatcher | [logstash-output-metriccatcher](https://github.com/logstash-plugins/logstash-output-metriccatcher) |
| [mongodb](/reference/plugins-outputs-mongodb.md) | Writes events to MongoDB | [logstash-output-mongodb](https://github.com/logstash-plugins/logstash-output-mongodb) |
| [nagios](/reference/plugins-outputs-nagios.md) | Sends passive check results to Nagios | [logstash-output-nagios](https://github.com/logstash-plugins/logstash-output-nagios) |
| [nagios_nsca](/reference/plugins-outputs-nagios_nsca.md) | Sends passive check results to Nagios using the NSCA protocol | [logstash-output-nagios_nsca](https://github.com/logstash-plugins/logstash-output-nagios_nsca) |
| [opentsdb](/reference/plugins-outputs-opentsdb.md) | Writes metrics to OpenTSDB | [logstash-output-opentsdb](https://github.com/logstash-plugins/logstash-output-opentsdb) |
| [pagerduty](/reference/plugins-outputs-pagerduty.md) | Sends notifications based on preconfigured services and escalation policies | [logstash-output-pagerduty](https://github.com/logstash-plugins/logstash-output-pagerduty) |
| [pipe](/reference/plugins-outputs-pipe.md) | Pipes events to another program’s standard input | [logstash-output-pipe](https://github.com/logstash-plugins/logstash-output-pipe) |
| [rabbitmq](/reference/plugins-outputs-rabbitmq.md) | Pushes events to a RabbitMQ exchange | [logstash-integration-rabbitmq](https://github.com/logstash-plugins/logstash-integration-rabbitmq) |
| [redis](/reference/plugins-outputs-redis.md) | Sends events to a Redis queue using the `RPUSH` command | [logstash-output-redis](https://github.com/logstash-plugins/logstash-output-redis) |
| [redmine](/reference/plugins-outputs-redmine.md) | Creates tickets using the Redmine API | [logstash-output-redmine](https://github.com/logstash-plugins/logstash-output-redmine) |
| [riak](/reference/plugins-outputs-riak.md) | Writes events to the Riak distributed key/value store | [logstash-output-riak](https://github.com/logstash-plugins/logstash-output-riak) |
| [riemann](/reference/plugins-outputs-riemann.md) | Sends metrics to Riemann | [logstash-output-riemann](https://github.com/logstash-plugins/logstash-output-riemann) |
| [s3](/reference/plugins-outputs-s3.md) | Sends Logstash events to the Amazon Simple Storage Service | [logstash-output-s3](https://github.com/logstash-plugins/logstash-output-s3) |
| [sink](/reference/plugins-outputs-sink.md) | Discards any events received | [core plugin](https://github.com/elastic/logstash/blob/master/logstash-core/src/main/java/org/logstash/plugins/outputs/Sink.java) |
| [sns](/reference/plugins-outputs-sns.md) | Sends events to Amazon’s Simple Notification Service | [logstash-output-sns](https://github.com/logstash-plugins/logstash-output-sns) |
| [solr_http](/reference/plugins-outputs-solr_http.md) | Stores and indexes logs in Solr | [logstash-output-solr_http](https://github.com/logstash-plugins/logstash-output-solr_http) |
| [sqs](/reference/plugins-outputs-sqs.md) | Pushes events to an Amazon Web Services Simple Queue Service queue | [logstash-output-sqs](https://github.com/logstash-plugins/logstash-output-sqs) |
| [statsd](/reference/plugins-outputs-statsd.md) | Sends metrics using the `statsd` network daemon | [logstash-output-statsd](https://github.com/logstash-plugins/logstash-output-statsd) |
| [stdout](/reference/plugins-outputs-stdout.md) | Prints events to the standard output | [logstash-output-stdout](https://github.com/logstash-plugins/logstash-output-stdout) |
| [stomp](/reference/plugins-outputs-stomp.md) | Writes events using the STOMP protocol | [logstash-output-stomp](https://github.com/logstash-plugins/logstash-output-stomp) |
| [syslog](/reference/plugins-outputs-syslog.md) | Sends events to a `syslog` server | [logstash-output-syslog](https://github.com/logstash-plugins/logstash-output-syslog) |
| [tcp](/reference/plugins-outputs-tcp.md) | Writes events over a TCP socket | [logstash-output-tcp](https://github.com/logstash-plugins/logstash-output-tcp) |
| [timber](/reference/plugins-outputs-timber.md) | Sends events to the Timber.io logging service | [logstash-output-timber](https://github.com/logstash-plugins/logstash-output-timber) |
| [udp](/reference/plugins-outputs-udp.md) | Sends events over UDP | [logstash-output-udp](https://github.com/logstash-plugins/logstash-output-udp) |
| [webhdfs](/reference/plugins-outputs-webhdfs.md) | Sends Logstash events to HDFS using the `webhdfs` REST API | [logstash-output-webhdfs](https://github.com/logstash-plugins/logstash-output-webhdfs) |
| [websocket](/reference/plugins-outputs-websocket.md) | Publishes messages to a websocket | [logstash-output-websocket](https://github.com/logstash-plugins/logstash-output-websocket) |
| [workplace_search (deprecated)](/reference/plugins-outputs-elastic_workplace_search.md) | [deprecated at {{stack}} version 9.0.0 and plugin version 3.0.1] Sends events to Elastic Workplace Search | [logstash-integration-elastic_enterprise_search](https://github.com/logstash-plugins/logstash-output-elastic_app_search) |
| [xmpp](/reference/plugins-outputs-xmpp.md) | Posts events over XMPP | [logstash-output-xmpp](https://github.com/logstash-plugins/logstash-output-xmpp) |
| [zabbix](/reference/plugins-outputs-zabbix.md) | Sends events to a Zabbix server | [logstash-output-zabbix](https://github.com/logstash-plugins/logstash-output-zabbix) |
@@ -334,7 +334,7 @@ queue.max_bytes: 8gb

With these settings specified, Logstash buffers events on disk until the size of the queue reaches 8gb. When the queue is full of unACKed events, and the size limit has been reached, Logstash no longer accepts new events.
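
These queue settings live in `logstash.yml`; a minimal sketch that enables the persistent queue with that cap:

```yaml
queue.type: persisted  # the default queue type is in-memory
queue.max_bytes: 8gb
```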

Each input handles back pressure independently. For example, when the [beats](/reference/plugins-inputs-beats.md) input encounters back pressure, it no longer accepts new connections and waits until the persistent queue has space to accept more events. After the filter and output stages finish processing existing events in the queue and ACK them, Logstash automatically starts accepting new events.
Each input handles back pressure independently. For example, when the [beats](logstash-docs-md://lsr/plugins-inputs-beats.md) input encounters back pressure, it no longer accepts new connections and waits until the persistent queue has space to accept more events. After the filter and output stages finish processing existing events in the queue and ACK them, Logstash automatically starts accepting new events.

### Controlling durability [durability-persistent-queues]
@@ -13,9 +13,9 @@ List-type URI parameters will automatically expand strings that contain multiple

These plugins and options support this functionality:

* [Elasticsearch input plugin - `hosts`](/reference/plugins-inputs-elasticsearch.md#plugins-inputs-elasticsearch-hosts)
* [Elasticsearch output plugin - `hosts`](/reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-hosts)
* [Elasticsearch filter plugin - `hosts`](/reference/plugins-filters-elasticsearch.md#plugins-filters-elasticsearch-hosts)
* [Elasticsearch input plugin - `hosts`](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md#plugins-inputs-elasticsearch-hosts)
* [Elasticsearch output plugin - `hosts`](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-hosts)
* [Elasticsearch filter plugin - `hosts`](logstash-docs-md://lsr/plugins-filters-elasticsearch.md#plugins-filters-elasticsearch-hosts)

You can use this functionality to define an environment variable with multiple whitespace-delimited URIs and use it for the options above.
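
For example, with a hypothetical `ES_HOSTS` variable and addresses:

```ruby
# export ES_HOSTS="http://10.0.1.1:9200 http://10.0.1.2:9200"
output {
  elasticsearch {
    # Expands into a list of two hosts when the pipeline loads
    hosts => "${ES_HOSTS}"
  }
}
```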
@@ -1,148 +0,0 @@
---
|
||||
navigation_title: "avro"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-codecs-avro.html
|
||||
---
|
||||
|
||||
# Avro codec plugin [plugins-codecs-avro]
|
||||
|
||||
|
||||
* Plugin version: v3.4.1
|
||||
* Released on: 2023-10-16
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-codec-avro/blob/v3.4.1/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-avro-index.md).
|
||||
|
||||
## Getting help [_getting_help_172]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-avro). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_171]
|
||||
|
||||
Read serialized Avro records as Logstash events
|
||||
|
||||
This plugin is used to serialize Logstash events as Avro datums, as well as deserializing Avro datums into Logstash events.
|
||||
|
||||
|
||||
## Event Metadata and the Elastic Common Schema (ECS) [plugins-codecs-avro-ecs_metadata]
|
||||
|
||||
The plugin behaves the same regardless of ECS compatibility, except adding the original message to `[event][original]`.
|
||||
|
||||
|
||||
## Encoding [_encoding]
|
||||
|
||||
This codec is for serializing individual Logstash events as Avro datums that are Avro binary blobs. It does not encode Logstash events into an Avro file.

## Decoding [_decoding]

This codec is for deserializing individual Avro records. It is not for reading Avro files. Avro files have a unique format that must be handled upon input.

::::{admonition} Partial deserialization
:class: note

Avro format is known to support partial deserialization of arbitrary fields, providing a schema containing a subset of the schema which was used to serialize the data. This codec **doesn’t support partial deserialization of arbitrary fields**. Partial deserialization *might* work only when providing a schema which contains the first `N` fields of the schema used to serialize the data (and in the same order).

::::

## Usage [_usage_6]

Example usage with Kafka input.

```ruby
input {
  kafka {
    codec => avro {
      schema_uri => "/tmp/schema.avsc"
    }
  }
}
filter {
  ...
}
output {
  ...
}
```

## Avro Codec Configuration Options [plugins-codecs-avro-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`ecs_compatibility`](#plugins-codecs-avro-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
| [`encoding`](#plugins-codecs-avro-encoding) | [string](/reference/configuration-file-structure.md#string), one of `["binary", "base64"]` | No |
| [`schema_uri`](#plugins-codecs-avro-schema_uri) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`tag_on_failure`](#plugins-codecs-avro-tag_on_failure) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`target`](#plugins-codecs-avro-target) | [string](/reference/configuration-file-structure.md#string) | No |

### `ecs_compatibility` [plugins-codecs-avro-ecs_compatibility]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

  * `disabled`: Avro data added at root level
  * `v1`, `v8`: Elastic Common Schema compliant behavior (`[event][original]` is also added)

Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md).

### `encoding` [plugins-codecs-avro-encoding]

* Value can be any of: `binary`, `base64`
* Default value is `base64`

Set the encoding for Avro’s payload. Use `base64` (default) to indicate that this codec sends or expects to receive base64-encoded bytes.

Set this option to `binary` to indicate that this codec sends or expects to receive binary Avro data.

### `schema_uri` [plugins-codecs-avro-schema_uri]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The path to fetch the schema from. This can be an *http* or *file* scheme URI, for example:

* http - `http://example.com/schema.avsc`
* file - `/path/to/schema.avsc`

### `tag_on_failure` [plugins-codecs-avro-tag_on_failure]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Tag events with `_avroparsefailure` when decoding fails.
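
For instance, a minimal sketch that tags undecodable payloads so they can be routed separately (the kafka input is illustrative):

```ruby
input {
  kafka {
    codec => avro {
      schema_uri => "/tmp/schema.avsc"
      # Events that fail to decode are tagged `_avroparsefailure`
      # instead of being lost, so a later conditional can route them.
      tag_on_failure => true
    }
  }
}
```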

### `target` [plugins-codecs-avro-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
* This is only relevant when decoding data into an event.

Define the target field for placing the values. If this setting is not set, the Avro data will be stored at the root (top level) of the event.

**Example**

```ruby
input {
  kafka {
    codec => avro {
      schema_uri => "/tmp/schema.avsc"
      target => "[document]"
    }
  }
}
```
@ -1,524 +0,0 @@
---
navigation_title: "cef"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-cef.html
---

# Cef codec plugin [plugins-codecs-cef]

* Plugin version: v6.2.8
* Released on: 2024-10-22
* [Changelog](https://github.com/logstash-plugins/logstash-codec-cef/blob/v6.2.8/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-cef-index.md).

## Getting help [_getting_help_173]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-cef). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_172]

Implementation of a Logstash codec for the ArcSight Common Event Format (CEF). It is based on [Implementing ArcSight CEF Revision 25, September 2017](https://www.microfocus.com/documentation/arcsight/arcsight-smartconnectors/pdfdoc/common-event-format-v25/common-event-format-v25.pdf).

If this codec receives a payload from an input that is not a valid CEF message, then it produces an event with the payload as the *message* field and a *_cefparsefailure* tag.

## Compatibility with the Elastic Common Schema (ECS) [_compatibility_with_the_elastic_common_schema_ecs_3]

This plugin can be used to decode CEF events *into* the Elastic Common Schema, or to encode ECS-compatible events into CEF. It can also be used *without* ECS, encoding and decoding events using only CEF-defined field names and keys.

The ECS Compatibility mode for a specific plugin instance can be controlled by setting [`ecs_compatibility`](#plugins-codecs-cef-ecs_compatibility) when defining the codec:

```sh
input {
  tcp {
    # ...
    codec => cef {
      ecs_compatibility => v1
    }
  }
}
```

If left unspecified, the value of the `pipeline.ecs_compatibility` setting is used.

### Timestamps and ECS compatibility [_timestamps_and_ecs_compatiblity]

When decoding in ECS Compatibility Mode, timestamp-type fields are parsed and normalized to specific points on the timeline.

Because the CEF format allows ambiguous timestamp formats, some reasonable assumptions are made:

* When the timestamp does not include a year, we assume it happened in the recent past (or *very* near future to accommodate out-of-sync clocks and timezone offsets).
* When the timestamp does not include UTC-offset information, we use the event’s timezone (`dtz` or `deviceTimeZone` field), or fall through to this plugin’s [`default_timezone`](#plugins-codecs-cef-default_timezone).
* Localized timestamps are parsed using the provided [`locale`](#plugins-codecs-cef-locale).

### Field mapping [plugins-codecs-cef-field-mapping]

The header fields from each CEF payload are expanded to the following fields, depending on whether ECS is enabled.

#### Header field mapping [plugins-codecs-cef-header-field]

| ECS Disabled | ECS Field |
| --- | --- |
| `cefVersion` | `[cef][version]` |
| `deviceVendor` | `[observer][vendor]` |
| `deviceProduct` | `[observer][product]` |
| `deviceVersion` | `[observer][version]` |
| `deviceEventClassId` | `[event][code]` |
| `name` | `[cef][name]` |
| `severity` | `[event][severity]` |

When decoding CEF payloads with `ecs_compatibility => disabled`, the abbreviated CEF Keys found in extensions are expanded, and CEF Field Names are inserted at the root level of the event.

When decoding in an ECS Compatibility mode, the ECS Fields are populated from the corresponding CEF Field Names *or* CEF Keys found in the payload’s extensions.

The following is a mapping between these fields.

#### Extension field mapping [plugins-codecs-cef-ext-field]

| CEF Field Name (optional CEF Key) | ECS Field |
| --- | --- |
| `agentAddress` (`agt`) | `[agent][ip]` |
| `agentDnsDomain` | `[cef][agent][registered_domain]`<br> Multiple possible CEF fields map to this ECS Field. When decoding, the last entry encountered wins. When encoding, this field has *higher* priority. |
| `agentHostName` (`ahost`) | `[agent][name]` |
| `agentId` (`aid`) | `[agent][id]` |
| `agentMacAddress` (`amac`) | `[agent][mac]` |
| `agentNtDomain` | `[cef][agent][registered_domain]`<br> Multiple possible CEF fields map to this ECS Field. When decoding, the last entry encountered wins. When encoding, this field has *lower* priority. |
| `agentReceiptTime` (`art`) | `[event][created]`<br> This field contains a timestamp. In ECS Compatibility Mode, it is parsed to a specific point in time. |
| `agentTimeZone` (`atz`) | `[cef][agent][timezone]` |
| `agentTranslatedAddress` | `[cef][agent][nat][ip]` |
| `agentTranslatedZoneExternalID` | `[cef][agent][translated_zone][external_id]` |
| `agentTranslatedZoneURI` | `[cef][agent][translated_zone][uri]` |
| `agentType` (`at`) | `[agent][type]` |
| `agentVersion` (`av`) | `[agent][version]` |
| `agentZoneExternalID` | `[cef][agent][zone][external_id]` |
| `agentZoneURI` | `[cef][agent][zone][uri]` |
| `applicationProtocol` (`app`) | `[network][protocol]` |
| `baseEventCount` (`cnt`) | `[cef][base_event_count]` |
| `bytesIn` (`in`) | `[source][bytes]` |
| `bytesOut` (`out`) | `[destination][bytes]` |
| `categoryDeviceType` (`catdt`) | `[cef][device_type]` |
| `customerExternalID` | `[organization][id]` |
| `customerURI` | `[organization][name]` |
| `destinationAddress` (`dst`) | `[destination][ip]` |
| `destinationDnsDomain` | `[destination][registered_domain]`<br> Multiple possible CEF fields map to this ECS Field. When decoding, the last entry encountered wins. When encoding, this field has *higher* priority. |
| `destinationGeoLatitude` (`dlat`) | `[destination][geo][location][lat]` |
| `destinationGeoLongitude` (`dlong`) | `[destination][geo][location][lon]` |
| `destinationHostName` (`dhost`) | `[destination][domain]` |
| `destinationMacAddress` (`dmac`) | `[destination][mac]` |
| `destinationNtDomain` (`dntdom`) | `[destination][registered_domain]`<br> Multiple possible CEF fields map to this ECS Field. When decoding, the last entry encountered wins. When encoding, this field has *lower* priority. |
| `destinationPort` (`dpt`) | `[destination][port]` |
| `destinationProcessId` (`dpid`) | `[destination][process][pid]` |
| `destinationProcessName` (`dproc`) | `[destination][process][name]` |
| `destinationServiceName` | `[destination][service][name]` |
| `destinationTranslatedAddress` | `[destination][nat][ip]` |
| `destinationTranslatedPort` | `[destination][nat][port]` |
| `destinationTranslatedZoneExternalID` | `[cef][destination][translated_zone][external_id]` |
| `destinationTranslatedZoneURI` | `[cef][destination][translated_zone][uri]` |
| `destinationUserId` (`duid`) | `[destination][user][id]` |
| `destinationUserName` (`duser`) | `[destination][user][name]` |
| `destinationUserPrivileges` (`dpriv`) | `[destination][user][group][name]` |
| `destinationZoneExternalID` | `[cef][destination][zone][external_id]` |
| `destinationZoneURI` | `[cef][destination][zone][uri]` |
| `deviceAction` (`act`) | `[event][action]` |
| `deviceAddress` (`dvc`) | `[observer][ip]` when plugin configured with `device => observer`;<br> `[host][ip]` when plugin configured with `device => host` |
| `deviceCustomFloatingPoint1` (`cfp1`) | `[cef][device_custom_floating_point_1][value]` |
| `deviceCustomFloatingPoint1Label` (`cfp1Label`) | `[cef][device_custom_floating_point_1][label]` |
| `deviceCustomFloatingPoint2` (`cfp2`) | `[cef][device_custom_floating_point_2][value]` |
| `deviceCustomFloatingPoint2Label` (`cfp2Label`) | `[cef][device_custom_floating_point_2][label]` |
| `deviceCustomFloatingPoint3` (`cfp3`) | `[cef][device_custom_floating_point_3][value]` |
| `deviceCustomFloatingPoint3Label` (`cfp3Label`) | `[cef][device_custom_floating_point_3][label]` |
| `deviceCustomFloatingPoint4` (`cfp4`) | `[cef][device_custom_floating_point_4][value]` |
| `deviceCustomFloatingPoint4Label` (`cfp4Label`) | `[cef][device_custom_floating_point_4][label]` |
| `deviceCustomFloatingPoint5` (`cfp5`) | `[cef][device_custom_floating_point_5][value]` |
| `deviceCustomFloatingPoint5Label` (`cfp5Label`) | `[cef][device_custom_floating_point_5][label]` |
| `deviceCustomFloatingPoint6` (`cfp6`) | `[cef][device_custom_floating_point_6][value]` |
| `deviceCustomFloatingPoint6Label` (`cfp6Label`) | `[cef][device_custom_floating_point_6][label]` |
| `deviceCustomFloatingPoint7` (`cfp7`) | `[cef][device_custom_floating_point_7][value]` |
| `deviceCustomFloatingPoint7Label` (`cfp7Label`) | `[cef][device_custom_floating_point_7][label]` |
| `deviceCustomFloatingPoint8` (`cfp8`) | `[cef][device_custom_floating_point_8][value]` |
| `deviceCustomFloatingPoint8Label` (`cfp8Label`) | `[cef][device_custom_floating_point_8][label]` |
| `deviceCustomFloatingPoint9` (`cfp9`) | `[cef][device_custom_floating_point_9][value]` |
| `deviceCustomFloatingPoint9Label` (`cfp9Label`) | `[cef][device_custom_floating_point_9][label]` |
| `deviceCustomFloatingPoint10` (`cfp10`) | `[cef][device_custom_floating_point_10][value]` |
| `deviceCustomFloatingPoint10Label` (`cfp10Label`) | `[cef][device_custom_floating_point_10][label]` |
| `deviceCustomFloatingPoint11` (`cfp11`) | `[cef][device_custom_floating_point_11][value]` |
| `deviceCustomFloatingPoint11Label` (`cfp11Label`) | `[cef][device_custom_floating_point_11][label]` |
| `deviceCustomFloatingPoint12` (`cfp12`) | `[cef][device_custom_floating_point_12][value]` |
| `deviceCustomFloatingPoint12Label` (`cfp12Label`) | `[cef][device_custom_floating_point_12][label]` |
| `deviceCustomFloatingPoint13` (`cfp13`) | `[cef][device_custom_floating_point_13][value]` |
| `deviceCustomFloatingPoint13Label` (`cfp13Label`) | `[cef][device_custom_floating_point_13][label]` |
| `deviceCustomFloatingPoint14` (`cfp14`) | `[cef][device_custom_floating_point_14][value]` |
| `deviceCustomFloatingPoint14Label` (`cfp14Label`) | `[cef][device_custom_floating_point_14][label]` |
| `deviceCustomFloatingPoint15` (`cfp15`) | `[cef][device_custom_floating_point_15][value]` |
| `deviceCustomFloatingPoint15Label` (`cfp15Label`) | `[cef][device_custom_floating_point_15][label]` |
| `deviceCustomIPv6Address1` (`c6a1`) | `[cef][device_custom_ipv6_address_1][value]` |
| `deviceCustomIPv6Address1Label` (`c6a1Label`) | `[cef][device_custom_ipv6_address_1][label]` |
| `deviceCustomIPv6Address2` (`c6a2`) | `[cef][device_custom_ipv6_address_2][value]` |
| `deviceCustomIPv6Address2Label` (`c6a2Label`) | `[cef][device_custom_ipv6_address_2][label]` |
| `deviceCustomIPv6Address3` (`c6a3`) | `[cef][device_custom_ipv6_address_3][value]` |
| `deviceCustomIPv6Address3Label` (`c6a3Label`) | `[cef][device_custom_ipv6_address_3][label]` |
| `deviceCustomIPv6Address4` (`c6a4`) | `[cef][device_custom_ipv6_address_4][value]` |
| `deviceCustomIPv6Address4Label` (`c6a4Label`) | `[cef][device_custom_ipv6_address_4][label]` |
| `deviceCustomIPv6Address5` (`c6a5`) | `[cef][device_custom_ipv6_address_5][value]` |
| `deviceCustomIPv6Address5Label` (`c6a5Label`) | `[cef][device_custom_ipv6_address_5][label]` |
| `deviceCustomIPv6Address6` (`c6a6`) | `[cef][device_custom_ipv6_address_6][value]` |
| `deviceCustomIPv6Address6Label` (`c6a6Label`) | `[cef][device_custom_ipv6_address_6][label]` |
| `deviceCustomIPv6Address7` (`c6a7`) | `[cef][device_custom_ipv6_address_7][value]` |
| `deviceCustomIPv6Address7Label` (`c6a7Label`) | `[cef][device_custom_ipv6_address_7][label]` |
| `deviceCustomIPv6Address8` (`c6a8`) | `[cef][device_custom_ipv6_address_8][value]` |
| `deviceCustomIPv6Address8Label` (`c6a8Label`) | `[cef][device_custom_ipv6_address_8][label]` |
| `deviceCustomIPv6Address9` (`c6a9`) | `[cef][device_custom_ipv6_address_9][value]` |
| `deviceCustomIPv6Address9Label` (`c6a9Label`) | `[cef][device_custom_ipv6_address_9][label]` |
| `deviceCustomIPv6Address10` (`c6a10`) | `[cef][device_custom_ipv6_address_10][value]` |
| `deviceCustomIPv6Address10Label` (`c6a10Label`) | `[cef][device_custom_ipv6_address_10][label]` |
| `deviceCustomIPv6Address11` (`c6a11`) | `[cef][device_custom_ipv6_address_11][value]` |
| `deviceCustomIPv6Address11Label` (`c6a11Label`) | `[cef][device_custom_ipv6_address_11][label]` |
| `deviceCustomIPv6Address12` (`c6a12`) | `[cef][device_custom_ipv6_address_12][value]` |
| `deviceCustomIPv6Address12Label` (`c6a12Label`) | `[cef][device_custom_ipv6_address_12][label]` |
| `deviceCustomIPv6Address13` (`c6a13`) | `[cef][device_custom_ipv6_address_13][value]` |
| `deviceCustomIPv6Address13Label` (`c6a13Label`) | `[cef][device_custom_ipv6_address_13][label]` |
| `deviceCustomIPv6Address14` (`c6a14`) | `[cef][device_custom_ipv6_address_14][value]` |
| `deviceCustomIPv6Address14Label` (`c6a14Label`) | `[cef][device_custom_ipv6_address_14][label]` |
| `deviceCustomIPv6Address15` (`c6a15`) | `[cef][device_custom_ipv6_address_15][value]` |
| `deviceCustomIPv6Address15Label` (`c6a15Label`) | `[cef][device_custom_ipv6_address_15][label]` |
| `deviceCustomNumber1` (`cn1`) | `[cef][device_custom_number_1][value]` |
| `deviceCustomNumber1Label` (`cn1Label`) | `[cef][device_custom_number_1][label]` |
| `deviceCustomNumber2` (`cn2`) | `[cef][device_custom_number_2][value]` |
| `deviceCustomNumber2Label` (`cn2Label`) | `[cef][device_custom_number_2][label]` |
| `deviceCustomNumber3` (`cn3`) | `[cef][device_custom_number_3][value]` |
| `deviceCustomNumber3Label` (`cn3Label`) | `[cef][device_custom_number_3][label]` |
| `deviceCustomNumber4` (`cn4`) | `[cef][device_custom_number_4][value]` |
| `deviceCustomNumber4Label` (`cn4Label`) | `[cef][device_custom_number_4][label]` |
| `deviceCustomNumber5` (`cn5`) | `[cef][device_custom_number_5][value]` |
| `deviceCustomNumber5Label` (`cn5Label`) | `[cef][device_custom_number_5][label]` |
| `deviceCustomNumber6` (`cn6`) | `[cef][device_custom_number_6][value]` |
| `deviceCustomNumber6Label` (`cn6Label`) | `[cef][device_custom_number_6][label]` |
| `deviceCustomNumber7` (`cn7`) | `[cef][device_custom_number_7][value]` |
| `deviceCustomNumber7Label` (`cn7Label`) | `[cef][device_custom_number_7][label]` |
| `deviceCustomNumber8` (`cn8`) | `[cef][device_custom_number_8][value]` |
| `deviceCustomNumber8Label` (`cn8Label`) | `[cef][device_custom_number_8][label]` |
| `deviceCustomNumber9` (`cn9`) | `[cef][device_custom_number_9][value]` |
| `deviceCustomNumber9Label` (`cn9Label`) | `[cef][device_custom_number_9][label]` |
| `deviceCustomNumber10` (`cn10`) | `[cef][device_custom_number_10][value]` |
| `deviceCustomNumber10Label` (`cn10Label`) | `[cef][device_custom_number_10][label]` |
| `deviceCustomNumber11` (`cn11`) | `[cef][device_custom_number_11][value]` |
| `deviceCustomNumber11Label` (`cn11Label`) | `[cef][device_custom_number_11][label]` |
| `deviceCustomNumber12` (`cn12`) | `[cef][device_custom_number_12][value]` |
| `deviceCustomNumber12Label` (`cn12Label`) | `[cef][device_custom_number_12][label]` |
| `deviceCustomNumber13` (`cn13`) | `[cef][device_custom_number_13][value]` |
| `deviceCustomNumber13Label` (`cn13Label`) | `[cef][device_custom_number_13][label]` |
| `deviceCustomNumber14` (`cn14`) | `[cef][device_custom_number_14][value]` |
| `deviceCustomNumber14Label` (`cn14Label`) | `[cef][device_custom_number_14][label]` |
| `deviceCustomNumber15` (`cn15`) | `[cef][device_custom_number_15][value]` |
| `deviceCustomNumber15Label` (`cn15Label`) | `[cef][device_custom_number_15][label]` |
| `deviceCustomString1` (`cs1`) | `[cef][device_custom_string_1][value]` |
| `deviceCustomString1Label` (`cs1Label`) | `[cef][device_custom_string_1][label]` |
| `deviceCustomString2` (`cs2`) | `[cef][device_custom_string_2][value]` |
| `deviceCustomString2Label` (`cs2Label`) | `[cef][device_custom_string_2][label]` |
| `deviceCustomString3` (`cs3`) | `[cef][device_custom_string_3][value]` |
| `deviceCustomString3Label` (`cs3Label`) | `[cef][device_custom_string_3][label]` |
| `deviceCustomString4` (`cs4`) | `[cef][device_custom_string_4][value]` |
| `deviceCustomString4Label` (`cs4Label`) | `[cef][device_custom_string_4][label]` |
| `deviceCustomString5` (`cs5`) | `[cef][device_custom_string_5][value]` |
| `deviceCustomString5Label` (`cs5Label`) | `[cef][device_custom_string_5][label]` |
| `deviceCustomString6` (`cs6`) | `[cef][device_custom_string_6][value]` |
| `deviceCustomString6Label` (`cs6Label`) | `[cef][device_custom_string_6][label]` |
| `deviceCustomString7` (`cs7`) | `[cef][device_custom_string_7][value]` |
| `deviceCustomString7Label` (`cs7Label`) | `[cef][device_custom_string_7][label]` |
| `deviceCustomString8` (`cs8`) | `[cef][device_custom_string_8][value]` |
| `deviceCustomString8Label` (`cs8Label`) | `[cef][device_custom_string_8][label]` |
| `deviceCustomString9` (`cs9`) | `[cef][device_custom_string_9][value]` |
| `deviceCustomString9Label` (`cs9Label`) | `[cef][device_custom_string_9][label]` |
| `deviceCustomString10` (`cs10`) | `[cef][device_custom_string_10][value]` |
| `deviceCustomString10Label` (`cs10Label`) | `[cef][device_custom_string_10][label]` |
| `deviceCustomString11` (`cs11`) | `[cef][device_custom_string_11][value]` |
| `deviceCustomString11Label` (`cs11Label`) | `[cef][device_custom_string_11][label]` |
| `deviceCustomString12` (`cs12`) | `[cef][device_custom_string_12][value]` |
| `deviceCustomString12Label` (`cs12Label`) | `[cef][device_custom_string_12][label]` |
| `deviceCustomString13` (`cs13`) | `[cef][device_custom_string_13][value]` |
| `deviceCustomString13Label` (`cs13Label`) | `[cef][device_custom_string_13][label]` |
| `deviceCustomString14` (`cs14`) | `[cef][device_custom_string_14][value]` |
| `deviceCustomString14Label` (`cs14Label`) | `[cef][device_custom_string_14][label]` |
| `deviceCustomString15` (`cs15`) | `[cef][device_custom_string_15][value]` |
| `deviceCustomString15Label` (`cs15Label`) | `[cef][device_custom_string_15][label]` |
| `deviceDirection` | `[network][direction]` |
| `deviceDnsDomain` | `[observer][registered_domain]` when plugin configured with `device => observer`;<br> `[host][registered_domain]` when plugin configured with `device => host` |
| `deviceEventCategory` (`cat`) | `[cef][category]` |
| `deviceExternalId` | `[observer][name]` when plugin configured with `device => observer`;<br> `[host][id]` when plugin configured with `device => host` |
| `deviceFacility` | `[log][syslog][facility][code]` |
| `deviceHostName` (`dvchost`) | `[observer][hostname]` when plugin configured with `device => observer`;<br> `[host][name]` when plugin configured with `device => host` |
| `deviceInboundInterface` | `[observer][ingress][interface][name]` |
| `deviceMacAddress` (`dvcmac`) | `[observer][mac]` when plugin configured with `device => observer`;<br> `[host][mac]` when plugin configured with `device => host` |
| `deviceNtDomain` | `[cef][nt_domain]` |
| `deviceOutboundInterface` | `[observer][egress][interface][name]` |
| `devicePayloadId` | `[cef][payload_id]` |
| `deviceProcessId` (`dvcpid`) | `[process][pid]` |
| `deviceProcessName` | `[process][name]` |
| `deviceReceiptTime` (`rt`) | `@timestamp`<br> This field contains a timestamp. In ECS Compatibility Mode, it is parsed to a specific point in time. |
| `deviceTimeZone` (`dtz`) | `[event][timezone]` |
| `deviceTranslatedAddress` | `[host][nat][ip]` |
| `deviceTranslatedZoneExternalID` | `[cef][translated_zone][external_id]` |
| `deviceTranslatedZoneURI` | `[cef][translated_zone][uri]` |
| `deviceVersion` | `[observer][version]` |
| `deviceZoneExternalID` | `[cef][zone][external_id]` |
| `deviceZoneURI` | `[cef][zone][uri]` |
| `endTime` (`end`) | `[event][end]`<br> This field contains a timestamp. In ECS Compatibility Mode, it is parsed to a specific point in time. |
| `eventId` | `[event][id]` |
| `eventOutcome` (`outcome`) | `[event][outcome]` |
| `externalId` | `[cef][external_id]` |
| `fileCreateTime` | `[file][created]` |
| `fileHash` | `[file][hash]` |
| `fileId` | `[file][inode]` |
| `fileModificationTime` | `[file][mtime]`<br> This field contains a timestamp. In ECS Compatibility Mode, it is parsed to a specific point in time. |
| `fileName` (`fname`) | `[file][name]` |
| `filePath` | `[file][path]` |
| `filePermission` | `[file][group]` |
| `fileSize` (`fsize`) | `[file][size]` |
| `fileType` | `[file][extension]` |
| `managerReceiptTime` (`mrt`) | `[event][ingested]`<br> This field contains a timestamp. In ECS Compatibility Mode, it is parsed to a specific point in time. |
| `message` (`msg`) | `[message]` |
| `oldFileCreateTime` | `[cef][old_file][created]`<br> This field contains a timestamp. In ECS Compatibility Mode, it is parsed to a specific point in time. |
| `oldFileHash` | `[cef][old_file][hash]` |
| `oldFileId` | `[cef][old_file][inode]` |
| `oldFileModificationTime` | `[cef][old_file][mtime]`<br> This field contains a timestamp. In ECS Compatibility Mode, it is parsed to a specific point in time. |
| `oldFileName` | `[cef][old_file][name]` |
| `oldFilePath` | `[cef][old_file][path]` |
| `oldFilePermission` | `[cef][old_file][group]` |
| `oldFileSize` | `[cef][old_file][size]` |
| `oldFileType` | `[cef][old_file][extension]` |
| `rawEvent` | `[event][original]` |
| `Reason` (`reason`) | `[event][reason]` |
| `requestClientApplication` | `[user_agent][original]` |
| `requestContext` | `[http][request][referrer]` |
| `requestCookies` | `[cef][request][cookies]` |
| `requestMethod` | `[http][request][method]` |
| `requestUrl` (`request`) | `[url][original]` |
| `sourceAddress` (`src`) | `[source][ip]` |
| `sourceDnsDomain` | `[source][registered_domain]`<br> Multiple possible CEF fields map to this ECS Field. When decoding, the last entry encountered wins. When encoding, this field has *higher* priority. |
| `sourceGeoLatitude` (`slat`) | `[source][geo][location][lat]` |
| `sourceGeoLongitude` (`slong`) | `[source][geo][location][lon]` |
| `sourceHostName` (`shost`) | `[source][domain]` |
| `sourceMacAddress` (`smac`) | `[source][mac]` |
| `sourceNtDomain` (`sntdom`) | `[source][registered_domain]`<br> Multiple possible CEF fields map to this ECS Field. When decoding, the last entry encountered wins. When encoding, this field has *lower* priority. |
| `sourcePort` (`spt`) | `[source][port]` |
| `sourceProcessId` (`spid`) | `[source][process][pid]` |
| `sourceProcessName` (`sproc`) | `[source][process][name]` |
| `sourceServiceName` | `[source][service][name]` |
| `sourceTranslatedAddress` | `[source][nat][ip]` |
| `sourceTranslatedPort` | `[source][nat][port]` |
| `sourceTranslatedZoneExternalID` | `[cef][source][translated_zone][external_id]` |
| `sourceTranslatedZoneURI` | `[cef][source][translated_zone][uri]` |
| `sourceUserId` (`suid`) | `[source][user][id]` |
| `sourceUserName` (`suser`) | `[source][user][name]` |
| `sourceUserPrivileges` (`spriv`) | `[source][user][group][name]` |
| `sourceZoneExternalID` | `[cef][source][zone][external_id]` |
| `sourceZoneURI` | `[cef][source][zone][uri]` |
| `startTime` (`start`) | `[event][start]`<br> This field contains a timestamp. In ECS Compatibility Mode, it is parsed to a specific point in time. |
| `transportProtocol` (`proto`) | `[network][transport]` |
| `type` | `[cef][type]` |

## Cef Codec Configuration Options [plugins-codecs-cef-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`default_timezone`](#plugins-codecs-cef-default_timezone) | [string](/reference/configuration-file-structure.md#string) | No |
| [`delimiter`](#plugins-codecs-cef-delimiter) | [string](/reference/configuration-file-structure.md#string) | No |
| [`device`](#plugins-codecs-cef-device) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ecs_compatibility`](#plugins-codecs-cef-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
| [`fields`](#plugins-codecs-cef-fields) | [array](/reference/configuration-file-structure.md#array) | No |
| [`locale`](#plugins-codecs-cef-locale) | [string](/reference/configuration-file-structure.md#string) | No |
| [`name`](#plugins-codecs-cef-name) | [string](/reference/configuration-file-structure.md#string) | No |
| [`product`](#plugins-codecs-cef-product) | [string](/reference/configuration-file-structure.md#string) | No |
| [`raw_data_field`](#plugins-codecs-cef-raw_data_field) | [string](/reference/configuration-file-structure.md#string) | No |
| [`reverse_mapping`](#plugins-codecs-cef-reverse_mapping) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`severity`](#plugins-codecs-cef-severity) | [string](/reference/configuration-file-structure.md#string) | No |
| [`signature`](#plugins-codecs-cef-signature) | [string](/reference/configuration-file-structure.md#string) | No |
| [`vendor`](#plugins-codecs-cef-vendor) | [string](/reference/configuration-file-structure.md#string) | No |
| [`version`](#plugins-codecs-cef-version) | [string](/reference/configuration-file-structure.md#string) | No |

### `default_timezone` [plugins-codecs-cef-default_timezone]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

  * [Timezone names](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) (such as `Europe/Moscow`, `America/Argentina/Buenos_Aires`)
  * UTC Offsets (such as `-08:00`, `+03:00`)

* The default value is your system time zone
* This option has no effect when *encoding*.

When parsing timestamp fields in ECS mode and encountering timestamps that do not contain UTC-offset information, the `deviceTimeZone` (`dtz`) field from the CEF payload is used to interpret the given time. If the event does not include timezone information, this `default_timezone` is used instead.
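
As a minimal sketch (the offset value and the tcp input are illustrative):

```ruby
input {
  tcp {
    codec => cef {
      # Used only when neither the payload nor its `dtz`/`deviceTimeZone`
      # field carries timezone information.
      default_timezone => "+03:00"
    }
    # ...
  }
}
```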

### `delimiter` [plugins-codecs-cef-delimiter]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

If your input puts a delimiter between each CEF event, you’ll want to set this to be that delimiter.

::::{note}
Byte-stream inputs such as TCP require a delimiter to be specified; otherwise, the input can be truncated or incorrectly split.
::::

**Example**

```ruby
input {
  tcp {
    codec => cef { delimiter => "\r\n" }
    # ...
  }
}
```

This setting allows the following character sequences to have special meaning:

* `\\r` (backslash "r") - means carriage return (ASCII 0x0D)
* `\\n` (backslash "n") - means newline (ASCII 0x0A)

### `device` [plugins-codecs-cef-device]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

  * `observer`: indicates that device-specific fields represent the device used to *observe* the event.
  * `host`: indicates that device-specific fields represent the device on which the event *occurred*.

* The default value for this setting is `observer`.
* Option has no effect when [`ecs_compatibility => disabled`](#plugins-codecs-cef-ecs_compatibility).
* Option has no effect when *encoding*.

Defines a set of device-specific CEF fields as either representing the device on which an event *occurred*, or merely the device from which the event was *observed*. This causes the relevant fields to be routed to either the `host` or the `observer` top-level groupings.

If the codec handles data from a variety of sources, the ECS recommendation is to use `observer`.

### `ecs_compatibility` [plugins-codecs-cef-ecs_compatibility]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

  * `disabled`: uses CEF-defined field names in the event (e.g., `bytesIn`, `sourceAddress`)
  * `v1`: supports ECS-compatible event fields (e.g., `[source][bytes]`, `[source][ip]`)

* Default value depends on which version of Logstash is running:

  * When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
  * Otherwise, the default value is `disabled`.

Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md).

### `fields` [plugins-codecs-cef-fields]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`
* Option has no effect when *decoding*.

When this codec is used in an Output Plugin, a list of fields can be provided to be included in the CEF extensions part as key/value pairs.
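
A minimal sketch of selecting which event fields become extension key/value pairs (the tcp output, destination, and field names are illustrative assumptions):

```ruby
output {
  tcp {
    host => "siem.example.com"   # hypothetical destination
    port => 514
    codec => cef {
      # Only these event fields are emitted in the extensions part.
      fields => ["src_ip", "dst_ip", "action"]
    }
  }
}
```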

### `locale` [plugins-codecs-cef-locale]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

  * Abbreviated language_COUNTRY format (e.g., `en_GB`, `pt_BR`)
  * Valid [IETF BCP 47](https://tools.ietf.org/html/bcp47) language tag (e.g., `zh-cmn-Hans-CN`)

* The default value is your system locale
* Option has no effect when *encoding*.

When parsing timestamp fields in ECS mode and encountering timestamps in a localized format, this `locale` is used to interpret locale-specific strings such as month abbreviations.

### `name` [plugins-codecs-cef-name]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"Logstash"`
* Option has no effect when *decoding*.

When this codec is used in an Output Plugin, this option can be used to specify the value of the name field in the CEF header. The new value can include `%{{foo}}` strings to help you build a new value from other parts of the event.

### `product` [plugins-codecs-cef-product]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"Logstash"`
* Option has no effect when *decoding*.

When this codec is used in an Output Plugin, this option can be used to specify the value of the device product field in the CEF header. The new value can include `%{{foo}}` strings to help you build a new value from other parts of the event.

### `raw_data_field` [plugins-codecs-cef-raw_data_field]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Store the raw data in this field, for example `[event][original]`. Any existing value in the target field will be overwritten.
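
For example, a minimal sketch that preserves the unparsed payload alongside the decoded fields (the tcp input and port are illustrative):

```ruby
input {
  tcp {
    port => 5000   # hypothetical port
    codec => cef {
      # The raw CEF payload is copied here; any existing value
      # in this field is overwritten.
      raw_data_field => "[event][original]"
    }
  }
}
```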

### `reverse_mapping` [plugins-codecs-cef-reverse_mapping]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`
* Option has no effect when *decoding*.

Set to true to adhere to the specifications and encode using the CEF key name (short name) for the CEF field names.

### `severity` [plugins-codecs-cef-severity]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"6"`
* Option has no effect when *decoding*.

When this codec is used in an Output Plugin, this option can be used to specify the value of the severity field in the CEF header. The new value can include `%{{foo}}` strings to help you build a new value from other parts of the event.

Defined as a field of type string to allow sprintf. The value will be validated to be an integer in the range from 0 to 10 (inclusive). All invalid values will be mapped to the default of 6.
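
A minimal sketch using sprintf for the header severity (the source field name is hypothetical):

```ruby
output {
  tcp {
    host => "siem.example.com"   # hypothetical destination
    port => 514
    codec => cef {
      # Resolved per event; values outside 0..10 fall back to the default of 6.
      severity => "%{[event][severity]}"
    }
  }
}
```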

### `signature` [plugins-codecs-cef-signature]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"Logstash"`
* Option has no effect when *decoding*.

When this codec is used in an Output Plugin, this option can be used to specify the value of the signature ID field in the CEF header. The new value can include `%{{foo}}` strings to help you build a new value from other parts of the event.

### `vendor` [plugins-codecs-cef-vendor]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"Elasticsearch"`
* Option has no effect when *decoding*.

When this codec is used in an Output Plugin, this option can be used to specify the value of the device vendor field in the CEF header. The new value can include `%{{foo}}` strings to help you build a new value from other parts of the event.

### `version` [plugins-codecs-cef-version]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"1.0"`
* Option has no effect when *decoding*.

When this codec is used in an Output Plugin, this option can be used to specify the value of the device version field in the CEF header. The new value can include `%{{foo}}` strings to help you build a new value from other parts of the event.
@ -1,47 +0,0 @@
---
navigation_title: "cloudfront"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-cloudfront.html
---

# Cloudfront codec plugin [plugins-codecs-cloudfront]

* A component of the [aws integration plugin](/reference/plugins-integrations-aws.md)
* Integration version: v7.1.8
* Released on: 2024-07-26
* [Changelog](https://github.com/logstash-plugins/logstash-integration-aws/blob/v7.1.8/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-cloudfront-index.md).

## Getting help [_getting_help_174]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-integration-aws). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_173]

This codec will read CloudFront encoded content.

## Cloudfront Codec Configuration Options [plugins-codecs-cloudfront-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`charset`](#plugins-codecs-cloudfront-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |

### `charset` [plugins-codecs-cloudfront-charset]

* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
* Default value is `"UTF-8"`

The character encoding used in this codec. Examples include "UTF-8" and "CP1252".

JSON requires valid UTF-8 strings, but in some cases, software that emits JSON does so in another encoding (nxlog, for example). In weird cases like this, you can set the charset setting to the actual encoding of the text and Logstash will convert it for you.

For nxlog users, you’ll want to set this to "CP1252".
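
For instance, a minimal sketch (the s3 input and bucket name are assumptions; any input using this codec works the same way):

```ruby
input {
  s3 {
    bucket => "my-cloudfront-logs"   # hypothetical bucket
    # Decode content emitted in Windows-1252 (e.g., from nxlog).
    codec => cloudfront { charset => "CP1252" }
  }
}
```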
@ -1,41 +0,0 @@
---
navigation_title: "cloudtrail"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-cloudtrail.html
---

# Cloudtrail codec plugin [plugins-codecs-cloudtrail]

* A component of the [aws integration plugin](/reference/plugins-integrations-aws.md)
* Integration version: v7.1.8
* Released on: 2024-07-26
* [Changelog](https://github.com/logstash-plugins/logstash-integration-aws/blob/v7.1.8/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-cloudtrail-index.md).

## Getting help [_getting_help_175]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-integration-aws). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_174]

This codec processes AWS CloudTrail log content.

## Cloudtrail Codec Configuration Options [plugins-codecs-cloudtrail-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`charset`](#plugins-codecs-cloudtrail-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |

### `charset` [plugins-codecs-cloudtrail-charset]

* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
* Default value is `"UTF-8"`
@ -1,153 +0,0 @@
---
navigation_title: "collectd"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-collectd.html
---

# Collectd codec plugin [plugins-codecs-collectd]

* Plugin version: v3.1.0
* Released on: 2021-08-04
* [Changelog](https://github.com/logstash-plugins/logstash-codec-collectd/blob/v3.1.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-collectd-index.md).

## Getting help [_getting_help_176]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-collectd). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_175]

Read events from the collectd binary protocol over the network via UDP. See [https://collectd.org/wiki/index.php/Binary_protocol](https://collectd.org/wiki/index.php/Binary_protocol).

Configuration in your Logstash configuration file can be as simple as:

```ruby
input {
  udp {
    port => 25826
    buffer_size => 1452
    codec => collectd { }
  }
}
```

A sample `collectd.conf` to send to Logstash might be:

```xml
Hostname "host.example.com"
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
LoadPlugin network
<Plugin interface>
    Interface "eth0"
    IgnoreSelected false
</Plugin>
<Plugin network>
    Server "10.0.0.1" "25826"
</Plugin>
```

Be sure to replace `10.0.0.1` with the IP of your Logstash instance.

## Collectd Codec configuration options [plugins-codecs-collectd-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`authfile`](#plugins-codecs-collectd-authfile) | [string](/reference/configuration-file-structure.md#string) | No |
| [`nan_handling`](#plugins-codecs-collectd-nan_handling) | [string](/reference/configuration-file-structure.md#string), one of `["change_value", "warn", "drop"]` | No |
| [`nan_tag`](#plugins-codecs-collectd-nan_tag) | [string](/reference/configuration-file-structure.md#string) | No |
| [`nan_value`](#plugins-codecs-collectd-nan_value) | [number](/reference/configuration-file-structure.md#number) | No |
| [`prune_intervals`](#plugins-codecs-collectd-prune_intervals) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`security_level`](#plugins-codecs-collectd-security_level) | [string](/reference/configuration-file-structure.md#string), one of `["None", "Sign", "Encrypt"]` | No |
| [`target`](#plugins-codecs-collectd-target) | [string](/reference/configuration-file-structure.md#string) | No |
| [`typesdb`](#plugins-codecs-collectd-typesdb) | [array](/reference/configuration-file-structure.md#array) | No |

### `authfile` [plugins-codecs-collectd-authfile]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Path to the authentication file. This file should have the same format as the [AuthFile](http://collectd.org/documentation/manpages/collectd.conf.5.shtml#authfile_filename) in collectd. You only need to set this option if `security_level` is set to `Sign` or `Encrypt`.

### `nan_handling` [plugins-codecs-collectd-nan_handling]

* Value can be any of: `change_value`, `warn`, `drop`
* Default value is `"change_value"`

What to do when a value in the event is `NaN` (Not a Number); see the sketch after this list:

* change_value (default): Change the `NaN` to the value of the `nan_value` option and add `nan_tag` as a tag
* warn: Change the `NaN` to the value of the `nan_value` option, print a warning to the log, and add `nan_tag` as a tag
* drop: Drop the event containing the `NaN` (this only drops the single event, not the whole packet)
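
A minimal sketch of dropping NaN-bearing events (the port follows the doc's earlier example):

```ruby
input {
  udp {
    port => 25826
    codec => collectd {
      # A NaN value drops only the single affected event,
      # not the whole collectd packet.
      nan_handling => "drop"
    }
  }
}
```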

### `nan_tag` [plugins-codecs-collectd-nan_tag]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"_collectdNaN"`

The tag to add to the event if a `NaN` value was found. Set this to an empty string (`''`) if you don’t want to tag.

### `nan_value` [plugins-codecs-collectd-nan_value]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `0`

Only relevant when `nan_handling` is set to `change_value`. `NaN` values are changed to this configured value.

### `prune_intervals` [plugins-codecs-collectd-prune_intervals]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Prune interval records. Defaults to `true`.

### `security_level` [plugins-codecs-collectd-security_level]

* Value can be any of: `None`, `Sign`, `Encrypt`
* Default value is `"None"`

Security Level. Default is `None`. This setting mirrors the setting from the collectd [Network plugin](https://collectd.org/wiki/index.php/Plugin:Network).
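
A minimal sketch enabling signed packets (the authfile path is hypothetical and must match what the collectd Network plugin is configured with):

```ruby
input {
  udp {
    port => 25826
    codec => collectd {
      security_level => "Sign"
      # Same user/password format as collectd's AuthFile.
      authfile => "/etc/logstash/collectd.auth"
    }
  }
}
```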
|
||||
|
||||
|
||||
### `target` [plugins-codecs-collectd-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Define the target field for placing the decoded values. If this setting is not set, data will be stored at the root (top level) of the event.

For example, if you want data to be put under the `document` field:

```ruby
input {
  udp {
    port => 12345
    codec => collectd {
      target => "[document]"
    }
  }
}
```

### `typesdb` [plugins-codecs-collectd-typesdb]

* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

File path(s) to the collectd `types.db` to use. The last matching pattern wins if you have identical pattern names in multiple files. If no `types.db` is provided, the included `types.db` will be used (currently 5.4.0).
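
For example, a sketch layering a custom database on top of the stock one (the paths are illustrative):

```ruby
input {
  udp {
    port  => 25826
    codec => collectd {
      typesdb => ["/usr/share/collectd/types.db",
                  "/etc/collectd/custom-types.db"]   # last matching pattern wins
    }
  }
}
```
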
@ -1,178 +0,0 @@
---
navigation_title: "csv"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-csv.html
---

# Csv codec plugin [plugins-codecs-csv]

* Plugin version: v1.1.0
* Released on: 2021-07-28
* [Changelog](https://github.com/logstash-plugins/logstash-codec-csv/blob/v1.1.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-csv-index.md).

## Installation [_installation_68]

For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-codec-csv`. See [Working with plugins](/reference/working-with-plugins.md) for more details.

## Getting help [_getting_help_177]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-csv). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_176]

The csv codec takes CSV data, parses it and passes it along.

## Compatibility with the Elastic Common Schema (ECS) [plugins-codecs-csv-ecs]

The plugin behaves the same regardless of ECS compatibility, except for giving a warning when ECS is enabled and `target` isn’t set.

::::{tip}
Set the `target` option to avoid potential schema conflicts.
::::

## Csv Codec configuration options [plugins-codecs-csv-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`autodetect_column_names`](#plugins-codecs-csv-autodetect_column_names) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`autogenerate_column_names`](#plugins-codecs-csv-autogenerate_column_names) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`charset`](#plugins-codecs-csv-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |
| [`columns`](#plugins-codecs-csv-columns) | [array](/reference/configuration-file-structure.md#array) | No |
| [`convert`](#plugins-codecs-csv-convert) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`ecs_compatibility`](#plugins-codecs-csv-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
| [`include_headers`](#plugins-codecs-csv-include_headers) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`quote_char`](#plugins-codecs-csv-quote_char) | [string](/reference/configuration-file-structure.md#string) | No |
| [`separator`](#plugins-codecs-csv-separator) | [string](/reference/configuration-file-structure.md#string) | No |
| [`skip_empty_columns`](#plugins-codecs-csv-skip_empty_columns) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`target`](#plugins-codecs-csv-target) | [string](/reference/configuration-file-structure.md#string) | No |

### `autodetect_column_names` [plugins-codecs-csv-autodetect_column_names]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Define whether column names should be auto-detected from the header column or not. Defaults to `false`.

### `autogenerate_column_names` [plugins-codecs-csv-autogenerate_column_names]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Define whether column names should be autogenerated or not. Defaults to `true`. If set to `false`, columns not having a header specified will not be parsed.

### `charset` [plugins-codecs-csv-charset]
* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
* Default value is `"UTF-8"`

The character encoding used in this codec. Examples include "UTF-8" and "CP1252".

### `columns` [plugins-codecs-csv-columns]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

**When decoding:** Define a list of column names (in the order they appear in the CSV, as if it were a header line). If `columns` is not configured, or there are not enough columns specified, the default column names are "column1", "column2", etc.

**When encoding:** List of field names to include in the encoded CSV, in the order listed.
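
For example, a decoding sketch with illustrative column names:

```ruby
input {
  stdin {
    codec => csv {
      columns => ["timestamp", "user", "bytes"]   # names applied in order; extras get "columnN"
    }
  }
}
```
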
### `convert` [plugins-codecs-csv-convert]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

Define a set of datatype conversions to be applied to columns. Possible conversions are: `integer`, `float`, `date`, `date_time`, `boolean`

**Example**

```ruby
filter {
  csv {
    convert => { "column1" => "integer", "column2" => "boolean" }
  }
}
```

### `ecs_compatibility` [plugins-codecs-csv-ecs_compatibility]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

    * `disabled`: CSV data added at root level
    * `v1`,`v8`: Elastic Common Schema compliant behavior (`[event][original]` is also added)

* Default value depends on which version of Logstash is running:

    * When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
    * Otherwise, the default value is `disabled`

Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md).

### `include_headers` [plugins-codecs-csv-include_headers]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

When **encoding** in an output plugin, include headers in the encoded CSV once per codec lifecycle (not for every event). Defaults to `false`.
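
For example, an encoding sketch that writes a header line once, then one row per event (the path and column names are illustrative):

```ruby
output {
  file {
    path  => "/tmp/export.csv"
    codec => csv {
      include_headers => true
      columns         => ["timestamp", "user", "bytes"]
    }
  }
}
```
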
### `quote_char` [plugins-codecs-csv-quote_char]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"\""`

Define the character used to quote CSV fields. If this is not specified, the default is a double quote `"`. Optional.

### `separator` [plugins-codecs-csv-separator]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `","`

Define the column separator value. If this is not specified, the default is a comma `,`. Optional.
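
For example, a sketch for decoding semicolon-delimited data whose fields are quoted with single quotes (both values are illustrative):

```ruby
input {
  stdin {
    codec => csv {
      separator  => ";"
      quote_char => "'"
    }
  }
}
```
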
### `skip_empty_columns` [plugins-codecs-csv-skip_empty_columns]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Define whether empty columns should be skipped. Defaults to `false`. If set to `true`, columns containing no value will not be included.

### `target` [plugins-codecs-csv-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Define the target field for placing the row values. If this setting is not set, the CSV data will be stored at the root (top level) of the event.

For example, if you want data to be put under the `document` field:

```ruby
input {
  file {
    codec => csv {
      autodetect_column_names => true
      target => "[document]"
    }
  }
}
```

@ -1,25 +0,0 @@
---
navigation_title: "dots"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-dots.html
---

# Dots codec plugin [plugins-codecs-dots]

* Plugin version: v3.0.6
* Released on: 2017-11-07
* [Changelog](https://github.com/logstash-plugins/logstash-codec-dots/blob/v3.0.6/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-dots-index.md).

## Getting help [_getting_help_178]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-dots). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_177]

This codec generates a dot (`.`) to represent each event it processes. This is typically used with the `stdout` output to provide feedback on the terminal. It is also used to measure Logstash’s throughput with the `pv` command.
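
A minimal sketch of the typical pairing with the `stdout` output:

```ruby
output {
  stdout {
    codec => dots   # one dot per event instead of the full event body
  }
}
```
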
@ -1,56 +0,0 @@
---
navigation_title: "edn"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-edn.html
---

# Edn codec plugin [plugins-codecs-edn]

* Plugin version: v3.1.0
* Released on: 2021-08-04
* [Changelog](https://github.com/logstash-plugins/logstash-codec-edn/blob/v3.1.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-edn-index.md).

## Getting help [_getting_help_179]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-edn). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_178]

Reads and produces EDN format data.

## Edn Codec configuration options [plugins-codecs-edn-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`target`](#plugins-codecs-edn-target) | [string](/reference/configuration-file-structure.md#string) | No |

### `target` [plugins-codecs-edn-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
* The option is only relevant while decoding.

Define the target field for placing the decoded fields. If this setting is not set, data will be stored at the root (top level) of the event.

For example, if you want data to be put under the `document` field:

```ruby
input {
  tcp {
    port => 4242
    codec => edn {
      target => "[document]"
    }
  }
}
```

@ -1,56 +0,0 @@
---
navigation_title: "edn_lines"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-edn_lines.html
---

# Edn_lines codec plugin [plugins-codecs-edn_lines]

* Plugin version: v3.1.0
* Released on: 2021-08-04
* [Changelog](https://github.com/logstash-plugins/logstash-codec-edn_lines/blob/v3.1.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-edn_lines-index.md).

## Getting help [_getting_help_180]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-edn_lines). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_179]

Reads and produces newline-delimited EDN format data.

## Edn_lines Codec configuration options [plugins-codecs-edn_lines-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`target`](#plugins-codecs-edn_lines-target) | [string](/reference/configuration-file-structure.md#string) | No |

### `target` [plugins-codecs-edn_lines-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
* The option is only relevant while decoding.

Define the target field for placing the decoded fields. If this setting is not set, data will be stored at the root (top level) of the event.

For example, if you want data to be put under the `document` field:

```ruby
input {
  tcp {
    port => 4242
    codec => edn_lines {
      target => "[document]"
    }
  }
}
```

@ -1,79 +0,0 @@
---
navigation_title: "es_bulk"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-es_bulk.html
---

# Es_bulk codec plugin [plugins-codecs-es_bulk]

* Plugin version: v3.1.0
* Released on: 2021-08-19
* [Changelog](https://github.com/logstash-plugins/logstash-codec-es_bulk/blob/v3.1.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-es_bulk-index.md).

## Getting help [_getting_help_181]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-es_bulk). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_180]

This codec will decode the [Elasticsearch bulk format](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) into individual events, plus metadata into the `@metadata` field.

Encoding is not supported at this time as the Elasticsearch output submits Logstash events in bulk format.

## Codec settings in the `logstash-input-http` plugin [plugins-codecs-es_bulk-codec-settings]

The [input-http](/reference/plugins-inputs-http.md) plugin has two configuration options for codecs: `codec` and `additional_codecs`.

Values in `additional_codecs` are prioritized over those specified in the `codec` option. That is, the default `codec` is applied only if no codec for the request’s content-type is found in the `additional_codecs` setting.
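
For example, a minimal sketch that applies this codec only to requests arriving with an NDJSON content type (the port and content type are illustrative):

```ruby
input {
  http {
    port => 8080
    additional_codecs => { "application/x-ndjson" => "es_bulk" }   # content-type => codec
  }
}
```
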
## Event Metadata and the Elastic Common Schema (ECS) [plugins-codecs-es_bulk-ecs_metadata]

When ECS compatibility is disabled, the metadata is stored in the `[@metadata]` field. When ECS is enabled, the metadata is stored in the `[@metadata][codec][es_bulk]` field.

## ES Bulk Codec Configuration Options [plugins-codecs-es_bulk-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`ecs_compatibility`](#plugins-codecs-es_bulk-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
| [`target`](#plugins-codecs-es_bulk-target) | [string](/reference/configuration-file-structure.md#string) | No |

### `ecs_compatibility` [plugins-codecs-es_bulk-ecs_compatibility]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

    * `disabled`: unstructured metadata added at @metadata
    * `v1`: uses `[@metadata][codec][es_bulk]` fields

Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md).

### `target` [plugins-codecs-es_bulk-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Define the target field for placing the values. If this setting is not set, the data will be stored at the root (top level) of the event.

For example, if you want data to be put under the `document` field:

```ruby
input {
  kafka {
    codec => es_bulk {
      target => "[document]"
    }
  }
}
```

@ -1,87 +0,0 @@
---
navigation_title: "fluent"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-fluent.html
---

# Fluent codec plugin [plugins-codecs-fluent]

* Plugin version: v3.4.3
* Released on: 2024-06-25
* [Changelog](https://github.com/logstash-plugins/logstash-codec-fluent/blob/v3.4.3/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-fluent-index.md).

## Getting help [_getting_help_182]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-fluent). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_181]

This codec handles fluentd’s msgpack schema.

For example, you can receive logs from `fluent-logger-ruby` with:

```ruby
input {
  tcp {
    codec => fluent
    port => 4000
  }
}
```

And from your ruby code in your own application:

```ruby
logger = Fluent::Logger::FluentLogger.new(nil, :host => "example.log", :port => 4000)
logger.post("some_tag", { "your" => "data", "here" => "yay!" })
```

::::{note}
Fluent uses second-precision for events, so you will not see sub-second precision on events processed by this codec.
::::

## Fluent Codec configuration options [plugins-codecs-fluent-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`nanosecond_precision`](#plugins-codecs-fluent-nanosecond_precision) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`target`](#plugins-codecs-fluent-target) | [string](/reference/configuration-file-structure.md#string) | No |

### `nanosecond_precision` [plugins-codecs-fluent-nanosecond_precision]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Enables sub-second level precision while encoding events.
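
For example, a sketch that forwards events with sub-second timestamps when encoding (host and port are illustrative):

```ruby
output {
  tcp {
    host  => "127.0.0.1"
    port  => 4000
    codec => fluent {
      nanosecond_precision => true   # keep sub-second precision instead of whole seconds
    }
  }
}
```
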
### `target` [plugins-codecs-fluent-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Define the target field for placing the decoded values. If this setting is not set, data will be stored at the root (top level) of the event.

For example, if you want data to be put under the `logs` field:

```ruby
input {
  tcp {
    codec => fluent {
      target => "[logs]"
    }
    port => 4000
  }
}
```

@ -1,93 +0,0 @@
---
navigation_title: "graphite"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-graphite.html
---

# Graphite codec plugin [plugins-codecs-graphite]

* Plugin version: v3.0.6
* Released on: 2021-08-12
* [Changelog](https://github.com/logstash-plugins/logstash-codec-graphite/blob/v3.0.6/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-graphite-index.md).

## Getting help [_getting_help_183]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-graphite). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_182]

This codec will encode and decode Graphite formatted lines.

## Graphite Codec Configuration Options [plugins-codecs-graphite-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`exclude_metrics`](#plugins-codecs-graphite-exclude_metrics) | [array](/reference/configuration-file-structure.md#array) | No |
| [`fields_are_metrics`](#plugins-codecs-graphite-fields_are_metrics) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`include_metrics`](#plugins-codecs-graphite-include_metrics) | [array](/reference/configuration-file-structure.md#array) | No |
| [`metrics`](#plugins-codecs-graphite-metrics) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`metrics_format`](#plugins-codecs-graphite-metrics_format) | [string](/reference/configuration-file-structure.md#string) | No |

### `exclude_metrics` [plugins-codecs-graphite-exclude_metrics]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["%{[^}]+}"]`

Exclude regex matched metric names; by default, exclude unresolved `%{field}` strings.

### `fields_are_metrics` [plugins-codecs-graphite-fields_are_metrics]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Indicates that the event @fields should be treated as metrics and will be sent as-is to Graphite.

### `include_metrics` [plugins-codecs-graphite-include_metrics]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[".*"]`

Include only regex matched metric names.

### `metrics` [plugins-codecs-graphite-metrics]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

The metric(s) to use. This supports dynamic strings like `%{host}` for metric names and also for values. This is a hash field with key of the metric name, value of the metric value. Example:

```ruby
[ "%{host}/uptime", "%{uptime_1m}" ]
```

The value will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0).

### `metrics_format` [plugins-codecs-graphite-metrics_format]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"*"`

Defines the format of the metric string. The placeholder `*` will be replaced with the name of the actual metric. This supports dynamic strings like `%{host}`.

```ruby
metrics_format => "%{host}.foo.bar.*.sum"
```

::::{note}
If no `metrics_format` is defined, the name of the metric will be used as a fallback.
::::
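
Putting these together, a sketch that ships one metric per event to Graphite over TCP (the host and field names are illustrative; 2003 is the conventional Graphite plaintext port):

```ruby
output {
  tcp {
    host  => "graphite.example.com"
    port  => 2003
    codec => graphite {
      metrics => { "%{host}/uptime" => "%{uptime_1m}" }   # name => value, both support sprintf
    }
  }
}
```
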
@ -1,51 +0,0 @@
---
navigation_title: "gzip_lines"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-gzip_lines.html
---

# Gzip_lines codec plugin [plugins-codecs-gzip_lines]

* Plugin version: v3.0.4
* Released on: 2019-07-23
* [Changelog](https://github.com/logstash-plugins/logstash-codec-gzip_lines/blob/v3.0.4/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-gzip_lines-index.md).

## Installation [_installation_69]

For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-codec-gzip_lines`. See [Working with plugins](/reference/working-with-plugins.md) for more details.

## Getting help [_getting_help_184]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-gzip_lines). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_183]

This codec will read gzip encoded content.
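
As a sketch, one plausible pairing is an input that hands the codec the raw gzip stream, such as the `s3` input (the bucket name is illustrative):

```ruby
input {
  s3 {
    bucket => "my-log-archive"   # bucket of gzip-compressed, line-oriented logs
    codec  => gzip_lines
  }
}
```
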
## Gzip_lines Codec Configuration Options [plugins-codecs-gzip_lines-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`charset`](#plugins-codecs-gzip_lines-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |
### `charset` [plugins-codecs-gzip_lines-charset]
* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
* Default value is `"UTF-8"`

The character encoding used in this codec. Examples include "UTF-8" and "CP1252".

JSON requires valid UTF-8 strings, but in some cases, software that emits JSON does so in another encoding (nxlog, for example). In weird cases like this, you can set the `charset` setting to the actual encoding of the text and Logstash will convert it for you.

For nxlog users, you’ll want to set this to "CP1252".
@ -1,60 +0,0 @@
---
navigation_title: "java_line"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-java_line.html
---

# Java_line codec plugin [plugins-codecs-java_line]

**{{ls}} Core Plugin.** The java_line codec plugin cannot be installed or uninstalled independently of {{ls}}.

## Getting help [_getting_help_186]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash).

## Description [_description_185]

Encodes and decodes line-oriented text data.

Decoding behavior: All text data between specified delimiters will be decoded as distinct events.

Encoding behavior: Each event will be emitted with the specified trailing delimiter.

## Java_line Codec Configuration Options [plugins-codecs-java_line-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`charset`](#plugins-codecs-java_line-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |
| [`delimiter`](#plugins-codecs-java_line-delimiter) | [string](/reference/configuration-file-structure.md#string) | No |
| [`format`](#plugins-codecs-java_line-format) | [string](/reference/configuration-file-structure.md#string) | No |

### `charset` [plugins-codecs-java_line-charset]
* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
* Default value is `"UTF-8"`

The character encoding used by this input. Examples include `UTF-8` and `cp1252`. This setting is useful if your inputs are in `Latin-1` (aka `cp1252`) or other character sets than `UTF-8`.

### `delimiter` [plugins-codecs-java_line-delimiter]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is the system-dependent line separator (`"\n"` for UNIX systems; `"\r\n"` for Microsoft Windows)

Specifies the delimiter that indicates end-of-line.

### `format` [plugins-codecs-java_line-format]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Set the desired text format for encoding in [`sprintf`](/reference/event-dependent-configuration.md#sprintf) format.
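
For example, a sketch that emits one formatted line per event (the field names are illustrative):

```ruby
output {
  java_stdout {
    codec => java_line {
      format => "%{host} %{message}"   # sprintf-style references to event fields
    }
  }
}
```
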
@ -1,47 +0,0 @@
---
navigation_title: "java_plain"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-java_plain.html
---

# Java_plain codec plugin [plugins-codecs-java_plain]

**{{ls}} Core Plugin.** The java_plain codec plugin cannot be installed or uninstalled independently of {{ls}}.

## Getting help [_getting_help_187]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash).

## Description [_description_186]

The `java_plain` codec is for text data with no delimiters between events. It is useful mainly for inputs and outputs that already have a defined framing in their transport protocol such as ZeroMQ, RabbitMQ, Redis, etc.

## Java_plain Codec Configuration Options [plugins-codecs-java_plain-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`charset`](#plugins-codecs-java_plain-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |
| [`format`](#plugins-codecs-java_plain-format) | [string](/reference/configuration-file-structure.md#string) | No |

### `charset` [plugins-codecs-java_plain-charset]
* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
* Default value is `"UTF-8"`

The character encoding used in this input. Examples include `UTF-8` and `cp1252`. This setting is useful if your data is in a character set other than `UTF-8`.

### `format` [plugins-codecs-java_plain-format]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Set the desired text format for encoding in [`sprintf`](/reference/event-dependent-configuration.md#sprintf) format.
@ -1,25 +0,0 @@
---
navigation_title: "jdots"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-jdots.html
---

# Jdots codec plugin [plugins-codecs-jdots]

**{{ls}} Core Plugin.** The jdots codec plugin cannot be installed or uninstalled independently of {{ls}}.

## Getting help [_getting_help_185]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash).

## Description [_description_184]

This codec renders each processed event as a dot (`.`). It is typically used with the `java_stdout` output to provide approximate event throughput. It is especially useful when combined with `pv` and `wc -c` as follows:

```bash
bin/logstash -f /path/to/config/with/jdots/codec | pv | wc -c
```
@ -1,91 +0,0 @@
---
navigation_title: "json"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-codecs-json.html
---

# Json codec plugin [plugins-codecs-json]

* Plugin version: v3.1.1
* Released on: 2022-10-03
* [Changelog](https://github.com/logstash-plugins/logstash-codec-json/blob/v3.1.1/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-json-index.md).

## Getting help [_getting_help_188]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-json). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_187]

This codec may be used to decode (via inputs) and encode (via outputs) full JSON messages. If the data being sent is a JSON array at its root, multiple events will be created (one per element).

If you are streaming JSON messages delimited by *\n*, see the `json_lines` codec.

Encoding will result in a compact JSON representation (no line terminators or indentation).

If this codec receives a payload from an input that is not valid JSON, then it will fall back to plain text and add a tag `_jsonparsefailure`. Upon a JSON failure, the payload will be stored in the `message` field.

## Json Codec Configuration Options [plugins-codecs-json-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`charset`](#plugins-codecs-json-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |
| [`ecs_compatibility`](#plugins-codecs-json-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
| [`target`](#plugins-codecs-json-target) | [string](/reference/configuration-file-structure.md#string) | No |
### `charset` [plugins-codecs-json-charset]
* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
* Default value is `"UTF-8"`

The character encoding used in this codec. Examples include "UTF-8" and "CP1252".

JSON requires valid UTF-8 strings, but in some cases, software that emits JSON does so in another encoding (nxlog, for example). In weird cases like this, you can set the `charset` setting to the actual encoding of the text and Logstash will convert it for you.

For nxlog users, you may want to set this to "CP1252".

### `ecs_compatibility` [plugins-codecs-json-ecs_compatibility]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Supported values are:
|
||||
|
||||
* `disabled`: JSON document data added at root level
|
||||
* `v1`,`v8`: Elastic Common Schema compliant behavior (warns when `target` isn’t set)
|
||||
|
||||
* Default value depends on which version of Logstash is running:
|
||||
|
||||
* When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
|
||||
* Otherwise, the default value is `disabled`
|
||||
|
||||
|
||||
Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md).
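For example, a sketch that pins the codec to ECS-compatible behavior regardless of the pipeline default, and sets `target` to avoid the warning (the field name is illustrative):

```ruby
codec => json {
  # Pin ECS behavior explicitly instead of inheriting pipeline.ecs_compatibility
  ecs_compatibility => "v8"
  # Setting target avoids the "target isn't set" warning in v1/v8 mode
  target => "[document]"
}
```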
|
||||
|
||||
|
||||
### `target` [plugins-codecs-json-target]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Define the target field for placing the parsed data. If this setting is not set, the JSON data will be stored at the root (top level) of the event.
|
||||
|
||||
For example, if you want data to be put under the `document` field:
|
||||
|
||||
```ruby
|
||||
input {
|
||||
http {
|
||||
codec => json {
|
||||
target => "[document]"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
|
@ -1,103 +0,0 @@
|
|||
---
|
||||
navigation_title: "json_lines"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-codecs-json_lines.html
|
||||
---
|
||||
|
||||
# Json_lines codec plugin [plugins-codecs-json_lines]
|
||||
|
||||
|
||||
* Plugin version: v3.2.2
|
||||
* Released on: 2024-09-06
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-codec-json_lines/blob/v3.2.2/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-json_lines-index.md).
|
||||
|
||||
## Getting help [_getting_help_189]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-json_lines). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_188]
|
||||
|
||||
This codec will decode streamed JSON that is newline delimited. Encoding will emit a single JSON string ending in a `@delimiter`.

NOTE: Do not use this codec if your source input is line-oriented JSON, for example, from the redis or file inputs. Use the json codec instead. This codec expects to receive a stream (string) of newline-terminated lines, but the file input produces a line string without a trailing newline, so this codec cannot work with line-oriented inputs.
|
||||
|
||||
|
||||
## Json_lines Codec Configuration Options [plugins-codecs-json_lines-options]
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`charset`](#plugins-codecs-json_lines-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |
|
||||
| [`decode_size_limit_bytes`](#plugins-codecs-json_lines-decode_size_limit_bytes) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`delimiter`](#plugins-codecs-json_lines-delimiter) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`ecs_compatibility`](#plugins-codecs-json_lines-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`target`](#plugins-codecs-json_lines-target) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
|
||||
|
||||
### `charset` [plugins-codecs-json_lines-charset]
|
||||
|
||||
* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
|
||||
* Default value is `"UTF-8"`
|
||||
|
||||
The character encoding used in this codec. Examples include `UTF-8` and `CP1252`
|
||||
|
||||
JSON requires valid `UTF-8` strings, but in some cases, software that emits JSON does so in another encoding (nxlog, for example). In weird cases like this, you can set the `charset` setting to the actual encoding of the text and Logstash will convert it for you.
|
||||
|
||||
For nxlog users, you’ll want to set this to `CP1252`
|
||||
|
||||
|
||||
### `decode_size_limit_bytes` [plugins-codecs-json_lines-decode_size_limit_bytes]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is 512 MB
|
||||
|
||||
Maximum number of bytes for a single line before processing stops.
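A sketch that lowers the limit so oversized lines are rejected earlier (the input, port, and limit value are illustrative):

```ruby
input {
  tcp {
    port => 5044
    codec => json_lines {
      decode_size_limit_bytes => 20971520 # 20 MB instead of the 512 MB default
    }
  }
}
```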
|
||||
|
||||
|
||||
### `delimiter` [plugins-codecs-json_lines-delimiter]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"\n"`
|
||||
|
||||
Change the delimiter that separates lines
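For example, a minimal sketch for a source that terminates records with CRLF instead of LF:

```ruby
codec => json_lines {
  # Split incoming data on CRLF instead of the default "\n"
  delimiter => "\r\n"
}
```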
|
||||
|
||||
|
||||
### `ecs_compatibility` [plugins-codecs-json_lines-ecs_compatibility]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Supported values are:
|
||||
|
||||
* `disabled`: does not use ECS-compatible field names
|
||||
* `v1`, `v8`: Elastic Common Schema compliant behavior (warns when `target` isn’t set)
|
||||
|
||||
* Default value depends on which version of Logstash is running:
|
||||
|
||||
* When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
|
||||
* Otherwise, the default value is `disabled`
|
||||
|
||||
|
||||
Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md).
|
||||
|
||||
|
||||
### `target` [plugins-codecs-json_lines-target]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Define the target field for placing the parsed data. If this setting is not set, the JSON data will be stored at the root (top level) of the event.
|
||||
|
||||
For example, if you want data to be put under the `document` field:
|
||||
|
||||
```ruby
|
||||
input {
|
||||
http {
|
||||
codec => json_lines {
|
||||
target => "[document]"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
|
@ -1,75 +0,0 @@
|
|||
---
|
||||
navigation_title: "line"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-codecs-line.html
|
||||
---
|
||||
|
||||
# Line codec plugin [plugins-codecs-line]
|
||||
|
||||
|
||||
* Plugin version: v3.1.1
|
||||
* Released on: 2021-07-15
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-codec-line/blob/v3.1.1/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-line-index.md).
|
||||
|
||||
## Getting help [_getting_help_190]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-line). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_189]
|
||||
|
||||
Reads line-oriented text data.
|
||||
|
||||
Decoding behavior
|
||||
: Only whole line events are emitted.
|
||||
|
||||
Encoding behavior
|
||||
: Each event is emitted with a trailing newline.
|
||||
|
||||
|
||||
## Compatibility with the Elastic Common Schema (ECS) [plugins-codecs-line-ecs]
|
||||
|
||||
This plugin is compatible with the [Elastic Common Schema (ECS)](ecs://reference/index.md). No additional configuration is required.
|
||||
|
||||
|
||||
## Line codec configuration options [plugins-codecs-line-options]
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`charset`](#plugins-codecs-line-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |
|
||||
| [`delimiter`](#plugins-codecs-line-delimiter) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`format`](#plugins-codecs-line-format) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
|
||||
|
||||
### `charset` [plugins-codecs-line-charset]
|
||||
|
||||
* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
|
||||
* Default value is `"UTF-8"`
|
||||
|
||||
The character encoding used in this input. Examples include `UTF-8` and `cp1252`
|
||||
|
||||
This setting is useful if your log files are in `Latin-1` (aka `cp1252`) or in a character set other than `UTF-8`.
|
||||
|
||||
This only affects "plain" format logs since JSON is `UTF-8` already.
|
||||
|
||||
|
||||
### `delimiter` [plugins-codecs-line-delimiter]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"\n"`
|
||||
|
||||
Change the delimiter that separates lines
|
||||
|
||||
|
||||
### `format` [plugins-codecs-line-format]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Set the desired text format for encoding.
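A minimal sketch of encoding with a custom format; the `host` and `message` field references assume those fields exist on your events:

```ruby
output {
  stdout {
    codec => line {
      # sprintf-style field references are expanded per event
      format => "%{host}: %{message}"
    }
  }
}
```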
|
||||
|
||||
|
||||
|
|
@ -1,62 +0,0 @@
|
|||
---
|
||||
navigation_title: "msgpack"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-codecs-msgpack.html
|
||||
---
|
||||
|
||||
# Msgpack codec plugin [plugins-codecs-msgpack]
|
||||
|
||||
|
||||
* Plugin version: v3.1.0
|
||||
* Released on: 2021-08-09
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-codec-msgpack/blob/v3.1.0/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-msgpack-index.md).
|
||||
|
||||
## Getting help [_getting_help_191]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-msgpack). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_190]
|
||||
|
||||
This codec reads and produces MessagePack encoded content.
|
||||
|
||||
|
||||
## Msgpack Codec configuration options [plugins-codecs-msgpack-options]
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`format`](#plugins-codecs-msgpack-format) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`target`](#plugins-codecs-msgpack-target) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
|
||||
|
||||
### `format` [plugins-codecs-msgpack-format]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
|
||||
### `target` [plugins-codecs-msgpack-target]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Define the target field for placing the decoded values. If this setting is not set, data will be stored at the root (top level) of the event.
|
||||
|
||||
For example, if you want data to be put under the `document` field:
|
||||
|
||||
```ruby
|
||||
input {
|
||||
tcp {
|
||||
port => 4242
|
||||
codec => msgpack {
|
||||
target => "[document]"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
|
@ -1,225 +0,0 @@
|
|||
---
|
||||
navigation_title: "multiline"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html
|
||||
---
|
||||
|
||||
# Multiline codec plugin [plugins-codecs-multiline]
|
||||
|
||||
|
||||
* Plugin version: v3.1.2
|
||||
* Released on: 2024-04-25
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-codec-multiline/blob/v3.1.2/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-multiline-index.md).
|
||||
|
||||
## Getting help [_getting_help_192]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-multiline). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_191]
|
||||
|
||||
The multiline codec will collapse multiline messages and merge them into a single event.
|
||||
|
||||
::::{important}
|
||||
If you are using a Logstash input plugin that supports multiple hosts, such as the [beats input plugin](/reference/plugins-inputs-beats.md), you should not use the multiline codec to handle multiline events. Doing so may result in the mixing of streams and corrupted event data. In this situation, you need to handle multiline events before sending the event data to Logstash.
|
||||
::::
|
||||
|
||||
|
||||
The original goal of this codec was to allow joining of multiline messages from files into a single event. For example, joining Java exception and stacktrace messages into a single event.
|
||||
|
||||
The config looks like this:
|
||||
|
||||
```ruby
|
||||
input {
|
||||
stdin {
|
||||
codec => multiline {
|
||||
pattern => "pattern, a regexp"
|
||||
negate => "true" or "false"
|
||||
what => "previous" or "next"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The `pattern` should match what you believe to be an indicator that the field is part of a multi-line event.
|
||||
|
||||
The `what` must be `previous` or `next` and indicates the relation to the multi-line event.
|
||||
|
||||
The `negate` option can be `true` or `false` (defaults to `false`). If `true`, a message *not* matching the pattern constitutes a match of the multiline filter and the `what` action is applied; if `false`, a message that matches the pattern triggers it.
|
||||
|
||||
For example, Java stack traces are multiline and usually have the message starting at the far-left, with each subsequent line indented. Do this:
|
||||
|
||||
```ruby
|
||||
input {
|
||||
stdin {
|
||||
codec => multiline {
|
||||
pattern => "^\s"
|
||||
what => "previous"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This says that any line starting with whitespace belongs to the previous line.
|
||||
|
||||
Another example is to merge lines that do not start with a date into the previous line:
|
||||
|
||||
```ruby
|
||||
input {
|
||||
file {
|
||||
path => "/var/log/someapp.log"
|
||||
codec => multiline {
|
||||
# Grok pattern names are valid! :)
|
||||
pattern => "^%{TIMESTAMP_ISO8601} "
|
||||
negate => true
|
||||
what => "previous"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This says that any line not starting with a timestamp should be merged with the previous line.
|
||||
|
||||
One more common example is C line continuations (backslash). Here’s how to do that:
|
||||
|
||||
```ruby
|
||||
input {
|
||||
stdin {
|
||||
codec => multiline {
|
||||
pattern => "\\$"
|
||||
what => "next"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This says that any line ending with a backslash should be combined with the following line.
|
||||
|
||||
|
||||
## Multiline codec configuration options [plugins-codecs-multiline-options]
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`auto_flush_interval`](#plugins-codecs-multiline-auto_flush_interval) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`charset`](#plugins-codecs-multiline-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |
|
||||
| [`ecs_compatibility`](#plugins-codecs-multiline-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`max_bytes`](#plugins-codecs-multiline-max_bytes) | [bytes](/reference/configuration-file-structure.md#bytes) | No |
|
||||
| [`max_lines`](#plugins-codecs-multiline-max_lines) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`multiline_tag`](#plugins-codecs-multiline-multiline_tag) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`negate`](#plugins-codecs-multiline-negate) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`pattern`](#plugins-codecs-multiline-pattern) | [string](/reference/configuration-file-structure.md#string) | Yes |
|
||||
| [`patterns_dir`](#plugins-codecs-multiline-patterns_dir) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`what`](#plugins-codecs-multiline-what) | [string](/reference/configuration-file-structure.md#string), one of `["previous", "next"]` | Yes |
|
||||
|
||||
|
||||
|
||||
### `auto_flush_interval` [plugins-codecs-multiline-auto_flush_interval]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* There is no default value for this setting.
|
||||
|
||||
The accumulated lines are converted to an event either when a matching new line is seen or when no new data has been appended for this many seconds. There is no default; if unset, no automatic flush occurs. Units: seconds
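For example, a sketch that flushes a pending multiline event after two seconds of inactivity (the pattern and interval are illustrative):

```ruby
codec => multiline {
  pattern => "^\s"
  what => "previous"
  # Emit whatever has accumulated if no new line arrives within 2 seconds
  auto_flush_interval => 2
}
```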
|
||||
|
||||
|
||||
### `charset` [plugins-codecs-multiline-charset]
|
||||
|
||||
* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
|
||||
* Default value is `"UTF-8"`
|
||||
|
||||
The character encoding used in this input. Examples include `UTF-8` and `cp1252`
|
||||
|
||||
This setting is useful if your log files are in `Latin-1` (aka `cp1252`) or in another character set other than `UTF-8`.
|
||||
|
||||
This only affects "plain" format logs since JSON is `UTF-8` already.
|
||||
|
||||
|
||||
### `ecs_compatibility` [plugins-codecs-multiline-ecs_compatibility]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Supported values are:
|
||||
|
||||
* `disabled`: plugin only sets the `message` field
|
||||
* `v1`,`v8`: Elastic Common Schema compliant behavior (`[event][original]` is also added)
|
||||
|
||||
* Default value depends on which version of Logstash is running:
|
||||
|
||||
* When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
|
||||
* Otherwise, the default value is `disabled`
|
||||
|
||||
|
||||
Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md).
|
||||
|
||||
|
||||
### `max_bytes` [plugins-codecs-multiline-max_bytes]
|
||||
|
||||
* Value type is [bytes](/reference/configuration-file-structure.md#bytes)
|
||||
* Default value is `"10 MiB"`
|
||||
|
||||
The accumulation of events can make Logstash exit with an out-of-memory error if event boundaries are not correctly defined. This setting ensures that multiline events are flushed once they reach the given number of bytes; it is used in combination with `max_lines`.
|
||||
|
||||
|
||||
### `max_lines` [plugins-codecs-multiline-max_lines]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `500`
|
||||
|
||||
The accumulation of events can make Logstash exit with an out-of-memory error if event boundaries are not correctly defined. This setting ensures that multiline events are flushed once they reach the given number of lines; it is used in combination with `max_bytes`.
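A sketch that combines both safety limits (the values are illustrative):

```ruby
codec => multiline {
  pattern => "^%{TIMESTAMP_ISO8601} "
  negate => true
  what => "previous"
  # Flush early if a single logical event grows too large
  max_lines => 1000
  max_bytes => "20 MiB"
}
```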
|
||||
|
||||
|
||||
### `multiline_tag` [plugins-codecs-multiline-multiline_tag]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"multiline"`
|
||||
|
||||
Tag multiline events with a given tag. This tag will only be added to events that actually have multiple lines in them.
|
||||
|
||||
|
||||
### `negate` [plugins-codecs-multiline-negate]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Negate the regexp pattern (*if not matched*).
|
||||
|
||||
|
||||
### `pattern` [plugins-codecs-multiline-pattern]
|
||||
|
||||
* This is a required setting.
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
The regular expression to match.
|
||||
|
||||
|
||||
### `patterns_dir` [plugins-codecs-multiline-patterns_dir]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
Logstash ships by default with a bunch of patterns, so you don’t necessarily need to define this yourself unless you are adding additional patterns.
|
||||
|
||||
Pattern files are plain text with format:
|
||||
|
||||
```ruby
|
||||
NAME PATTERN
|
||||
```
|
||||
|
||||
For example:
|
||||
|
||||
```ruby
|
||||
NUMBER \d+
|
||||
```
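A sketch that loads additional patterns from a directory and uses one of them; the directory path and the `MYAPP_TIMESTAMP` pattern name are hypothetical:

```ruby
codec => multiline {
  # Directory containing extra pattern files in NAME PATTERN format
  patterns_dir => ["/path/to/extra_patterns"]
  pattern => "^%{MYAPP_TIMESTAMP} "
  negate => true
  what => "previous"
}
```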
|
||||
|
||||
|
||||
### `what` [plugins-codecs-multiline-what]
|
||||
|
||||
* This is a required setting.
|
||||
* Value can be any of: `previous`, `next`
|
||||
* There is no default value for this setting.
|
||||
|
||||
If the pattern matches, does the event belong to the next or the previous event?
|
||||
|
||||
|
||||
|
|
@ -1,207 +0,0 @@
|
|||
---
|
||||
navigation_title: "netflow"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-codecs-netflow.html
|
||||
---
|
||||
|
||||
# Netflow codec plugin [plugins-codecs-netflow]
|
||||
|
||||
|
||||
* Plugin version: v4.3.2
|
||||
* Released on: 2023-12-22
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-codec-netflow/blob/v4.3.2/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-netflow-index.md).
|
||||
|
||||
## Getting help [_getting_help_193]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-netflow). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_192]
|
||||
|
||||
The "netflow" codec is used for decoding Netflow v5/v9/v10 (IPFIX) flows.
|
||||
|
||||
|
||||
## Supported Netflow/IPFIX exporters [_supported_netflowipfix_exporters]
|
||||
|
||||
This codec supports:
|
||||
|
||||
* Netflow v5
|
||||
* Netflow v9
|
||||
* IPFIX
|
||||
|
||||
The following Netflow/IPFIX exporters have been seen and tested with the most recent version of the Netflow Codec:
|
||||
|
||||
| Netflow exporter | v5 | v9 | IPFIX | Remarks |
|
||||
| --- | --- | --- | --- | --- |
|
||||
| Barracuda Firewall | | | y | With support for Extended Uniflow |
|
||||
| Cisco ACI | | y | | |
|
||||
| Cisco ASA | | y | | |
|
||||
| Cisco ASR 1k | | | N | Fails because of duplicate fields |
|
||||
| Cisco ASR 9k | | y | | |
|
||||
| Cisco IOS 12.x | | y | | |
|
||||
| Cisco ISR w/ HSL | | N | | Fails because of duplicate fields, see: [https://github.com/logstash-plugins/logstash-codec-netflow/issues/93](https://github.com/logstash-plugins/logstash-codec-netflow/issues/93) |
|
||||
| Cisco WLC | | y | | |
|
||||
| Citrix Netscaler | | | y | Still some unknown fields, labeled netscalerUnknown<id> |
|
||||
| fprobe | y | | | |
|
||||
| Fortigate FortiOS | | y | | |
|
||||
| Huawei Netstream | | y | | |
|
||||
| ipt_NETFLOW | y | y | y | |
|
||||
| IXIA packet broker | | | y | |
|
||||
| Juniper MX | y | | y | SW > 12.3R8. Fails to decode IPFIX from Junos 16.1 due to duplicate field names which we currently don’t support. |
|
||||
| Mikrotik | y | | y | [http://wiki.mikrotik.com/wiki/Manual:IP/Traffic_Flow](http://wiki.mikrotik.com/wiki/Manual:IP/Traffic_Flow) |
|
||||
| nProbe | y | y | y | L7 DPI fields now also supported |
|
||||
| Nokia BRAS | | | y | |
|
||||
| OpenBSD pflow | y | N | y | [http://man.openbsd.org/OpenBSD-current/man4/pflow.4](http://man.openbsd.org/OpenBSD-current/man4/pflow.4) |
|
||||
| Riverbed | | N | | Not supported due to field ID conflicts. Workaround available in the definitions directory over at Elastiflow [https://github.com/robcowart/elastiflow](https://github.com/robcowart/elastiflow) |
|
||||
| Sandvine Procera PacketLogic | | | y | v15.1 |
|
||||
| Softflowd | y | y | y | IPFIX supported in [https://github.com/djmdjm/softflowd](https://github.com/djmdjm/softflowd) |
|
||||
| Sophos UTM | | | y | |
|
||||
| Streamcore Streamgroomer | | y | | |
|
||||
| Palo Alto PAN-OS | | y | | |
|
||||
| Ubiquiti Edgerouter X | | y | | With MPLS labels |
|
||||
| VMware VDS | | | y | Still some unknown fields |
|
||||
| YAF | | | y | With silk and applabel, but no DPI plugin support |
|
||||
| vIPtela | | | y | |
|
||||
|
||||
|
||||
## Usage [_usage_7]
|
||||
|
||||
Example Logstash configuration that will listen on 2055/udp for Netflow v5, v9, and IPFIX:
|
||||
|
||||
```ruby
|
||||
input {
|
||||
udp {
|
||||
port => 2055
|
||||
codec => netflow
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
For high-performance production environments the configuration below will decode up to 15000 flows/sec from a Cisco ASR 9000 router on a dedicated 16-CPU instance. If your total flow rate exceeds 15000 flows/sec, you should use multiple Logstash instances.
|
||||
|
||||
Note that for richer flows from a Cisco ASA firewall this number will be at least 3x lower.
|
||||
|
||||
```ruby
|
||||
input {
|
||||
udp {
|
||||
port => 2055
|
||||
codec => netflow
|
||||
receive_buffer_bytes => 16777216
|
||||
workers => 16
|
||||
  }
}
|
||||
```
|
||||
|
||||
To mitigate dropped packets, make sure to increase the Linux kernel receive buffer limit:
|
||||
|
||||
```
|
||||
# sysctl -w net.core.rmem_max=$((1024*1024*16))
|
||||
```
|
||||
|
||||
## Netflow Codec Configuration Options [plugins-codecs-netflow-options]
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`cache_save_path`](#plugins-codecs-netflow-cache_save_path) | a valid filesystem path | No |
|
||||
| [`cache_ttl`](#plugins-codecs-netflow-cache_ttl) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`include_flowset_id`](#plugins-codecs-netflow-include_flowset_id) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`ipfix_definitions`](#plugins-codecs-netflow-ipfix_definitions) | a valid filesystem path | No |
|
||||
| [`netflow_definitions`](#plugins-codecs-netflow-netflow_definitions) | a valid filesystem path | No |
|
||||
| [`target`](#plugins-codecs-netflow-target) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`versions`](#plugins-codecs-netflow-versions) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
|
||||
|
||||
### `cache_save_path` [plugins-codecs-netflow-cache_save_path]
|
||||
|
||||
* Value type is [path](/reference/configuration-file-structure.md#path)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Enables the template cache and saves it in the specified directory. This minimizes data loss after Logstash restarts because the codec doesn’t have to wait for the arrival of templates, but can instead reload templates received during previous runs.
|
||||
|
||||
Template caches are saved as:
|
||||
|
||||
* [path](/reference/configuration-file-structure.md#path)/netflow_templates.cache for Netflow v9 templates.
|
||||
* [path](/reference/configuration-file-structure.md#path)/ipfix_templates.cache for IPFIX templates.
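A minimal sketch that enables the template cache (the directory is illustrative and must exist and be writable by Logstash):

```ruby
input {
  udp {
    port => 2055
    codec => netflow {
      # Templates are persisted here as netflow_templates.cache / ipfix_templates.cache
      cache_save_path => "/var/lib/logstash"
    }
  }
}
```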
|
||||
|
||||
|
||||
### `cache_ttl` [plugins-codecs-netflow-cache_ttl]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `4000`
|
||||
|
||||
Netflow v9/v10 template cache TTL (seconds)
|
||||
|
||||
|
||||
### `include_flowset_id` [plugins-codecs-netflow-include_flowset_id]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
This setting only makes sense for IPFIX, because v9 already includes the flowset ID. Setting it to `true` will include the `flowset_id` in events, which allows you to work with sequences, for instance with the aggregate filter.
|
||||
|
||||
|
||||
### `ipfix_definitions` [plugins-codecs-netflow-ipfix_definitions]
|
||||
|
||||
* Value type is [path](/reference/configuration-file-structure.md#path)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Override YAML file containing IPFIX field definitions
|
||||
|
||||
Very similar to the Netflow version except there is a top level Private Enterprise Number (PEN) key added:
|
||||
|
||||
```yaml
|
||||
pen:
|
||||
id:
|
||||
- :uintN or :ip4_addr or :ip6_addr or :mac_addr or :string
|
||||
- :name
|
||||
id:
|
||||
- :skip
|
||||
```
|
||||
|
||||
There is an implicit PEN 0 for the standard fields.
|
||||
|
||||
See [https://github.com/logstash-plugins/logstash-codec-netflow/blob/master/lib/logstash/codecs/netflow/ipfix.yaml](https://github.com/logstash-plugins/logstash-codec-netflow/blob/master/lib/logstash/codecs/netflow/ipfix.yaml) for the base set.
|
||||
|
||||
|
||||
### `netflow_definitions` [plugins-codecs-netflow-netflow_definitions]
|
||||
|
||||
* Value type is [path](/reference/configuration-file-structure.md#path)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Override YAML file containing Netflow field definitions
|
||||
|
||||
Each Netflow field is defined like so:
|
||||
|
||||
```yaml
|
||||
id:
|
||||
- default length in bytes
|
||||
- :name
|
||||
id:
|
||||
- :uintN or :ip4_addr or :ip6_addr or :mac_addr or :string
|
||||
- :name
|
||||
id:
|
||||
- :skip
|
||||
```
|
||||
|
||||
See [https://github.com/logstash-plugins/logstash-codec-netflow/blob/master/lib/logstash/codecs/netflow/netflow.yaml](https://github.com/logstash-plugins/logstash-codec-netflow/blob/master/lib/logstash/codecs/netflow/netflow.yaml) for the base set.
|
||||
|
||||
|
||||
### `target` [plugins-codecs-netflow-target]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"netflow"`
|
||||
|
||||
Specify the field into which you want the Netflow data placed.
|
||||
|
||||
|
||||
### `versions` [plugins-codecs-netflow-versions]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[5, 9, 10]`
|
||||
|
||||
Specify which Netflow versions you will accept.
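For example, to accept only Netflow v5 and v9 and ignore IPFIX (v10):

```ruby
codec => netflow {
  # Flows for versions not listed here are dropped
  versions => [5, 9]
}
```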
|
||||
|
||||
|
||||
|
|
@ -1,80 +0,0 @@
|
|||
---
|
||||
navigation_title: "nmap"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-codecs-nmap.html
|
||||
---
|
||||
|
||||
# Nmap codec plugin [plugins-codecs-nmap]
|
||||
|
||||
|
||||
* Plugin version: v0.0.22
|
||||
* Released on: 2022-11-16
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-codec-nmap/blob/v0.0.22/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-nmap-index.md).
|
||||
|
||||
## Installation [_installation_70]
|
||||
|
||||
For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-codec-nmap`. See [Working with plugins](/reference/working-with-plugins.md) for more details.
|
||||
|
||||
|
||||
## Getting help [_getting_help_194]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-nmap). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_193]
|
||||
|
||||
This codec is used to parse [nmap](https://nmap.org/) output data which is serialized in XML format. Nmap ("Network Mapper") is a free and open source utility for network discovery and security auditing. For more information on nmap, see [https://nmap.org/](https://nmap.org/).
|
||||
|
||||
This codec can only be used for decoding data.
|
||||
|
||||
Event types are listed below:
|
||||
|
||||
* `nmap_scan_metadata`: An object containing top-level information about the scan, including how many hosts were up and how many were down. Useful for the case where you need to check if a DNS-based hostname does not resolve, where both those numbers will be zero.
* `nmap_host`: One event is created per host. The full data covering an individual host, including open ports and traceroute information, as a nested structure.
* `nmap_port`: One event is created per host/port. This duplicates data already in `nmap_host`; it was put in for the case where you want to model ports as separate documents in Elasticsearch (which Kibana prefers).
* `nmap_traceroute_link`: One of these is output per traceroute *connection*, with a `from` and a `to` object describing each hop. Note that traceroute hop data is not always correct because each tracing ICMP packet may take a different route. Also very useful for Kibana visualizations.
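A minimal sketch for receiving nmap XML output over HTTP (the http input and port are illustrative); you could then pipe `nmap -oX -` output to Logstash with a tool such as curl:

```ruby
input {
  http {
    port => 8000
    # Decode the POSTed nmap XML into the event types listed above
    codec => nmap
  }
}
```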
|
||||
|
||||
|
||||
## Nmap Codec Configuration Options [plugins-codecs-nmap-options]
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`emit_hosts`](#plugins-codecs-nmap-emit_hosts) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`emit_ports`](#plugins-codecs-nmap-emit_ports) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`emit_scan_metadata`](#plugins-codecs-nmap-emit_scan_metadata) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`emit_traceroute_links`](#plugins-codecs-nmap-emit_traceroute_links) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
|
||||
|
||||
|
||||
### `emit_hosts` [plugins-codecs-nmap-emit_hosts]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Emit all host data as a nested document (including ports + traceroutes) with the type *nmap_fullscan*
|
||||
|
||||
|
||||
### `emit_ports` [plugins-codecs-nmap-emit_ports]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Emit each port as a separate document with type *nmap_port*
|
||||
|
||||
|
||||
### `emit_scan_metadata` [plugins-codecs-nmap-emit_scan_metadata]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Emit scan metadata
|
||||
|
||||
|
||||
### `emit_traceroute_links` [plugins-codecs-nmap-emit_traceroute_links]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Emit each hop_tuple of the traceroute with type *nmap_traceroute_link*
|
||||
|
||||
|
||||
|
|
@ -1,77 +0,0 @@
|
|||
---
|
||||
navigation_title: "plain"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-codecs-plain.html
|
||||
---
|
||||
|
||||
# Plain codec plugin [plugins-codecs-plain]
|
||||
|
||||
|
||||
* Plugin version: v3.1.0
|
||||
* Released on: 2021-07-27
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-codec-plain/blob/v3.1.0/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-plain-index.md).
|
||||
|
||||
## Getting help [_getting_help_195]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-plain). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_194]
|
||||
|
||||
The "plain" codec is for plain text with no delimiting between events.
|
||||
|
||||
This is mainly useful on inputs and outputs that already have a defined framing in their transport protocol (such as zeromq, rabbitmq, redis, etc).
|
||||
|
||||
|
||||
## Plain codec configuration options [plugins-codecs-plain-options]
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`charset`](#plugins-codecs-plain-charset) | [string](/reference/configuration-file-structure.md#string), one of `["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB2312", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-31J", "Windows-1250", "Windows-1251", "Windows-1252", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "IBM037", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "EUC-JIS-2004", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "ebcdic-cp-us", "eucJP", "euc-jp-ms", "EUC-JISX0213", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "ISO8859-2", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP932", "csWindows31J", "SJIS", "PCK", "CP1250", "CP1251", "CP1252", "external", "locale"]` | No |
|
||||
| [`ecs_compatibility`](#plugins-codecs-plain-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`format`](#plugins-codecs-plain-format) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
|
||||
|
||||
### `charset` [plugins-codecs-plain-charset]
|
||||
|
||||
* Value can be any of: `ASCII-8BIT`, `UTF-8`, `US-ASCII`, `Big5`, `Big5-HKSCS`, `Big5-UAO`, `CP949`, `Emacs-Mule`, `EUC-JP`, `EUC-KR`, `EUC-TW`, `GB2312`, `GB18030`, `GBK`, `ISO-8859-1`, `ISO-8859-2`, `ISO-8859-3`, `ISO-8859-4`, `ISO-8859-5`, `ISO-8859-6`, `ISO-8859-7`, `ISO-8859-8`, `ISO-8859-9`, `ISO-8859-10`, `ISO-8859-11`, `ISO-8859-13`, `ISO-8859-14`, `ISO-8859-15`, `ISO-8859-16`, `KOI8-R`, `KOI8-U`, `Shift_JIS`, `UTF-16BE`, `UTF-16LE`, `UTF-32BE`, `UTF-32LE`, `Windows-31J`, `Windows-1250`, `Windows-1251`, `Windows-1252`, `IBM437`, `IBM737`, `IBM775`, `CP850`, `IBM852`, `CP852`, `IBM855`, `CP855`, `IBM857`, `IBM860`, `IBM861`, `IBM862`, `IBM863`, `IBM864`, `IBM865`, `IBM866`, `IBM869`, `Windows-1258`, `GB1988`, `macCentEuro`, `macCroatian`, `macCyrillic`, `macGreek`, `macIceland`, `macRoman`, `macRomania`, `macThai`, `macTurkish`, `macUkraine`, `CP950`, `CP951`, `IBM037`, `stateless-ISO-2022-JP`, `eucJP-ms`, `CP51932`, `EUC-JIS-2004`, `GB12345`, `ISO-2022-JP`, `ISO-2022-JP-2`, `CP50220`, `CP50221`, `Windows-1256`, `Windows-1253`, `Windows-1255`, `Windows-1254`, `TIS-620`, `Windows-874`, `Windows-1257`, `MacJapanese`, `UTF-7`, `UTF8-MAC`, `UTF-16`, `UTF-32`, `UTF8-DoCoMo`, `SJIS-DoCoMo`, `UTF8-KDDI`, `SJIS-KDDI`, `ISO-2022-JP-KDDI`, `stateless-ISO-2022-JP-KDDI`, `UTF8-SoftBank`, `SJIS-SoftBank`, `BINARY`, `CP437`, `CP737`, `CP775`, `IBM850`, `CP857`, `CP860`, `CP861`, `CP862`, `CP863`, `CP864`, `CP865`, `CP866`, `CP869`, `CP1258`, `Big5-HKSCS:2008`, `ebcdic-cp-us`, `eucJP`, `euc-jp-ms`, `EUC-JISX0213`, `eucKR`, `eucTW`, `EUC-CN`, `eucCN`, `CP936`, `ISO2022-JP`, `ISO2022-JP2`, `ISO8859-1`, `ISO8859-2`, `ISO8859-3`, `ISO8859-4`, `ISO8859-5`, `ISO8859-6`, `CP1256`, `ISO8859-7`, `CP1253`, `ISO8859-8`, `CP1255`, `ISO8859-9`, `CP1254`, `ISO8859-10`, `ISO8859-11`, `CP874`, `ISO8859-13`, `CP1257`, `ISO8859-14`, `ISO8859-15`, `ISO8859-16`, `CP878`, `MacJapan`, `ASCII`, `ANSI_X3.4-1968`, `646`, `CP65000`, `CP65001`, `UTF-8-MAC`, `UTF-8-HFS`, `UCS-2BE`, `UCS-4BE`, `UCS-4LE`, `CP932`, `csWindows31J`, `SJIS`, `PCK`, `CP1250`, `CP1251`, `CP1252`, `external`, `locale`
|
||||
* Default value is `"UTF-8"`
|
||||
|
||||
The character encoding used in this input. Examples include `UTF-8` and `cp1252`
|
||||
|
||||
This setting is useful if your log files are in `Latin-1` (aka `cp1252`) or in a character set other than `UTF-8`.
|
||||
|
||||
This only affects "plain" format logs since JSON is `UTF-8` already.
|
||||
|
||||
|
||||
### `ecs_compatibility` [plugins-codecs-plain-ecs_compatibility]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Supported values are:
|
||||
|
||||
* `disabled`: plugin only sets the `message` field
|
||||
* `v1`,`v8`: Elastic Common Schema compliant behavior (`[event][original]` is also added)
|
||||
|
||||
* Default value depends on which version of Logstash is running:
|
||||
|
||||
* When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
|
||||
* Otherwise, the default value is `disabled`
|
||||
|
||||
|
||||
Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md).
|
||||
|
||||
|
||||
### `format` [plugins-codecs-plain-format]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Set the message you wish to emit for each event. This supports `sprintf` strings.
|
||||
|
||||
This setting only affects outputs (encoding of events).
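A minimal sketch of the encoding side, writing formatted events to a file; the path and the `host`/`message` field references are illustrative:

```ruby
output {
  file {
    path => "/tmp/events.log"
    codec => plain {
      # Each event is rendered through this sprintf template
      format => "%{host} %{message}"
    }
  }
}
```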
|
||||
|
||||
|
||||
|
|
@ -1,247 +0,0 @@
|
|||
---
|
||||
navigation_title: "protobuf"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-codecs-protobuf.html
|
||||
---
|
||||
|
||||
# Protobuf codec plugin [plugins-codecs-protobuf]
|
||||
|
||||
|
||||
* Plugin version: v1.3.0
|
||||
* Released on: 2023-09-20
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-codec-protobuf/blob/v1.3.0/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-protobuf-index.md).
|
||||
|
||||
## Installation [_installation_71]
|
||||
|
||||
For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-codec-protobuf`. See [Working with plugins](/reference/working-with-plugins.md) for more details.
|
||||
|
||||
|
||||
## Getting help [_getting_help_196]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-protobuf). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_195]
|
||||
|
||||
This codec converts protobuf-encoded messages into Logstash events and vice versa. It supports protobuf versions 2 and 3.
|
||||
|
||||
The plugin requires the protobuf definitions to be compiled to Ruby files. For protobuf 2, use the [ruby-protoc compiler](https://github.com/codekitchen/ruby-protocol-buffers). For protobuf 3, use the [official google protobuf compiler](https://developers.google.com/protocol-buffers/docs/reference/ruby-generated).
|
||||
|
||||
The following shows a usage example (protobuf v2) for decoding events from a kafka stream:
|
||||
|
||||
```ruby
kafka
{
  topic_id => "..."
  key_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
  value_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
  codec => protobuf
  {
    class_name => "Animals::Mammals::Unicorn"
    class_file => '/path/to/pb_definitions/some_folder/Unicorn.pb.rb'
    protobuf_root_directory => "/path/to/pb_definitions/"
  }
}
```

Decoder usage example for protobuf v3:

```ruby
kafka
{
  topic_id => "..."
  key_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
  value_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
  codec => protobuf
  {
    class_name => "Animals.Mammals.Unicorn"
    class_file => '/path/to/pb_definitions/some_folder/Unicorn_pb.rb'
    protobuf_root_directory => "/path/to/pb_definitions/"
    protobuf_version => 3
  }
}
```

The codec can be used in input and output plugins. When using the codec in the kafka input plugin, please set the deserializer classes as shown above. When using the codec in an output plugin:

* make sure to include all the desired fields in the protobuf definition, including the timestamp. Remove fields that are not part of the protobuf definition from the event by using the mutate filter. Encoding will fail if the event has fields which are not in the protobuf definition.
* the `@` symbol is currently not supported in field names when loading the protobuf definitions for encoding. Make sure to call the timestamp field `timestamp` instead of `@timestamp` in the protobuf file. Logstash event fields will be stripped of the leading `@` before conversion.
* fields with a nil value will automatically be removed from the event. Empty fields will not be removed.
* it is recommended to set the config option `pb3_encoder_autoconvert_types` to true. Otherwise, any type mismatch between your data and the protobuf definition will cause an event to be lost. The auto type conversion does not alter your data. It just tries to convert obviously identical data into the expected datatype, such as converting integers to floats where floats are expected, or "true" / "false" strings into booleans where booleans are expected.
* when writing to Kafka, set the serializer class: `value_serializer => "org.apache.kafka.common.serialization.ByteArraySerializer"`

Encoder usage example (protobuf v3):

```ruby
kafka
{
  codec => protobuf
  {
    class_name => "Animals.Mammals.Unicorn"
    class_file => '/path/to/pb_definitions/some_folder/Unicorn_pb.rb'
    protobuf_root_directory => "/path/to/pb_definitions/"
    protobuf_version => 3
  }
  value_serializer => "org.apache.kafka.common.serialization.ByteArraySerializer"
}
```

## Protobuf Codec Configuration Options [plugins-codecs-protobuf-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`class_name`](#plugins-codecs-protobuf-class_name) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`class_file`](#plugins-codecs-protobuf-class_file) | [string](/reference/configuration-file-structure.md#string) | No |
| [`protobuf_root_directory`](#plugins-codecs-protobuf-protobuf_root_directory) | [string](/reference/configuration-file-structure.md#string) | No |
| [`include_path`](#plugins-codecs-protobuf-include_path) | [array](/reference/configuration-file-structure.md#array) | No |
| [`protobuf_version`](#plugins-codecs-protobuf-protobuf_version) | [number](/reference/configuration-file-structure.md#number) | Yes |
| [`stop_on_error`](#plugins-codecs-protobuf-stop_on_error) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`pb3_encoder_autoconvert_types`](#plugins-codecs-protobuf-pb3_encoder_autoconvert_types) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`pb3_set_oneof_metainfo`](#plugins-codecs-protobuf-pb3_set_oneof_metainfo) | [boolean](/reference/configuration-file-structure.md#boolean) | No |

### `class_name` [plugins-codecs-protobuf-class_name]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Fully qualified name of the class to decode. Please note that the module delimiter is different depending on the protobuf version. For protobuf v2, use double colons:

```ruby
class_name => "Animals::Mammals::Unicorn"
```

For protobuf v3, use single dots:

```ruby
class_name => "Animals.Mammals.Unicorn"
```

For protobuf v3, you can copy the class name from the DescriptorPool registrations at the bottom of the generated protobuf ruby file. It contains lines like this:

```ruby
Animals.Mammals.Unicorn = Google::Protobuf::DescriptorPool.generated_pool.lookup("Animals.Mammals.Unicorn").msgclass
```

If your class references other definitions, you only need to add the name of the main class here.

### `class_file` [plugins-codecs-protobuf-class_file]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Absolute path to the compiled protobuf definition file (the generated `.rb` file) for the class set in `class_name`, as shown in the usage examples above. If the definition references other compiled definitions, load them via the `protobuf_root_directory` setting.

### `protobuf_root_directory` [plugins-codecs-protobuf-protobuf_root_directory]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Absolute path to the root directory that contains all referenced/used dependencies of the main class (`class_name`) or any of its dependencies. Must be used in combination with the `class_file` setting, and cannot be used in combination with the legacy loading mechanism `include_path`.

Example:

```
pb3
├── header
│   └── header_pb.rb
├── messageA_pb.rb
```

In this case, `messageA_pb.rb` has an embedded message from `header/header_pb.rb`. If `class_file` is set to `messageA_pb.rb`, and `class_name` to `MessageA`, `protobuf_root_directory` must be set to `/path/to/pb3`, which includes both definitions.

### `include_path` [plugins-codecs-protobuf-include_path]

* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

Legacy protobuf definition loading mechanism for backwards compatibility: a list of absolute paths to files with protobuf definitions. When using more than one file, make sure to arrange the files in reverse order of dependency, so that each class is loaded before it is referred to by another.

Example: a class *Unicorn* referencing another protobuf class *Wings*

```ruby
module Animal
  module Mammal
    class Unicorn
      set_fully_qualified_name "Animal.Mammal.Unicorn"
      optional ::Bodypart::Wings, :wings, 1
      optional :string, :name, 2
      ...
```

would be configured as

```ruby
include_path => ['/path/to/pb_definitions/wings.pb.rb','/path/to/pb_definitions/unicorn.pb.rb']
```

Please note that protobuf v2 files have the ending `.pb.rb`, whereas files compiled for protobuf v3 end in `_pb.rb`.

Cannot be used together with `protobuf_root_directory` or `class_file`.

### `protobuf_version` [plugins-codecs-protobuf-protobuf_version]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `2`

Protocol buffers version. Valid settings are `2` and `3`.

### `stop_on_error` [plugins-codecs-protobuf-stop_on_error]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Stop the entire pipeline when encountering a non-decodable message.

### `pb3_encoder_autoconvert_types` [plugins-codecs-protobuf-pb3_encoder_autoconvert_types]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Convert data types to match the protobuf definition (if possible). The protobuf encoder library is very strict with regard to data types. Example: an event has an integer field, but the protobuf definition expects a float. This would lead to an exception, and the event would be lost.

This feature tries to convert the datatypes to the expectations of the protobuf definitions, without modifying the data whatsoever. Examples of conversions it might attempt:

* `"true" :: string ⇒ true :: boolean`
* `17 :: int ⇒ 17.0 :: float`
* `12345 :: number ⇒ "12345" :: string`

Available only for protobuf version 3.

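As a hedged sketch, the option slots into the encoder configuration shown earlier (spelled out here even though `true` is the default):

```ruby
output {
  kafka {
    codec => protobuf {
      class_name => "Animals.Mammals.Unicorn"
      class_file => '/path/to/pb_definitions/some_folder/Unicorn_pb.rb'
      protobuf_root_directory => "/path/to/pb_definitions/"
      protobuf_version => 3
      pb3_encoder_autoconvert_types => true   # default; converts e.g. 17 (int) to 17.0 where a float is expected
    }
    value_serializer => "org.apache.kafka.common.serialization.ByteArraySerializer"
  }
}
```
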
### `pb3_set_oneof_metainfo` [plugins-codecs-protobuf-pb3_set_oneof_metainfo]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Add meta information to `[@metadata][pb_oneof]` about which classes were chosen for [oneof](https://developers.google.com/protocol-buffers/docs/proto3#oneof) fields. A new field of name `[@metadata][pb_oneof][FOO]` will be added, where `FOO` is the name of the `oneof` field.

Example values: for the protobuf definition

```ruby
oneof :horse_type do
  optional :unicorn, :message, 2, "UnicornType"
  optional :pegasus, :message, 3, "PegasusType"
end
```

the field `[@metadata][pb_oneof][horse_type]` will be set to either `pegasus` or `unicorn`. Available only for protobuf version 3.

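A hypothetical sketch of acting on that metadata in a later conditional (the tag name is illustrative; the field names follow the example above):

```ruby
filter {
  if [@metadata][pb_oneof][horse_type] == "unicorn" {
    mutate { add_tag => ["unicorn_event"] }   # route or mark events by the chosen oneof branch
  }
}
```
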

@@ -1,42 +0,0 @@

---
navigation_title: "rubydebug"
mapped_pages:
- https://www.elastic.co/guide/en/logstash/current/plugins-codecs-rubydebug.html
---

# Rubydebug codec plugin [plugins-codecs-rubydebug]

* Plugin version: v3.1.0
* Released on: 2020-07-08
* [Changelog](https://github.com/logstash-plugins/logstash-codec-rubydebug/blob/v3.1.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/codec-rubydebug-index.md).

## Getting help [_getting_help_197]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-codec-rubydebug). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_196]

The rubydebug codec will output your Logstash event data using the Ruby Amazing Print library.

## Rubydebug Codec Configuration Options [plugins-codecs-rubydebug-options]

| Setting | Input type | Required |
| --- | --- | --- |
| [`metadata`](#plugins-codecs-rubydebug-metadata) | [boolean](/reference/configuration-file-structure.md#boolean) | No |

### `metadata` [plugins-codecs-rubydebug-metadata]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Should the event's metadata be included?
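
For instance, a common debugging sketch that prints each event including its `@metadata` fields:

```ruby
output {
  stdout {
    codec => rubydebug { metadata => true }   # also show [@metadata] fields, which are hidden by default
  }
}
```
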

@@ -1,231 +0,0 @@

---
navigation_title: "age"
mapped_pages:
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-age.html
---

# Age filter plugin [plugins-filters-age]

* Plugin version: v1.0.3
* Released on: 2021-10-29
* [Changelog](https://github.com/logstash-plugins/logstash-filter-age/blob/v1.0.3/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-age-index.md).

## Installation [_installation_54]

For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-age`. See [Working with plugins](/reference/working-with-plugins.md) for more details.

## Getting help [_getting_help_123]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-age). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_123]

A simple filter for calculating the age of an event.

This filter calculates the age of an event by subtracting the event timestamp from the current timestamp. You can use this plugin with the [`drop` filter plugin](/reference/plugins-filters-drop.md) to drop Logstash events that are older than some threshold.

```ruby
filter {
  age {}
  if [@metadata][age] > 86400 {
    drop {}
  }
}
```

## Age Filter Configuration Options [plugins-filters-age-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-age-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`target`](#plugins-filters-age-target) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-age-common-options) for a list of options supported by all filter plugins.

### `target` [plugins-filters-age-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"[@metadata][age]"`

Define the target field for the event age, in seconds.

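For example, a small sketch that writes the age to a visible field instead of `@metadata` (the field name is illustrative):

```ruby
filter {
  age {
    target => "[event][age_seconds]"   # illustrative field name; the default is [@metadata][age]
  }
}
```
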
## Common options [plugins-filters-age-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-age-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-age-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-age-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-age-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-age-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-age-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-age-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-age-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  age {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  age {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-age-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  age {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  age {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-age-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-age-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 age filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  age {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-age-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular interval. Optional.

### `remove_field` [plugins-filters-age-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:

```json
filter {
  age {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  age {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-age-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  age {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  age {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@@ -1,757 +0,0 @@

---
navigation_title: "aggregate"
mapped_pages:
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html
---

# Aggregate filter plugin [plugins-filters-aggregate]

* Plugin version: v2.10.0
* Released on: 2021-10-11
* [Changelog](https://github.com/logstash-plugins/logstash-filter-aggregate/blob/v2.10.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-aggregate-index.md).

## Getting help [_getting_help_124]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-aggregate). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [plugins-filters-aggregate-description]

The aim of this filter is to aggregate information available among several events (typically log lines) belonging to the same task, and finally push the aggregated information into the final task event.

You should be very careful to set Logstash filter workers to 1 (`-w 1` flag) for this filter to work correctly; otherwise events may be processed out of sequence and unexpected results will occur.

## Example #1 [plugins-filters-aggregate-example1]

* with these given logs:

```ruby
INFO - 12345 - TASK_START - start
INFO - 12345 - SQL - sqlQuery1 - 12
INFO - 12345 - SQL - sqlQuery2 - 34
INFO - 12345 - TASK_END - end
```

* you can aggregate "sql duration" for the whole task with this configuration:

```ruby
filter {
  grok {
    match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
  }

  if [logger] == "TASK_START" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] = 0"
      map_action => "create"
    }
  }

  if [logger] == "SQL" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] += event.get('duration')"
      map_action => "update"
    }
  }

  if [logger] == "TASK_END" {
    aggregate {
      task_id => "%{taskid}"
      code => "event.set('sql_duration', map['sql_duration'])"
      map_action => "update"
      end_of_task => true
      timeout => 120
    }
  }
}
```

* the final event then looks like:

```ruby
{
  "message" => "INFO - 12345 - TASK_END - end message",
  "sql_duration" => 46
}
```

the field `sql_duration` is added and contains the sum of all SQL query durations.

## Example #2 : no start event [plugins-filters-aggregate-example2]

* If you have the same logs as example #1, but without a start log:

```ruby
INFO - 12345 - SQL - sqlQuery1 - 12
INFO - 12345 - SQL - sqlQuery2 - 34
INFO - 12345 - TASK_END - end
```

* you can also aggregate "sql duration" with a slightly different configuration:

```ruby
filter {
  grok {
    match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
  }

  if [logger] == "SQL" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] ||= 0 ; map['sql_duration'] += event.get('duration')"
    }
  }

  if [logger] == "TASK_END" {
    aggregate {
      task_id => "%{taskid}"
      code => "event.set('sql_duration', map['sql_duration'])"
      end_of_task => true
      timeout => 120
    }
  }
}
```

* the final event is exactly the same as in example #1
* the key point is the `||=` ruby operator. It initializes the *sql_duration* map entry to 0 only if the entry is not already initialized

## Example #3 : no end event [plugins-filters-aggregate-example3]

Third use case: you have no specific end event.

A typical case is aggregating or tracking user behaviour. We can track a user by their ID through the events; however, once the user stops interacting, the events stop coming in. There is no specific event indicating the end of the user's interaction.

In this case, we can enable the option *push_map_as_event_on_timeout* to push the aggregation map as a new event when a timeout occurs. In addition, we can enable *timeout_code* to execute code on the populated timeout event. We can also add *timeout_task_id_field* so we can correlate the task_id, which in this case would be the user's ID.

* Given these logs:

```ruby
INFO - 12345 - Clicked One
INFO - 12345 - Clicked Two
INFO - 12345 - Clicked Three
```

* You can aggregate the amount of clicks the user did like this:

```ruby
filter {
  grok {
    match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:user_id} - %{GREEDYDATA:msg_text}" ]
  }

  aggregate {
    task_id => "%{user_id}"
    code => "map['clicks'] ||= 0; map['clicks'] += 1;"
    push_map_as_event_on_timeout => true
    timeout_task_id_field => "user_id"
    timeout => 600 # 10 minutes timeout
    timeout_tags => ['_aggregatetimeout']
    timeout_code => "event.set('several_clicks', event.get('clicks') > 1)"
  }
}
```

* After ten minutes, this will yield an event like:

```json
{
  "user_id": "12345",
  "clicks": 3,
  "several_clicks": true,
  "tags": [
    "_aggregatetimeout"
  ]
}
```

## Example #4 : no end event and tasks come one after the other [plugins-filters-aggregate-example4]

Fourth use case: like example #3, you have no specific end event, but also, tasks come one after the other.

That is to say: tasks are not interleaved. All task1 events come, then all task2 events come, and so on.

In that case, you don't want to wait for the task timeout to flush the aggregation map.

* A typical case is aggregating results from the jdbc input plugin.
* Given that you have this SQL query: `SELECT country_name, town_name FROM town`
* Using the jdbc input plugin, you get these 3 events:

```json
{ "country_name": "France", "town_name": "Paris" }
{ "country_name": "France", "town_name": "Marseille" }
{ "country_name": "USA", "town_name": "New-York" }
```

* And you would like to push these 2 aggregated events into Elasticsearch:

```json
{ "country_name": "France", "towns": [ {"town_name": "Paris"}, {"town_name": "Marseille"} ] }
{ "country_name": "USA", "towns": [ {"town_name": "New-York"} ] }
```

* You can do that using the `push_previous_map_as_event` aggregate plugin option:

```ruby
filter {
  aggregate {
    task_id => "%{country_name}"
    code => "
      map['country_name'] ||= event.get('country_name')
      map['towns'] ||= []
      map['towns'] << {'town_name' => event.get('town_name')}
      event.cancel()
    "
    push_previous_map_as_event => true
    timeout => 3
  }
}
```

* The key point is that each time the aggregate plugin detects a new `country_name`, it pushes the previous aggregate map as a new Logstash event, and then creates a new empty map for the next country
* When the 3s timeout comes, the last aggregate map is pushed as a new event
* Initial events (which are not aggregated) are dropped because they are no longer useful (thanks to `event.cancel()`)
* Last point: if a field is not present in every event (say a "town_postcode" field), the `||=` operator lets you push the first non-null value into the aggregate map. Example: `map['town_postcode'] ||= event.get('town_postcode')`

## Example #5 : no end event and push events as soon as possible [plugins-filters-aggregate-example5]

Fifth use case: like example #3, there is no end event.

Events keep coming for an indefinite time and you want to push the aggregation map as soon as possible after the last user interaction, without waiting for the `timeout`.

This allows the aggregated events to be pushed closer to real time.

A typical case is aggregating or tracking user behaviour.

We can track a user by their ID through the events; however, once the user stops interacting, the events stop coming in.

There is no specific event indicating the end of the user's interaction.

The user interaction will be considered as ended when no events for the specified user (task_id) arrive after the specified `inactivity_timeout`.

If the user continues interacting for longer than `timeout` seconds (since the first event), the aggregation map will still be deleted and pushed as a new event when the timeout occurs.

The difference from example #3 is that the events will be pushed as soon as the user stops interacting for `inactivity_timeout` seconds, instead of waiting for the end of `timeout` seconds since the first event.

In this case, we can enable the option *push_map_as_event_on_timeout* to push the aggregation map as a new event when the inactivity timeout occurs.

In addition, we can enable *timeout_code* to execute code on the populated timeout event.

We can also add *timeout_task_id_field* so we can correlate the task_id, which in this case would be the user's ID.

* Given these logs:

```ruby
INFO - 12345 - Clicked One
INFO - 12345 - Clicked Two
INFO - 12345 - Clicked Three
```

* You can aggregate the amount of clicks the user did like this:

```ruby
filter {
  grok {
    match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:user_id} - %{GREEDYDATA:msg_text}" ]
  }
  aggregate {
    task_id => "%{user_id}"
    code => "map['clicks'] ||= 0; map['clicks'] += 1;"
    push_map_as_event_on_timeout => true
    timeout_task_id_field => "user_id"
    timeout => 3600 # 1 hour timeout, user activity will be considered finished one hour after the first event, even if events keep coming
    inactivity_timeout => 300 # 5 minutes timeout, user activity will be considered finished if no new events arrive 5 minutes after the last event
    timeout_tags => ['_aggregatetimeout']
    timeout_code => "event.set('several_clicks', event.get('clicks') > 1)"
  }
}
```

* After five minutes of inactivity or one hour since the first event, this will yield an event like:

```json
{
  "user_id": "12345",
  "clicks": 3,
  "several_clicks": true,
  "tags": [
    "_aggregatetimeout"
  ]
}
```

## How it works [plugins-filters-aggregate-howitworks]

* the filter needs a "task_id" to correlate events (log lines) of the same task
* at the task beginning, the filter creates a map attached to the task_id
* for each event, you can execute code using *event* and *map* (for instance, copy an event field to the map)
* in the final event, you can execute some last code (for instance, add map data to the final event)
* after the final event, the map attached to the task is deleted (thanks to `end_of_task => true`)
* an aggregate map is tied to one task_id value, which is tied to one task_id pattern. So if you have 2 filters with different task_id patterns, even if you have the same task_id value, they won't share the same aggregate map.
* in one filter configuration, it is recommended to define a timeout option to protect the feature against unterminated tasks. It tells the filter to delete expired maps
* if no timeout is defined, by default, all maps older than 1800 seconds are automatically deleted
* all timeout options have to be defined in only one aggregate filter per task_id pattern (per pipeline). Timeout options are: timeout, inactivity_timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_timestamp_field, timeout_task_id_field, timeout_tags
* if `code` execution raises an exception, the error is logged and the event is tagged *_aggregateexception*

## Use Cases [plugins-filters-aggregate-usecases]

* extract some cool metrics from task logs and push them into the final task log event (like in examples #1 and #2)
* extract error information from any task log line, and push it into the final task event (to get a final event with all error information, if any)
* extract all back-end calls as a list, and push this list into the final task event (to get a task profile)
* extract all http headers logged across several lines, and push this list into the final task event (complete http request info)
* for every back-end call, collect call details available on several lines, analyse them, and finally tag the final back-end call log line (error, timeout, business-warning, …)
* finally, the task id can be any correlation id matching your needs: it can be a session id, a file path, …

## Aggregate Filter Configuration Options [plugins-filters-aggregate-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-aggregate-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`aggregate_maps_path`](#plugins-filters-aggregate-aggregate_maps_path) | [string](/reference/configuration-file-structure.md#string), a valid filesystem path | No |
| [`code`](#plugins-filters-aggregate-code) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`end_of_task`](#plugins-filters-aggregate-end_of_task) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`inactivity_timeout`](#plugins-filters-aggregate-inactivity_timeout) | [number](/reference/configuration-file-structure.md#number) | No |
| [`map_action`](#plugins-filters-aggregate-map_action) | [string](/reference/configuration-file-structure.md#string), one of `["create", "update", "create_or_update"]` | No |
| [`push_map_as_event_on_timeout`](#plugins-filters-aggregate-push_map_as_event_on_timeout) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`push_previous_map_as_event`](#plugins-filters-aggregate-push_previous_map_as_event) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`task_id`](#plugins-filters-aggregate-task_id) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`timeout`](#plugins-filters-aggregate-timeout) | [number](/reference/configuration-file-structure.md#number) | No |
| [`timeout_code`](#plugins-filters-aggregate-timeout_code) | [string](/reference/configuration-file-structure.md#string) | No |
| [`timeout_tags`](#plugins-filters-aggregate-timeout_tags) | [array](/reference/configuration-file-structure.md#array) | No |
| [`timeout_task_id_field`](#plugins-filters-aggregate-timeout_task_id_field) | [string](/reference/configuration-file-structure.md#string) | No |
| [`timeout_timestamp_field`](#plugins-filters-aggregate-timeout_timestamp_field) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-aggregate-common-options) for a list of options supported by all filter plugins.

### `aggregate_maps_path` [plugins-filters-aggregate-aggregate_maps_path]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The path to the file where aggregate maps are stored when Logstash stops, and loaded from when Logstash starts.

If not defined, aggregate maps will not be stored when Logstash stops and will be lost. Must be defined in only one aggregate filter per pipeline (as aggregate maps are shared at the pipeline level).

Example:

```ruby
filter {
  aggregate {
    aggregate_maps_path => "/path/to/.aggregate_maps"
  }
}
```

### `code` [plugins-filters-aggregate-code]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The code to execute to update the aggregated map, using the current event, or, conversely, the code to execute to update the event, using the aggregated map.

Available variables are:

`event`: the current Logstash event

`map`: the aggregated map associated with `task_id`, containing key/value pairs. The data structure is a ruby [Hash](http://ruby-doc.org/core-1.9.1/Hash.html)

`map_meta`: meta information associated with the aggregate map. It allows you to set a custom `timeout` or `inactivity_timeout`. It also allows you to get `creation_timestamp`, `lastevent_timestamp` and `task_id`.

`new_event_block`: block used to emit new Logstash events. See the second example below for how to use it.

When the option `push_map_as_event_on_timeout` is set to `true`, if you set `map_meta.timeout=0` in the `code` block, then the aggregated map is immediately pushed as a new event.

Example:

```ruby
filter {
  aggregate {
    code => "map['sql_duration'] += event.get('duration')"
  }
}
```

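Building on the `map_meta` note above, a hypothetical sketch (the field names are illustrative) that flushes a map as soon as enough events have been seen, instead of waiting for the timeout:

```ruby
filter {
  aggregate {
    task_id => "%{user_id}"
    code => "
      map['clicks'] ||= 0; map['clicks'] += 1
      map_meta.timeout = 0 if map['clicks'] >= 10   # push the map as an event right away
    "
    push_map_as_event_on_timeout => true
  }
}
```
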
To create additional events during the code execution, to be emitted immediately, you can use the `new_event_block.call(event)` function, like in the following example:

```ruby
filter {
  aggregate {
    code => "
      data = {:my_sql_duration => map['sql_duration']}
      generated_event = LogStash::Event.new(data)
      generated_event.set('my_other_field', 34)
      new_event_block.call(generated_event)
    "
  }
}
```

The parameter of the function `new_event_block.call` must be of type `LogStash::Event`. To create such an object, the constructor of the same class can be used: `LogStash::Event.new()`. `LogStash::Event.new()` can receive a parameter of type ruby [Hash](http://ruby-doc.org/core-1.9.1/Hash.html) to initialize the new event's fields.

### `end_of_task` [plugins-filters-aggregate-end_of_task]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Tell the filter that the task has ended and, therefore, to delete the aggregate map after the code execution.

### `inactivity_timeout` [plugins-filters-aggregate-inactivity_timeout]

* Value type is [number](/reference/configuration-file-structure.md#number)
* There is no default value for this setting.

The amount of seconds (since the last event) after which a task is considered as expired.

When a timeout occurs for a task, its aggregate map is evicted.

If *push_map_as_event_on_timeout* or *push_previous_map_as_event* is set to true, the task aggregation map is pushed as a new Logstash event.

`inactivity_timeout` can be defined for each "task_id" pattern.

`inactivity_timeout` must be lower than `timeout`.

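As a sketch, both bounds can be combined, as in example #5 above (the values and field names are illustrative):

```ruby
filter {
  aggregate {
    task_id => "%{user_id}"
    code => "map['clicks'] ||= 0; map['clicks'] += 1"
    push_map_as_event_on_timeout => true
    timeout => 3600             # hard cap: flush one hour after the first event
    inactivity_timeout => 300   # flush after 5 minutes without new events for this task_id
  }
}
```
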
### `map_action` [plugins-filters-aggregate-map_action]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"create_or_update"`

Tell the filter what to do with the aggregate map.

`"create"`: create the map, and execute the code only if the map wasn't created before

`"update"`: doesn't create the map, and executes the code only if the map was created before

`"create_or_update"`: create the map if it wasn't created before, and execute the code in all cases

### `push_map_as_event_on_timeout` [plugins-filters-aggregate-push_map_as_event_on_timeout]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

When this option is enabled, each time a task timeout is detected, the task aggregation map is pushed as a new Logstash event. This makes it possible to detect and process task timeouts in Logstash, and also to manage tasks that have no explicit end event.

### `push_previous_map_as_event` [plugins-filters-aggregate-push_previous_map_as_event]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

When this option is enabled, each time the aggregate plugin detects a new task id, it pushes the previous aggregate map as a new Logstash event, and then creates a new empty map for the next task.

::::{warning}
This option works fine only if tasks come one after the other. That means: all task1 events, then all task2 events, etc.
::::

### `task_id` [plugins-filters-aggregate-task_id]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The expression defining the task ID to correlate logs.

This value must uniquely identify the task.

Example:

```ruby
filter {
  aggregate {
    task_id => "%{type}%{my_task_id}"
  }
}
```

### `timeout` [plugins-filters-aggregate-timeout]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `1800`

The amount of seconds (since the first event) after which a task is considered as expired.

When a timeout occurs for a task, its aggregate map is evicted.

If *push_map_as_event_on_timeout* or *push_previous_map_as_event* is set to true, the task aggregation map is pushed as a new Logstash event.

The timeout can be defined for each "task_id" pattern.

### `timeout_code` [plugins-filters-aggregate-timeout_code]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The code to execute to complete the generated timeout event, when `push_map_as_event_on_timeout` or `push_previous_map_as_event` is set to true. The code block has access to the newly generated timeout event, which is pre-populated with the aggregation map.

If `timeout_task_id_field` is set, the event is also populated with the task_id value.

Example:

```ruby
filter {
  aggregate {
    timeout_code => "event.set('state', 'timeout')"
  }
}
```

### `timeout_tags` [plugins-filters-aggregate-timeout_tags]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

Defines tags to add when a timeout event is generated and yielded.

Example:

```ruby
filter {
  aggregate {
    timeout_tags => ["aggregate_timeout"]
  }
}
```

### `timeout_task_id_field` [plugins-filters-aggregate-timeout_task_id_field]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

This option indicates the field of the generated timeout event where the current "task_id" value will be set. This can help correlate which tasks have timed out.

By default, if this option is not set, the task id value won't be set in the generated timeout event.

Example:

```ruby
filter {
  aggregate {
    timeout_task_id_field => "task_id"
  }
}
```

### `timeout_timestamp_field` [plugins-filters-aggregate-timeout_timestamp_field]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

By default, the timeout is computed using the system time where Logstash is running.

When this option is set, the timeout is computed using the event timestamp field indicated in this option. It means that when the first event arrives at the aggregate filter and induces a map creation, the map creation time will be equal to this event's timestamp. Then, each time a new event arrives at the aggregate filter, the event timestamp is compared to the map creation time to check whether a timeout happened.

This option is particularly useful when processing old logs with the option `push_map_as_event_on_timeout => true`. It allows aggregated events to be generated based on timeouts in the old logs, where the system time would be inappropriate.

Warning: for this option to work properly, it must be set on the first aggregate filter.

Example:

```ruby
filter {
  aggregate {
    timeout_timestamp_field => "@timestamp"
  }
}
```

## Common options [plugins-filters-aggregate-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-aggregate-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-aggregate-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-aggregate-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-aggregate-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-aggregate-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-aggregate-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-aggregate-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-aggregate-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  aggregate {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  aggregate {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-aggregate-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  aggregate {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  aggregate {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-aggregate-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-aggregate-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 aggregate filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  aggregate {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-aggregate-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular interval. Optional.

### `remove_field` [plugins-filters-aggregate-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:

```json
filter {
  aggregate {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  aggregate {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-aggregate-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  aggregate {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  aggregate {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@@ -1,283 +0,0 @@

---
navigation_title: "alter"
mapped_pages:
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-alter.html
---

# Alter filter plugin [plugins-filters-alter]

* Plugin version: v3.0.3
* Released on: 2017-11-07
* [Changelog](https://github.com/logstash-plugins/logstash-filter-alter/blob/v3.0.3/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-alter-index.md).

## Installation [_installation_55]

For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-alter`. See [Working with plugins](/reference/working-with-plugins.md) for more details.

## Getting help [_getting_help_125]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-alter). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_124]

The alter filter allows you to do general alterations to fields that are not included in the normal mutate filter.

::::{note}
The functionality provided by this plugin is likely to be merged into the *mutate* filter in future versions.
::::

## Alter Filter Configuration Options [plugins-filters-alter-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-alter-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`coalesce`](#plugins-filters-alter-coalesce) | [array](/reference/configuration-file-structure.md#array) | No |
| [`condrewrite`](#plugins-filters-alter-condrewrite) | [array](/reference/configuration-file-structure.md#array) | No |
| [`condrewriteother`](#plugins-filters-alter-condrewriteother) | [array](/reference/configuration-file-structure.md#array) | No |

Also see [Common options](#plugins-filters-alter-common-options) for a list of options supported by all filter plugins.

### `coalesce` [plugins-filters-alter-coalesce]

* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

Sets the value of `field_name` to the first non-null expression among its arguments.

Example:

```ruby
filter {
  alter {
    coalesce => [
      "field_name", "value1", "value2", "value3", ...
    ]
  }
}
```

### `condrewrite` [plugins-filters-alter-condrewrite]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Change the content of the field to the specified value if the actual content is equal to the expected one.
|
||||
|
||||
Example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
alter {
|
||||
condrewrite => [
|
||||
"field_name", "expected_value", "new_value",
|
||||
"field_name2", "expected_value2", "new_value2",
|
||||
....
|
||||
]
|
||||
}
|
||||
}
|
||||
```
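For instance, a minimal sketch (the `useragent` field and its values are hypothetical) that normalizes a placeholder value:

```ruby
filter {
  alter {
    # Hypothetical field: rewrite the literal "-" to "unknown",
    # leaving any other value untouched.
    condrewrite => [
      "useragent", "-", "unknown"
    ]
  }
}
```

Only events whose `useragent` equals `-` are modified; all other events pass through unchanged.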
|
||||
|
||||
|
||||
### `condrewriteother` [plugins-filters-alter-condrewriteother]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Change the content of the field to the specified value if the content of another field is equal to the expected one.
|
||||
|
||||
Example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
alter {
|
||||
condrewriteother => [
|
||||
"field_name", "expected_value", "field_name_to_change", "value",
|
||||
"field_name2", "expected_value2", "field_name_to_change2", "value2",
|
||||
....
|
||||
]
|
||||
}
|
||||
}
|
||||
```
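For instance, a minimal sketch (field names and values are hypothetical) that rewrites one field based on the value of another:

```ruby
filter {
  alter {
    # Hypothetical fields: when `env` equals "prod",
    # overwrite `alert_level` with "high".
    condrewriteother => [
      "env", "prod", "alert_level", "high"
    ]
  }
}
```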
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-alter-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-alter-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-alter-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-alter-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-alter-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-alter-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-alter-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-alter-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-alter-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
alter {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
alter {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-alter-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
alter {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
alter {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-alter-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-alter-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 alter filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
alter {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-alter-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at regular intervals. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-alter-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
alter {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
alter {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-alter-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
alter {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
alter {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,284 +0,0 @@
|
|||
---
|
||||
navigation_title: "bytes"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-bytes.html
|
||||
---
|
||||
|
||||
# Bytes filter plugin [plugins-filters-bytes]
|
||||
|
||||
|
||||
* Plugin version: v1.0.3
|
||||
* Released on: 2020-08-18
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-bytes/blob/v1.0.3/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-bytes-index.md).
|
||||
|
||||
## Installation [_installation_56]
|
||||
|
||||
For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-bytes`. See [Working with plugins](/reference/working-with-plugins.md) for more details.
|
||||
|
||||
|
||||
## Getting help [_getting_help_126]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-bytes). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_125]
|
||||
|
||||
Parse string representations of computer storage sizes, such as "123 MB" or "5.6gb", into their numeric value in bytes.
|
||||
|
||||
This plugin understands:
|
||||
|
||||
* bytes ("B")
|
||||
* kilobytes ("KB" or "kB")
|
||||
* megabytes ("MB", "mb", or "mB")
|
||||
* gigabytes ("GB", "gb", or "gB")
|
||||
* terabytes ("TB", "tb", or "tB")
|
||||
* petabytes ("PB", "pb", or "pB")
|
||||
|
||||
|
||||
## Examples [plugins-filters-bytes-examples]
|
||||
|
||||
| Input string | Conversion method | Numeric value in bytes |
|
||||
| --- | --- | --- |
|
||||
| 40 | `binary` or `metric` | 40 |
|
||||
| 40B | `binary` or `metric` | 40 |
|
||||
| 40 B | `binary` or `metric` | 40 |
|
||||
| 40KB | `binary` | 40960 |
|
||||
| 40kB | `binary` | 40960 |
|
||||
| 40KB | `metric` | 40000 |
|
||||
| 40.5KB | `binary` | 41472 |
|
||||
| 40kb | `binary` | 5120 |
|
||||
| 40Kb | `binary` | 5120 |
|
||||
| 10 MB | `binary` | 10485760 |
|
||||
| 10 mB | `binary` | 10485760 |
|
||||
| 10 mb | `binary` | 10485760 |
|
||||
| 10 Mb | `binary` | 1310720 |
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
bytes {
|
||||
source => "my_bytes_string_field"
|
||||
target => "my_bytes_numeric_field"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## Bytes Filter Configuration Options [plugins-filters-bytes-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-bytes-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`source`](#plugins-filters-bytes-source) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`target`](#plugins-filters-bytes-target) | [string](/reference/configuration-file-structure.md#string) | Yes |
|
||||
| [`conversion_method`](#plugins-filters-bytes-conversion_method) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`decimal_separator`](#plugins-filters-bytes-decimal_separator) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-bytes-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `source` [plugins-filters-bytes-source]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"message"`

Name of the source field that contains the storage size.
|
||||
|
||||
|
||||
### `target` [plugins-filters-bytes-target]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
|
||||
Name of the target field that will contain the storage size in bytes.
|
||||
|
||||
|
||||
### `conversion_method` [plugins-filters-bytes-conversion_method]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Value can be any of: `binary`, `metric`
|
||||
* Default value is `binary`
|
||||
|
||||
Which conversion method to use when converting to bytes. `binary` uses `1K = 1024B`. `metric` uses `1K = 1000B`.
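For instance, a sketch that parses sizes using metric (base-1000) conversion, so `"40KB"` becomes `40000` rather than `40960` (the field names here are hypothetical):

```ruby
filter {
  bytes {
    source => "disk_usage"          # hypothetical source field
    target => "disk_usage_bytes"    # hypothetical target field
    conversion_method => "metric"   # 1K = 1000B instead of 1024B
  }
}
```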
|
||||
|
||||
|
||||
### `decimal_separator` [plugins-filters-bytes-decimal_separator]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `.`
|
||||
|
||||
Separator, if any, used as the decimal. This value is only used if the plugin cannot guess the decimal separator by looking at the string in the `source` field.
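For example, to parse sizes written with a comma as the decimal separator (as in many European locales), such as `"1,5 MB"` (field names are hypothetical):

```ruby
filter {
  bytes {
    source => "size"           # hypothetical source field
    target => "size_bytes"     # hypothetical target field
    decimal_separator => ","   # treat "1,5 MB" as 1.5 MB
  }
}
```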
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-bytes-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-bytes-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-bytes-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-bytes-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-bytes-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-bytes-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-bytes-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-bytes-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-bytes-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
bytes {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
bytes {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-bytes-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
bytes {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
bytes {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-bytes-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-bytes-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 bytes filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
bytes {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-bytes-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at regular intervals. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-bytes-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
bytes {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
bytes {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-bytes-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
bytes {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
bytes {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,277 +0,0 @@
|
|||
---
|
||||
navigation_title: "cidr"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-cidr.html
|
||||
---
|
||||
|
||||
# Cidr filter plugin [plugins-filters-cidr]
|
||||
|
||||
|
||||
* Plugin version: v3.1.3
|
||||
* Released on: 2019-09-18
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-cidr/blob/v3.1.3/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-cidr-index.md).
|
||||
|
||||
## Getting help [_getting_help_127]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-cidr). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_126]
|
||||
|
||||
The CIDR filter is for checking IP addresses in events against a list of network blocks that might contain them. Multiple addresses can be checked against multiple networks; any match succeeds. Upon success, additional tags and/or fields can be added to the event.
|
||||
|
||||
|
||||
## Cidr Filter Configuration Options [plugins-filters-cidr-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-cidr-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`address`](#plugins-filters-cidr-address) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`network`](#plugins-filters-cidr-network) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`network_path`](#plugins-filters-cidr-network_path) | a valid filesystem path | No |
|
||||
| [`refresh_interval`](#plugins-filters-cidr-refresh_interval) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`separator`](#plugins-filters-cidr-separator) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-cidr-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `address` [plugins-filters-cidr-address]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
The IP address(es) to check with. Example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
cidr {
|
||||
add_tag => [ "testnet" ]
|
||||
address => [ "%{src_ip}", "%{dst_ip}" ]
|
||||
network => [ "192.0.2.0/24" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
### `network` [plugins-filters-cidr-network]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
The IP network(s) to check against. Example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
cidr {
|
||||
add_tag => [ "linklocal" ]
|
||||
address => [ "%{clientip}" ]
|
||||
network => [ "169.254.0.0/16", "fe80::/64" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
### `network_path` [plugins-filters-cidr-network_path]
|
||||
|
||||
* Value type is [path](/reference/configuration-file-structure.md#path)
|
||||
* There is no default value for this setting.
|
||||
|
||||
The full path of the external file containing the networks the filter should check with. Networks are separated by a separator character defined in `separator`.
|
||||
|
||||
```ruby
192.168.1.0/24
192.167.0.0/16
```

::::{note}
It is an error to specify both `network` and `network_path`.
::::
|
||||
|
||||
|
||||
### `refresh_interval` [plugins-filters-cidr-refresh_interval]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `600`
|
||||
|
||||
When using an external file, this setting will indicate how frequently (in seconds) Logstash will check the file for updates.
|
||||
|
||||
|
||||
### `separator` [plugins-filters-cidr-separator]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `\n`
|
||||
|
||||
Separator character used for parsing networks from the external file specified by `network_path`. Defaults to newline `\n` character.
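Putting these options together, a sketch that reads networks from an external, comma-separated file and re-checks it every five minutes (the file path and `clientip` field are hypothetical):

```ruby
filter {
  cidr {
    add_tag          => [ "internal" ]
    address          => [ "%{clientip}" ]             # hypothetical source field
    network_path     => "/etc/logstash/networks.txt"  # hypothetical file path
    separator        => ","    # networks in the file are comma-separated
    refresh_interval => 300    # re-read the file every 5 minutes
  }
}
```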
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-cidr-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-cidr-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-cidr-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-cidr-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-cidr-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-cidr-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-cidr-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-cidr-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-cidr-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
cidr {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
cidr {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-cidr-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
cidr {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
cidr {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-cidr-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-cidr-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 cidr filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
cidr {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-cidr-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at regular intervals. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-cidr-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
cidr {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
cidr {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-cidr-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
cidr {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
cidr {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,387 +0,0 @@
|
|||
---
|
||||
navigation_title: "cipher"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-cipher.html
|
||||
---
|
||||
|
||||
# Cipher filter plugin [plugins-filters-cipher]
|
||||
|
||||
|
||||
* Plugin version: v4.0.3
|
||||
* Released on: 2022-06-21
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-cipher/blob/v4.0.3/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-cipher-index.md).
|
||||
|
||||
## Installation [_installation_57]
|
||||
|
||||
For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-cipher`. See [Working with plugins](/reference/working-with-plugins.md) for more details.
|
||||
|
||||
|
||||
## Getting help [_getting_help_128]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-cipher). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_127]
|
||||
|
||||
This filter parses a source field and applies a cipher or decipher before storing the result in the target field.
|
||||
|
||||
::::{note}
|
||||
Prior to version 4.0.1, this plugin was not thread-safe and could not safely be used with multiple pipeline workers.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
## Cipher Filter Configuration Options [plugins-filters-cipher-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-cipher-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`algorithm`](#plugins-filters-cipher-algorithm) | [string](/reference/configuration-file-structure.md#string) | Yes |
|
||||
| [`base64`](#plugins-filters-cipher-base64) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`cipher_padding`](#plugins-filters-cipher-cipher_padding) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`iv_random_length`](#plugins-filters-cipher-iv_random_length) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`key`](#plugins-filters-cipher-key) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`key_pad`](#plugins-filters-cipher-key_pad) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`key_size`](#plugins-filters-cipher-key_size) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`max_cipher_reuse`](#plugins-filters-cipher-max_cipher_reuse) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`mode`](#plugins-filters-cipher-mode) | [string](/reference/configuration-file-structure.md#string) | Yes |
|
||||
| [`source`](#plugins-filters-cipher-source) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`target`](#plugins-filters-cipher-target) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-cipher-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `algorithm` [plugins-filters-cipher-algorithm]
|
||||
|
||||
* This is a required setting.
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
The cipher algorithm to use for encryption and decryption operations.
|
||||
|
||||
A list of supported algorithms depends on the versions of Logstash, JRuby, and Java this plugin is running in, but can be obtained by running:
|
||||
|
||||
```sh
|
||||
cd $LOGSTASH_HOME # <-- your Logstash distribution root
|
||||
bin/ruby -ropenssl -e 'puts OpenSSL::Cipher.ciphers'
|
||||
```
|
||||
|
||||
|
||||
### `base64` [plugins-filters-cipher-base64]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
* Unless this option is disabled:
|
||||
|
||||
* When [`mode => encrypt`](#plugins-filters-cipher-mode), the result ciphertext will be `base64`-encoded before it is stored.
* When [`mode => decrypt`](#plugins-filters-cipher-mode), the source ciphertext will be `base64`-decoded before it is deciphered.
|
||||
|
||||
|
||||
|
||||
### `cipher_padding` [plugins-filters-cipher-cipher_padding]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
|
||||
* `0`: means `false`
|
||||
* `1`: means `true`
|
||||
|
||||
* There is no default value for this setting.
|
||||
|
||||
Enables or disables padding in encryption operations.
|
||||
|
||||
In encryption operations with block-ciphers, the input plaintext must be an *exact* multiple of the cipher’s block-size unless padding is enabled.
|
||||
|
||||
Disabling padding by setting this value to `0` will cause this plugin to fail to encrypt any input plaintext that doesn’t strictly adhere to the [`algorithm`](#plugins-filters-cipher-algorithm)'s block size requirements.
|
||||
|
||||
```ruby
|
||||
filter { cipher { cipher_padding => 0 }}
|
||||
```
|
||||
|
||||
|
||||
### `iv_random_length` [plugins-filters-cipher-iv_random_length]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* There is no default value for this setting.
|
||||
|
||||
In encryption operations, this plugin generates a random Initialization Vector (IV) per encryption operation. This is a standard best-practice to ensure that the resulting ciphertexts cannot be compared to infer equivalence of the source plaintext. This unique IV is then *prepended* to the resulting ciphertext before it is stored, ensuring it is available to any process that needs to decrypt it.
|
||||
|
||||
In decryption operations, the IV is assumed to have been prepended to the ciphertext, so this plugin needs to know the length of the IV in order to split the input appropriately.
|
||||
|
||||
The size of the IV is generally dependent on which [`algorithm`](#plugins-filters-cipher-algorithm) is used. AES Algorithms generally use a 16-byte IV:
|
||||
|
||||
```ruby
|
||||
filter { cipher { iv_random_length => 16 }}
|
||||
```
|
||||
|
||||
|
||||
### `key` [plugins-filters-cipher-key]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
The key to use for encryption and decryption operations.
|
||||
|
||||
::::{note}
|
||||
Please read the [UnlimitedStrengthCrypto topic](https://github.com/jruby/jruby/wiki/UnlimitedStrengthCrypto) in the [jruby](https://github.com/jruby/jruby) github repo if you see a runtime error that resembles:
|
||||
|
||||
`java.security.InvalidKeyException: Illegal key size: possibly you need to install Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for your JRE`
|
||||
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `key_pad` [plugins-filters-cipher-key_pad]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"\u0000"`
|
||||
|
||||
The character used to pad the key to the required [`key_size`](#plugins-filters-cipher-key_size).
|
||||
|
||||
|
||||
### `key_size` [plugins-filters-cipher-key_size]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `16`
|
||||
|
||||
The cipher’s required key size, which depends on which [`algorithm`](#plugins-filters-cipher-algorithm) you are using. If a [`key`](#plugins-filters-cipher-key) is specified with a shorter value, it will be padded with [`key_pad`](#plugins-filters-cipher-key_pad).
|
||||
|
||||
For example, AES-128 requires a 16-character key, while AES-256 requires a 32-character key:
|
||||
|
||||
```ruby
|
||||
filter { cipher { key_size => 16 }}
|
||||
```
|
||||
|
||||
|
||||
### `max_cipher_reuse` [plugins-filters-cipher-max_cipher_reuse]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `1`
|
||||
|
||||
If this value is set, the internal Cipher instance will be re-used up to `max_cipher_reuse` times before it is re-created from scratch. This is an option for efficiency where lots of data is being encrypted and decrypted using this filter. This lets the filter avoid creating new Cipher instances over and over for each encrypt/decrypt operation.
|
||||
|
||||
This setting is optional; with the default of `max_cipher_reuse => 1`, the Cipher instance is not re-used.
|
||||
|
||||
```ruby
|
||||
filter { cipher { max_cipher_reuse => 1000 }}
|
||||
```
|
||||
|
||||
|
||||
### `mode` [plugins-filters-cipher-mode]
|
||||
|
||||
* This is a required setting.
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
|
||||
* `encrypt`: encrypts a plaintext value into IV + ciphertext
|
||||
* `decrypt`: decrypts an IV + ciphertext value into plaintext
|
||||
|
||||
* There is no default value for this setting.
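Putting the options above together, a minimal encryption sketch (the key and field names are placeholders for illustration, not a hardened configuration):

```ruby
filter {
  cipher {
    mode             => "encrypt"
    algorithm        => "aes-256-cbc"   # must appear in OpenSSL::Cipher.ciphers
    key              => "0123456789abcdef0123456789abcdef"  # placeholder 32-char key
    key_size         => 32              # AES-256 requires a 32-byte key
    iv_random_length => 16              # AES uses a 16-byte IV
    source           => "message"            # plaintext in
    target           => "message_crypted"    # IV + ciphertext out
  }
}
```

A decrypting pipeline would mirror this configuration with `mode => "decrypt"` and the `source`/`target` fields swapped.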
|
||||
|
||||
|
||||
### `source` [plugins-filters-cipher-source]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"message"`
|
||||
|
||||
The name of the source field.
|
||||
|
||||
* When [`mode => encrypt`](#plugins-filters-cipher-mode), the `source` should be a field containing plaintext
|
||||
* When [`mode => decrypt`](#plugins-filters-cipher-mode), the `source` should be a field containing IV + ciphertext
|
||||
|
||||
For example, to use the `message` field (the default):
|
||||
|
||||
```ruby
|
||||
filter { cipher { source => "message" } }
|
||||
```
|
||||
|
||||
|
||||
### `target` [plugins-filters-cipher-target]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"message"`
|
||||
|
||||
The name of the target field to put the result:
|
||||
|
||||
* When [`mode => encrypt`](#plugins-filters-cipher-mode), the IV + ciphertext result will be stored in the `target` field
|
||||
* When [`mode => decrypt`](#plugins-filters-cipher-mode), the plaintext result will be stored in the `target` field
|
||||
|
||||
For example, to place the result into the `crypt` field:
|
||||
|
||||
```ruby
|
||||
filter { cipher { target => "crypt" } }
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-cipher-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-cipher-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-cipher-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-cipher-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-cipher-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-cipher-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-cipher-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-cipher-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-cipher-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
cipher {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
cipher {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-cipher-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
cipher {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
cipher {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-cipher-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-cipher-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 cipher filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
cipher {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-cipher-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at regular intervals. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-cipher-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
cipher {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
cipher {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-cipher-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
cipher {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
cipher {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,317 +0,0 @@
|
|||
---
|
||||
navigation_title: "clone"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-clone.html
|
||||
---
|
||||
|
||||
# Clone filter plugin [plugins-filters-clone]
|
||||
|
||||
|
||||
* Plugin version: v4.2.0
|
||||
* Released on: 2021-11-10
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-clone/blob/v4.2.0/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-clone-index.md).
|
||||
|
||||
## Getting help [_getting_help_129]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-clone). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_128]
|
||||
|
||||
The clone filter is for duplicating events. A clone will be created for each type in the clone list. The original event is left unchanged and a `type` field is added to the clone. Created events are inserted into the pipeline as normal events and will be processed by the remaining pipeline configuration starting from the filter that generated them (i.e. this plugin).
|
||||
|
||||
|
||||
## Event Metadata and the Elastic Common Schema (ECS) [_event_metadata_and_the_elastic_common_schema_ecs]
|
||||
|
||||
This plugin adds a tag to a cloned event. By default, the tag is stored in the `type` field. When ECS is enabled, the tag is stored in the `tags` array field.
|
||||
|
||||
Here’s how ECS compatibility mode affects output.
|
||||
|
||||
| ECS disabled | ECS `v1`, `v8` | Availability | Description |
|
||||
| --- | --- | --- | --- |
|
||||
| type | tags | *Always* | *a tag of cloned event* |
|
||||
|
||||
|
||||
## Clone Filter Configuration Options [plugins-filters-clone-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-clone-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`clones`](#plugins-filters-clone-clones) | [array](/reference/configuration-file-structure.md#array) | Yes |
|
||||
| [`ecs_compatibility`](#plugins-filters-clone-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-clone-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `clones` [plugins-filters-clone-clones]
|
||||
|
||||
* This is a required setting.
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* There is no default value for this setting.
|
||||
* When ECS is disabled, a new clone is created with its `type` field set to each value in this list.
* When ECS is enabled, a new clone is created with each value in this list appended to its `tags` field.
|
||||
|
||||
Note: setting an empty array will not create any clones. A warning message is logged.
|
||||
|
||||
|
||||
### `ecs_compatibility` [plugins-filters-clone-ecs_compatibility]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Supported values are:
|
||||
|
||||
* `disabled`: does not use ECS-compatible field names
|
||||
* `v1`, `v8`: uses fields that are compatible with Elastic Common Schema
|
||||
|
||||
* Default value depends on which version of Logstash is running:
|
||||
|
||||
* When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
|
||||
* Otherwise, the default value is `disabled`.
|
||||
|
||||
|
||||
Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md). The value of this setting affects the behavior of the [`clones`](#plugins-filters-clone-clones) option.
|
||||
|
||||
Example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
clone {
|
||||
clones => ["sun", "moon"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
ECS disabled
|
||||
|
||||
```text
|
||||
{
|
||||
"@version" => "1",
|
||||
"sequence" => 0,
|
||||
"message" => "Hello World!",
|
||||
"@timestamp" => 2021-03-24T11:20:36.226Z,
|
||||
"host" => "example.com"
|
||||
}
|
||||
{
|
||||
"@version" => "1",
|
||||
"sequence" => 0,
|
||||
"message" => "Hello World!",
|
||||
"@timestamp" => 2021-03-24T11:20:36.226Z,
|
||||
"type" => "sun",
|
||||
"host" => "example.com"
|
||||
}
|
||||
{
|
||||
"@version" => "1",
|
||||
"sequence" => 0,
|
||||
"message" => "Hello World!",
|
||||
"@timestamp" => 2021-03-24T11:20:36.226Z,
|
||||
"type" => "moon",
|
||||
"host" => "example.com"
|
||||
}
|
||||
```
|
||||
|
||||
ECS enabled
|
||||
|
||||
```text
|
||||
{
|
||||
"sequence" => 0,
|
||||
"@timestamp" => 2021-03-23T20:25:10.042Z,
|
||||
"message" => "Hello World!",
|
||||
"@version" => "1",
|
||||
"host" => "example.com"
|
||||
}
|
||||
{
|
||||
"tags" => [
|
||||
[0] "sun"
|
||||
],
|
||||
"sequence" => 0,
|
||||
"@timestamp" => 2021-03-23T20:25:10.042Z,
|
||||
"message" => "Hello World!",
|
||||
"@version" => "1",
|
||||
"host" => "example.com"
|
||||
}
|
||||
{
|
||||
"tags" => [
|
||||
[0] "moon"
|
||||
],
|
||||
"sequence" => 0,
|
||||
"@timestamp" => 2021-03-23T20:25:10.042Z,
|
||||
"message" => "Hello World!",
|
||||
"@version" => "1",
|
||||
"host" => "example.com"
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-clone-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-clone-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-clone-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-clone-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-clone-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-clone-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-clone-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-clone-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-clone-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
clone {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
clone {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-clone-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
clone {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
clone {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-clone-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-clone-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 clone filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
clone {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-clone-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at regular intervals. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-clone-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Fields names can be dynamic and include parts of the event using the `%{{field}}` Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
clone {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
clone {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-clone-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
clone {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
clone {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,345 +0,0 @@
---
navigation_title: "csv"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-csv.html
---

# Csv filter plugin [plugins-filters-csv]

* Plugin version: v3.1.1
* Released on: 2021-06-08
* [Changelog](https://github.com/logstash-plugins/logstash-filter-csv/blob/v3.1.1/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-csv-index.md).

## Getting help [_getting_help_130]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-csv). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_129]

The CSV filter takes an event field containing CSV data, parses it, and stores it as individual fields with optionally-specified field names. This filter can parse data with any separator, not just commas.

## Event Metadata and the Elastic Common Schema (ECS) [plugins-filters-csv-ecs_metadata]

The plugin behaves the same regardless of ECS compatibility, except giving a warning when ECS is enabled and `target` isn’t set.

::::{tip}
Set the `target` option to avoid potential schema conflicts.
::::

## Csv Filter Configuration Options [plugins-filters-csv-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-csv-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`autodetect_column_names`](#plugins-filters-csv-autodetect_column_names) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`autogenerate_column_names`](#plugins-filters-csv-autogenerate_column_names) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`columns`](#plugins-filters-csv-columns) | [array](/reference/configuration-file-structure.md#array) | No |
| [`convert`](#plugins-filters-csv-convert) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`ecs_compatibility`](#plugins-filters-csv-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
| [`quote_char`](#plugins-filters-csv-quote_char) | [string](/reference/configuration-file-structure.md#string) | No |
| [`separator`](#plugins-filters-csv-separator) | [string](/reference/configuration-file-structure.md#string) | No |
| [`skip_empty_columns`](#plugins-filters-csv-skip_empty_columns) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`skip_empty_rows`](#plugins-filters-csv-skip_empty_rows) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`skip_header`](#plugins-filters-csv-skip_header) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`source`](#plugins-filters-csv-source) | [string](/reference/configuration-file-structure.md#string) | No |
| [`target`](#plugins-filters-csv-target) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-csv-common-options) for a list of options supported by all filter plugins.


### `autodetect_column_names` [plugins-filters-csv-autodetect_column_names]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Define whether column names should be auto-detected from the header column or not. Defaults to false.

Logstash pipeline workers must be set to `1` for this option to work.


### `autogenerate_column_names` [plugins-filters-csv-autogenerate_column_names]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Define whether column names should be autogenerated or not. Defaults to true. If set to false, columns not having a header specified will not be parsed.


### `columns` [plugins-filters-csv-columns]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

Define a list of column names (in the order they appear in the CSV, as if it were a header line). If `columns` is not configured, or there are not enough columns specified, the default column names are "column1", "column2", etc. In the case that there are more columns in the data than specified in this column list, extra columns will be auto-numbered (e.g. "user_defined_1", "user_defined_2", "column3", "column4", etc.).


### `convert` [plugins-filters-csv-convert]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

Define a set of datatype conversions to be applied to columns. Possible conversions are integer, float, date, date_time, and boolean.

Example:

```ruby
filter {
  csv {
    convert => {
      "column1" => "integer"
      "column2" => "boolean"
    }
  }
}
```


### `ecs_compatibility` [plugins-filters-csv-ecs_compatibility]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

    * `disabled`: does not use ECS-compatible field names
    * `v1`: uses the value in `target` as field name

Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md). See [Event Metadata and the Elastic Common Schema (ECS)](#plugins-filters-csv-ecs_metadata) for detailed information.


### `quote_char` [plugins-filters-csv-quote_char]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"\""`

Define the character used to quote CSV fields. If this is not specified, the default is a double quote `"`. Optional.


### `separator` [plugins-filters-csv-separator]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `","`

Define the column separator value. If this is not specified, the default is a comma `,`. If you want to define a tabulation as a separator, you need to set the value to the actual tab character and not `\t`. Optional.
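
For instance, a minimal sketch of a pipeline parsing pipe-separated data (the separator and field names here are illustrative, not from the original docs):

```ruby
filter {
  csv {
    # "2021-06-08|app01|error" => timestamp, host, level
    separator => "|"
    columns   => [ "timestamp", "host", "level" ]
  }
}
```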

### `skip_empty_columns` [plugins-filters-csv-skip_empty_columns]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Define whether empty columns should be skipped. Defaults to false. If set to true, columns containing no value will not get set.


### `skip_empty_rows` [plugins-filters-csv-skip_empty_rows]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Define whether empty rows could potentially be skipped. Defaults to false. If set to true, rows containing no value will be tagged with "_csvskippedemptyfield". This tag can be referenced by users if they wish to cancel events using an *if* conditional statement.


### `skip_header` [plugins-filters-csv-skip_header]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Define whether the header should be skipped. Defaults to false. If set to true, the header will be skipped. This assumes that the header is not repeated within further rows, as such rows will also be skipped. If `skip_header` is set without `autodetect_column_names` being set, then `columns` should be set; this will result in the skipping of any row that exactly matches the specified column values. If `skip_header` and `autodetect_column_names` are both specified, then `columns` should not be specified; in this case `autodetect_column_names` fills the `columns` setting in the background, from the first event seen, and any subsequent rows that match what was autodetected will be skipped.

Logstash pipeline workers must be set to `1` for this option to work.


### `source` [plugins-filters-csv-source]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"message"`

The CSV data in the value of the `source` field will be expanded into a data structure.


### `target` [plugins-filters-csv-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Define the target field for placing the data. Defaults to writing to the root of the event.
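
As a minimal sketch combining `source` and `target` (the field names are illustrative), the parsed columns land under a dedicated object instead of the event root:

```ruby
filter {
  csv {
    # read CSV text from [payload] and place the columns under [csv]
    source  => "payload"
    target  => "csv"
    columns => [ "user", "score" ]
  }
}
```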

## Common options [plugins-filters-csv-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-csv-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-csv-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-csv-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-csv-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-csv-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-csv-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-csv-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-csv-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  csv {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  csv {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the `%{{host}}` piece replaced with the corresponding value from the event. The second example would also add a hardcoded field.


### `add_tag` [plugins-filters-csv-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  csv {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  csv {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would add the tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).


### `enable_metric` [plugins-filters-csv-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.


### `id` [plugins-filters-csv-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 csv filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  csv {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::


### `periodic_flush` [plugins-filters-csv-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at a regular interval. Optional.


### `remove_field` [plugins-filters-csv-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  csv {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  csv {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.


### `remove_tag` [plugins-filters-csv-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  csv {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  csv {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@ -1,423 +0,0 @@
---
navigation_title: "date"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
---

# Date filter plugin [plugins-filters-date]

* Plugin version: v3.1.15
* Released on: 2022-06-29
* [Changelog](https://github.com/logstash-plugins/logstash-filter-date/blob/v3.1.15/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-date-index.md).

## Getting help [_getting_help_131]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-date). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_130]

The date filter is used for parsing dates from fields, and then using that date or timestamp as the Logstash timestamp for the event.

For example, syslog events usually have timestamps like this:

```ruby
"Apr 17 09:32:01"
```

You would use the date format `MMM dd HH:mm:ss` to parse this.

The date filter is especially important for sorting events and for backfilling old data. If you don’t get the date correct in your event, then searching for events later will likely sort out of order.

In the absence of this filter, Logstash will choose a timestamp based on the first time it sees the event (at input time), if the timestamp is not already set in the event. For example, with file input, the timestamp is set to the time of each read.

## Date Filter Configuration Options [plugins-filters-date-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-date-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`locale`](#plugins-filters-date-locale) | [string](/reference/configuration-file-structure.md#string) | No |
| [`match`](#plugins-filters-date-match) | [array](/reference/configuration-file-structure.md#array) | No |
| [`tag_on_failure`](#plugins-filters-date-tag_on_failure) | [array](/reference/configuration-file-structure.md#array) | No |
| [`target`](#plugins-filters-date-target) | [string](/reference/configuration-file-structure.md#string) | No |
| [`timezone`](#plugins-filters-date-timezone) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-date-common-options) for a list of options supported by all filter plugins.


### `locale` [plugins-filters-date-locale]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Specify a locale to be used for date parsing, using either an IETF-BCP47 or POSIX language tag. Simple examples are `en`, `en-US` for BCP47, or `en_US` for POSIX.

The locale mostly needs to be set for parsing month names (pattern with `MMM`) and weekday names (pattern with `EEE`).

If not specified, the platform default will be used. For a non-English platform default, an English parser will also be used as a fallback mechanism.
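
For instance, a minimal sketch parsing a French-language timestamp (the `logdate` field and sample value are illustrative):

```ruby
filter {
  date {
    # "12 avril 2023 09:32:01" - the full month name needs the French locale
    match  => [ "logdate", "dd MMMM yyyy HH:mm:ss" ]
    locale => "fr"
  }
}
```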

### `match` [plugins-filters-date-match]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

An array with the field name first, and format patterns following: `[ field, formats... ]`

If your time field has multiple possible formats, you can do this:

```ruby
match => [ "logdate", "MMM dd yyyy HH:mm:ss",
           "MMM d yyyy HH:mm:ss", "ISO8601" ]
```

The above will match a syslog (rfc3164) or `iso8601` timestamp.

There are a few special exceptions. The following format literals exist to help you save time and ensure correctness of date parsing.

* `ISO8601` - should parse any valid ISO8601 timestamp, such as `2011-04-19T03:44:01.103Z`
* `UNIX` - will parse a **float or int** value expressing unix time in seconds since epoch, like 1326149001.132 as well as 1326149001
* `UNIX_MS` - will parse an **int** value expressing unix time in milliseconds since epoch, like 1366125117000
* `TAI64N` - will parse tai64n time values

For example, if you have a field `logdate` with a value that looks like `Aug 13 2010 00:03:44`, you would use this configuration:

```ruby
filter {
  date {
    match => [ "logdate", "MMM dd yyyy HH:mm:ss" ]
  }
}
```

If your field is nested in your structure, you can use the nested syntax `[foo][bar]` to match its value. For more information, please refer to [Field references](/reference/event-dependent-configuration.md#logstash-config-field-references).

**More details on the syntax**

The syntax used for parsing date and time text uses letters to indicate the kind of time value (month, minute, etc), and a repetition of letters to indicate the form of that value (2-digit month, full month name, etc).

Here’s what you can use to parse dates and times:

y
: year

yyyy
: full year number. Example: `2015`.

yy
: two-digit year. Example: `15` for the year 2015.

M
: month of the year

M
: minimal-digit month. Example: `1` for January and `12` for December.

MM
: two-digit month, zero-padded if needed. Example: `01` for January and `12` for December.

MMM
: abbreviated month text. Example: `Jan` for January. Note: The language used depends on your locale. See the `locale` setting for how to change the language.

MMMM
: full month text. Example: `January`. Note: The language used depends on your locale.

d
: day of the month

d
: minimal-digit day. Example: `1` for the 1st of the month.

dd
: two-digit day, zero-padded if needed. Example: `01` for the 1st of the month.

H
: hour of the day (24-hour clock)

H
: minimal-digit hour. Example: `0` for midnight.

HH
: two-digit hour, zero-padded if needed. Example: `00` for midnight.

m
: minutes of the hour (60 minutes per hour)

m
: minimal-digit minutes. Example: `0`.

mm
: two-digit minutes, zero-padded if needed. Example: `00`.

s
: seconds of the minute (60 seconds per minute)

s
: minimal-digit seconds. Example: `0`.

ss
: two-digit seconds, zero-padded if needed. Example: `00`.

S
: fraction of a second. **Maximum precision is milliseconds (`SSS`). Beyond that, zeroes are appended.**

S
: tenths of a second. Example: `0` for a subsecond value `012`

SS
: hundredths of a second. Example: `01` for a subsecond value `012`

SSS
: thousandths of a second. Example: `012` for a subsecond value `012`

Z
: time zone offset or identity

Z
: Timezone offset structured as HHmm (hour and minutes offset from Zulu/UTC). Example: `-0700`.

ZZ
: Timezone offset structured as HH:mm (colon in between hour and minute offsets). Example: `-07:00`.

ZZZ
: Timezone identity. Example: `America/Los_Angeles`. Note: Valid IDs are listed on the [Joda.org available time zones page](http://joda-time.sourceforge.net/timezones.html).

z
: time zone names. **Time zone names (*z*) cannot be parsed.**

w
: week of the year

w
: minimal-digit week. Example: `1`.

ww
: two-digit week, zero-padded if needed. Example: `01`.

D
: day of the year

e
: day of the week (number)

E
: day of the week (text)

E, EE, EEE
: Abbreviated day of the week. Example: `Mon`, `Tue`, `Wed`, `Thu`, `Fri`, `Sat`, `Sun`. Note: The actual language of this will depend on your locale.

EEEE
: The full text day of the week. Example: `Monday`, `Tuesday`, … Note: The actual language of this will depend on your locale.

For non-formatting syntax, you’ll need to put single-quote characters around the value. For example, if you were parsing ISO8601 time, "2015-01-01T01:12:23", that little "T" isn’t a valid time format, and you want to say "literally, a T". Your format would be this: `yyyy-MM-dd'T'HH:mm:ss`

Other less common date units, such as era (G), century (C), and am/pm (a), can be learned about on the [joda-time documentation](http://www.joda.org/joda-time/key_format.html).
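
As a minimal sketch of the quoting rule above (assuming a `logdate` field holding values such as `2015-01-01T01:12:23`):

```ruby
filter {
  date {
    # the quoted 'T' is matched literally rather than read as a format letter
    match => [ "logdate", "yyyy-MM-dd'T'HH:mm:ss" ]
  }
}
```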

### `tag_on_failure` [plugins-filters-date-tag_on_failure]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["_dateparsefailure"]`

Append values to the `tags` field when there has been no successful match.


### `target` [plugins-filters-date-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"@timestamp"`

Store the matching timestamp into the given target field. If not provided, defaults to updating the `@timestamp` field of the event.


### `timezone` [plugins-filters-date-timezone]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Specify a time zone canonical ID to be used for date parsing. The valid IDs are listed on the [Joda.org available time zones page](http://joda-time.sourceforge.net/timezones.html). This is useful in case the time zone cannot be extracted from the value and is not the platform default. If this is not specified, the platform default will be used. A canonical ID is good, as it takes care of daylight saving time for you. For example, `America/Los_Angeles` or `Europe/Paris` are valid IDs. This field can be dynamic and include parts of the event using the `%{{field}}` syntax.
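
For example, a sketch that parses a local timestamp recorded without any offset information (the field name is illustrative):

```ruby
filter {
  date {
    # timestamps like "Apr 17 09:32:01" carry no zone info,
    # so tell the parser which zone they were recorded in
    match    => [ "logdate", "MMM dd HH:mm:ss" ]
    timezone => "America/Los_Angeles"
  }
}
```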

## Common options [plugins-filters-date-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-date-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-date-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-date-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-date-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-date-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-date-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-date-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-date-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  date {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  date {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the `%{{host}}` piece replaced with the corresponding value from the event. The second example would also add a hardcoded field.


### `add_tag` [plugins-filters-date-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  date {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  date {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would add the tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).


### `enable_metric` [plugins-filters-date-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.


### `id` [plugins-filters-date-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 date filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  date {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::


### `periodic_flush` [plugins-filters-date-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at a regular interval. Optional.


### `remove_field` [plugins-filters-date-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  date {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  date {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.


### `remove_tag` [plugins-filters-date-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  date {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  date {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@ -1,249 +0,0 @@
---
navigation_title: "de_dot"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-de_dot.html
---

# De_dot filter plugin [plugins-filters-de_dot]

* Plugin version: v1.1.0
* Released on: 2024-05-27
* [Changelog](https://github.com/logstash-plugins/logstash-filter-de_dot/blob/v1.1.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-de_dot-index.md).

## Getting help [_getting_help_132]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-de_dot). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_131]

This filter *appears* to rename fields by replacing `.` characters with a different separator. In reality, it’s a somewhat expensive filter that has to copy the source field contents to a new destination field (whose name no longer contains dots), and then remove the corresponding source field.

It should only be used if no other options are available.

## De_dot Filter Configuration Options [plugins-filters-de_dot-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-de_dot-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`fields`](#plugins-filters-de_dot-fields) | [array](/reference/configuration-file-structure.md#array) | No |
| [`nested`](#plugins-filters-de_dot-nested) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`recursive`](#plugins-filters-de_dot-recursive) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`separator`](#plugins-filters-de_dot-separator) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-de_dot-common-options) for a list of options supported by all filter plugins.


### `fields` [plugins-filters-de_dot-fields]

* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

The `fields` array should contain a list of known fields to act on. If undefined, all top-level fields will be checked. Sub-fields must be manually specified in the array. For example: `["field.suffix","[foo][bar.suffix]"]` will result in `field_suffix` and the nested or sub-field `["foo"]["bar_suffix"]`.

::::{warning}
This is an expensive operation.
::::


### `nested` [plugins-filters-de_dot-nested]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

If `nested` is *true*, then create sub-fields instead of replacing dots with a different separator.


### `recursive` [plugins-filters-de_dot-recursive]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

If `recursive` is *true*, then recursively check sub-fields. It is recommended you only use this when setting specific fields, as this is an expensive operation.


### `separator` [plugins-filters-de_dot-separator]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"_"`

Replace dots with this value.
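
As a minimal sketch (the dotted field name is illustrative), the following would turn a top-level `user.name` field into `user_name` using the default separator:

```ruby
filter {
  de_dot {
    # only touch the fields known to contain dots; checking
    # every field is expensive
    fields => [ "user.name" ]
  }
}
```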

## Common options [plugins-filters-de_dot-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-de_dot-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-de_dot-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-de_dot-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-de_dot-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-de_dot-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-de_dot-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-de_dot-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-de_dot-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  de_dot {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  de_dot {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the `%{{host}}` piece replaced with the corresponding value from the event. The second example would also add a hardcoded field.


### `add_tag` [plugins-filters-de_dot-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  de_dot {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  de_dot {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would add the tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).


### `enable_metric` [plugins-filters-de_dot-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.


### `id` [plugins-filters-de_dot-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 de_dot filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  de_dot {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::


### `periodic_flush` [plugins-filters-de_dot-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at a regular interval. Optional.


### `remove_field` [plugins-filters-de_dot-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  de_dot {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  de_dot {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.


### `remove_tag` [plugins-filters-de_dot-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  de_dot {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  de_dot {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has the field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@ -1,531 +0,0 @@
---
navigation_title: "dissect"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-dissect.html
---

# Dissect filter plugin [plugins-filters-dissect]

* Plugin version: v1.2.5
* Released on: 2022-02-14
* [Changelog](https://github.com/logstash-plugins/logstash-filter-dissect/blob/v1.2.5/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-dissect-index.md).

## Getting help [_getting_help_133]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-dissect). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_132]

The Dissect filter plugin tokenizes incoming strings using defined patterns. It extracts unstructured event data into fields using delimiters. This process is called tokenization.

Unlike a regular split operation, where one delimiter is applied to the whole string, the Dissect operation applies a set of delimiters to a string value.

::::{note}
All keys must be found and extracted for tokenization to be successful. If one or more keys cannot be found, an error occurs and the original event is not modified.
::::

### Dissect or Grok? Or both? [_dissect_or_grok_or_both]

Dissect differs from Grok in that it does not use regular expressions and is faster. Dissect works well when data is reliably repeated. Grok is a better choice when the structure of your text varies from line to line.

You can use both Dissect and Grok for a hybrid use case when a section of the line is reliably repeated, but the entire line is not. The Dissect filter can deconstruct the section of the line that is repeated. The Grok filter can process the remaining field values with more regex predictability.

### Terminology [_terminology]

**dissect pattern** - the set of fields and delimiters describing the textual format. Also known as a dissection. The dissection is described using a set of `%{}` sections: `%{{a}} - %{{b}} - %{{c}}`

**field** - the text from `%{` to `}` inclusive.

**delimiter** - the text between `}` and the next `%{` characters. Any set of characters other than `%{`, `'not }'`, or `}` is a delimiter.

**key** - the text between the `%{` and `}`, exclusive of the `?`, `+`, `&` prefixes and the ordinal suffix.

Examples:

`%{?aaa}` - the key is `aaa`

`%{+bbb/3}` - the key is `bbb`

`%{&ccc}` - the key is `ccc`

::::{note}
Using the `.` (dot) as `key` will generate fields with `.` in the field name. If you want to get nested fields, use the brackets notation such as `%{[fieldname][subfieldname]}`, as in the sketch below.
::::
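
For instance, a minimal sketch of the brackets notation producing a nested field (the source text and field names are illustrative):

```ruby
filter {
  dissect {
    # "alice 200" => { "user" => { "name" => "alice" }, "status" => "200" }
    mapping => {
      "message" => "%{[user][name]} %{status}"
    }
  }
}
```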

### Sample configuration [_sample_configuration_2]

The config might look like this:

```ruby
filter {
  dissect {
    mapping => {
      "message" => "%{ts} %{+ts} %{+ts} %{src} %{} %{prog}[%{pid}]: %{msg}"
    }
  }
}
```

When a string is dissected from left to right, text is captured up to the first delimiter. The captured text is stored in the first field. This is repeated for each field/delimiter pair until the last delimiter is reached. Then **the remaining text is stored in the last field**.

## Notations [_notations]

[Normal field notation](#plugins-filters-dissect-normal)

[Skip field notation](#plugins-filters-dissect-skip)

[Append field notation](#plugins-filters-dissect-append)

[Indirect field notation](#plugins-filters-dissect-indirect)

### Notes and usage guidelines [_notes_and_usage_guidelines]

* For append or indirect fields, the key can refer to a field that already exists in the event before dissection.
* Use a Skip field if you do not want the indirection key/value stored.

Example:

`%{?a}: %{&a}` applied to text `google: 77.98` will build a key/value of `google => 77.98`.

* Append and indirect cannot be combined.

Examples:

`%{+&something}` will add a value to the `&something` key (probably not the intended outcome).

`%{&+something}` will add a value to the `+something` key (again, probably unintended).

### Normal field notation [plugins-filters-dissect-normal]

The found value is added to the Event using the key. A normal field has no prefix or suffix.

Example:

`%{{some_field}}`

### Skip field notation [plugins-filters-dissect-skip]

The found value is stored internally, but is not added to the Event. The key, if supplied, is prefixed with a `?`.

Examples:

`%{}` is an empty skip field.

`%{?foo}` is a named skip field.

### Append field notation [plugins-filters-dissect-append]

If the value is the first field seen, it is stored. Subsequent fields are appended to another value.

The key is prefixed with a `+`. The final value is stored in the Event using the key.

::::{note}
The delimiter found before the field is appended with the value. If no delimiter is found before the field, a single space character is used.
::::

Examples:

`%{+some_field}` is an append field.

`%{+some_field/2}` is an append field with an order modifier.

**Order modifiers**

An order modifier, `/digits`, allows one to reorder the append sequence.

Example:

For text `1 2 3 go`, this `%{+a/2} %{+a/1} %{+a/4} %{+a/3}` will build a key/value of `a => 2 1 go 3`.

**Append fields** without an order modifier will append in declared order.

Example:

For text `1 2 3 go`, this `%{{a}} %{{b}} %{+a}` will build two key/values of `a => 1 3 go, b => 2`.
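
In a pipeline, the order modifier example above looks like this (a minimal sketch; the `message` content is illustrative):

```ruby
filter {
  dissect {
    # "1 2 3 go" => { "a" => "2 1 go 3" }
    mapping => {
      "message" => "%{+a/2} %{+a/1} %{+a/4} %{+a/3}"
    }
  }
}
```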
|
||||
|
||||
|
||||
### Indirect field notation [plugins-filters-dissect-indirect]
|
||||
|
||||
The found value is added to the Event using the found value of another field as the key. The key is prefixed with a `&`.
|
||||
|
||||
Examples:
|
||||
|
||||
`%{&some_field}` is an indirect field where the key is indirectly sourced from the value of `some_field`.
|
||||
|
||||
For text `error: some_error, some_description`, this notation `error: %{?err}, %{&err}` will build a key/value of `some_error => some_description`.
|
||||
|
||||
|
||||
|
||||
## Multiple Consecutive Delimiter Handling [_multiple_consecutive_delimiter_handling]
|
||||
|
||||
::::{important}
|
||||
Multiple found delimiter handling has changed starting with version 1.1.1 of this plugin. Now multiple consecutive delimiters are seen as missing fields by default and not padding. If you are already using Dissect and your source text has fields padded with extra delimiters, you will need to change your config. Please read the section below.
|
||||
::::
|
||||
|
||||
|
||||
### Empty data between delimiters [_empty_data_between_delimiters]
|
||||
|
||||
Given this text as the sample used to create a dissection:
|
||||
|
||||
```ruby
|
||||
John Smith,Big Oaks,Wood Lane,Hambledown,Canterbury,CB34RY
|
||||
```
|
||||
|
||||
The created dissection, with 6 fields, is:
|
||||
|
||||
```ruby
|
||||
%{name},%{addr1},%{addr2},%{addr3},%{city},%{zip}
|
||||
```
|
||||
|
||||
When a line like this is processed:
|
||||
|
||||
```ruby
|
||||
Jane Doe,4321 Fifth Avenue,,,New York,87432
|
||||
```
|
||||
|
||||
Dissect will create an event with empty fields for `addr2 and addr3` like so:
|
||||
|
||||
```ruby
|
||||
{
|
||||
"name": "Jane Doe",
|
||||
"addr1": "4321 Fifth Avenue",
|
||||
"addr2": "",
|
||||
"addr3": "",
|
||||
"city": "New York"
|
||||
"zip": "87432"
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
### Delimiters used as padding to visually align fields [_delimiters_used_as_padding_to_visually_align_fields]
|
||||
|
||||
**Padding to the right hand side**
|
||||
|
||||
Given these texts as the samples used to create a dissection:
|
||||
|
||||
```ruby
|
||||
00000043 ViewReceive machine-321
|
||||
f3000a3b Calc machine-123
|
||||
```
|
||||
|
||||
The dissection, with 3 fields, is:
|
||||
|
||||
```ruby
|
||||
%{id} %{function->} %{server}
|
||||
```
|
||||
|
||||
Note, above, the second field has a `->` suffix which tells Dissect to ignore padding to its right.<br> Dissect will create these events:
|
||||
|
||||
```ruby
|
||||
{
|
||||
"id": "00000043",
|
||||
"function": "ViewReceive",
|
||||
"server": "machine-123"
|
||||
}
|
||||
{
|
||||
"id": "f3000a3b",
|
||||
"function": "Calc",
|
||||
"server": "machine-321"
|
||||
}
|
||||
```
|
||||
|
||||
::::{important}
|
||||
Always add the `->` suffix to the field on the left of the padding.
|
||||
::::
|
||||
|
||||
|
||||
**Padding to the left hand side (to the human eye)**
|
||||
|
||||
Given these texts as the samples used to create a dissection:
|
||||
|
||||
```ruby
|
||||
00000043 ViewReceive machine-321
|
||||
f3000a3b Calc machine-123
|
||||
```
|
||||
|
||||
The dissection, with 3 fields, is now:
|
||||
|
||||
```ruby
|
||||
%{id->} %{function} %{server}
|
||||
```
|
||||
|
||||
Here the `->` suffix moves to the `id` field because Dissect sees the padding as being to the right of the `id` field.<br>
|
||||
|
||||
|
||||
|
||||
## Conditional processing [_conditional_processing]
|
||||
|
||||
You probably want to use this filter inside an `if` block. This ensures that the event contains a field value with a suitable structure for the dissection.
|
||||
|
||||
Example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
if [type] == "syslog" or "syslog" in [tags] {
|
||||
dissect {
|
||||
mapping => {
|
||||
"message" => "%{ts} %{+ts} %{+ts} %{src} %{} %{prog}[%{pid}]: %{msg}"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## Dissect Filter Configuration Options [plugins-filters-dissect-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-dissect-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`convert_datatype`](#plugins-filters-dissect-convert_datatype) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`mapping`](#plugins-filters-dissect-mapping) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`tag_on_failure`](#plugins-filters-dissect-tag_on_failure) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-dissect-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `convert_datatype` [plugins-filters-dissect-convert_datatype]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
With this setting `int` and `float` datatype conversions can be specified. These will be done after all `mapping` dissections have taken place. Feel free to use this setting on its own without a `mapping` section.
|
||||
|
||||
**Example**
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
dissect {
|
||||
convert_datatype => {
|
||||
"cpu" => "float"
|
||||
"code" => "int"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
### `mapping` [plugins-filters-dissect-mapping]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
A hash of dissections of `field => value`<br>
|
||||
|
||||
::::{important}
|
||||
Don’t use an escaped newline `\n` in the value. It will be interpreted as two characters `\` + `n`. Instead use actual line breaks in the config. Also use single quotes to define the value if it contains double quotes.
|
||||
::::
|
||||
|
||||
|
||||
A later dissection can be done on values from a previous dissection or they can be independent.
|
||||
|
||||
**Example**
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
dissect {
|
||||
mapping => {
|
||||
# using an actual line break
|
||||
"message" => '"%{field1}" "%{field2}"
|
||||
"%{description}"'
|
||||
"description" => "%{field3} %{field4} %{field5}"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This is useful if you want to keep the field `description` but also dissect it further.
|
||||
|
||||
|
||||
### `tag_on_failure` [plugins-filters-dissect-tag_on_failure]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `["_dissectfailure"]`
|
||||
|
||||
Append values to the `tags` field when dissection fails.
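
For example, you can set a custom tag and use it to route events that Dissect could not parse to a fallback parser. This is a minimal sketch; the `needs_grok` tag name and the patterns are illustrative, not part of the plugin’s defaults:

```ruby
filter {
  dissect {
    mapping => { "message" => "%{ts} %{level} %{msg}" }
    tag_on_failure => ["_dissectfailure", "needs_grok"]
  }
  if "needs_grok" in [tags] {
    # fallback parsing for lines that did not match the dissect mapping
    grok {
      match => { "message" => "%{GREEDYDATA:msg}" }
    }
  }
}
```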
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-dissect-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-dissect-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-dissect-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-dissect-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-dissect-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-dissect-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-dissect-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-dissect-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-dissect-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
dissect {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
dissect {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add the field `foo_hello` with the value above, with the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-dissect-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
dissect {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
dissect {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-dissect-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-dissect-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 dissect filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
dissect {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-dissect-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at a regular interval. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-dissect-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
dissect {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
dissect {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-dissect-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
dissect {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
dissect {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
|
@ -1,347 +0,0 @@
|
|||
---
|
||||
navigation_title: "dns"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-dns.html
|
||||
---
|
||||
|
||||
# Dns filter plugin [plugins-filters-dns]
|
||||
|
||||
|
||||
* Plugin version: v3.2.0
|
||||
* Released on: 2023-01-26
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-dns/blob/v3.2.0/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-dns-index.md).
|
||||
|
||||
## Getting help [_getting_help_134]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-dns). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_133]
|
||||
|
||||
The DNS filter performs a forward lookup (an A or CNAME record) on the fields listed in the `resolve` array, or a reverse lookup (a PTR record) on the fields listed in the `reverse` array.
|
||||
|
||||
The config should look like this:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
dns {
|
||||
reverse => [ "source_host", "field_with_address" ]
|
||||
resolve => [ "field_with_fqdn" ]
|
||||
action => "replace"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This filter, like all filters, only processes 1 event at a time, so the use of this plugin can significantly slow down your pipeline’s throughput if you have a high latency network. By way of example, if each DNS lookup takes 2 milliseconds, the maximum throughput you can achieve with a single filter worker is 500 events per second (1000 milliseconds / 2 milliseconds).
|
||||
|
||||
|
||||
## Dns Filter Configuration Options [plugins-filters-dns-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-dns-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`action`](#plugins-filters-dns-action) | [string](/reference/configuration-file-structure.md#string), one of `["append", "replace"]` | No |
|
||||
| [`failed_cache_size`](#plugins-filters-dns-failed_cache_size) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`failed_cache_ttl`](#plugins-filters-dns-failed_cache_ttl) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`hit_cache_size`](#plugins-filters-dns-hit_cache_size) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`hit_cache_ttl`](#plugins-filters-dns-hit_cache_ttl) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`hostsfile`](#plugins-filters-dns-hostsfile) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`max_retries`](#plugins-filters-dns-max_retries) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`nameserver`](#plugins-filters-dns-nameserver) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`resolve`](#plugins-filters-dns-resolve) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`reverse`](#plugins-filters-dns-reverse) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`tag_on_timeout`](#plugins-filters-dns-tag_on_timeout) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`timeout`](#plugins-filters-dns-timeout) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-dns-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `action` [plugins-filters-dns-action]
|
||||
|
||||
* Value can be any of: `append`, `replace`
|
||||
* Default value is `"append"`
|
||||
|
||||
Determine what action to do: append or replace the values in the fields specified under `reverse` and `resolve`.
|
||||
|
||||
|
||||
### `failed_cache_size` [plugins-filters-dns-failed_cache_size]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `0` (cache disabled)
|
||||
|
||||
Cache size for failed requests.
|
||||
|
||||
|
||||
### `failed_cache_ttl` [plugins-filters-dns-failed_cache_ttl]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `5`
|
||||
|
||||
How long to cache failed requests, in seconds.
|
||||
|
||||
|
||||
### `hit_cache_size` [plugins-filters-dns-hit_cache_size]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `0` (cache disabled)
|
||||
|
||||
Size of the cache for successful requests.
|
||||
|
||||
|
||||
### `hit_cache_ttl` [plugins-filters-dns-hit_cache_ttl]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `60`
|
||||
|
||||
How long to cache successful requests, in seconds.
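
When the same names are resolved repeatedly, enabling both caches can cut down on round trips to the nameserver. A minimal sketch, with illustrative sizes and TTLs rather than recommended values:

```ruby
filter {
  dns {
    resolve => [ "field_with_fqdn" ]
    action => "replace"
    hit_cache_size => 8000     # cache up to 8000 successful lookups
    hit_cache_ttl => 300       # for 5 minutes
    failed_cache_size => 1000  # remember failures too,
    failed_cache_ttl => 10     # but only briefly
  }
}
```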
|
||||
|
||||
|
||||
### `hostsfile` [plugins-filters-dns-hostsfile]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Use custom hosts file(s). For example: `["/var/db/my_custom_hosts"]`
|
||||
|
||||
|
||||
### `max_retries` [plugins-filters-dns-max_retries]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `2`
|
||||
|
||||
Number of times to retry a failed resolve or reverse lookup.
|
||||
|
||||
|
||||
### `nameserver` [plugins-filters-dns-nameserver]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash), and is composed of:
|
||||
|
||||
* a required `address` key, whose value is either a [string](/reference/configuration-file-structure.md#string) or an [array](/reference/configuration-file-structure.md#array), representing one or more nameserver ip addresses
|
||||
* an optional `search` key, whose value is either a [string](/reference/configuration-file-structure.md#string) or an [array](/reference/configuration-file-structure.md#array), representing between one and six search domains (e.g., with search domain `com`, a query for `example` will match DNS entries for `example.com`)
|
||||
* an optional `ndots` key, used in conjunction with `search`, whose value is a [number](/reference/configuration-file-structure.md#number), representing the minimum number of dots in a domain name being resolved that will *prevent* the search domains from being used (default `1`; this option is rarely needed)
|
||||
|
||||
* For backward-compatibility, values of [string](/reference/configuration-file-structure.md#string) and [array](/reference/configuration-file-structure.md#array) are also accepted, representing one or more nameserver ip addresses *without* search domains.
|
||||
* There is no default value for this setting.
|
||||
|
||||
Use custom nameserver(s). For example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
dns {
|
||||
nameserver => {
|
||||
address => ["8.8.8.8", "8.8.4.4"]
|
||||
search => ["internal.net"]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If `nameserver` is not specified then `/etc/resolv.conf` will be read to configure the resolver using the `nameserver`, `domain`, `search` and `ndots` directives in `/etc/resolv.conf`.
|
||||
|
||||
|
||||
### `resolve` [plugins-filters-dns-resolve]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Forward resolve one or more fields.
|
||||
|
||||
|
||||
### `reverse` [plugins-filters-dns-reverse]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Reverse resolve one or more fields.
|
||||
|
||||
|
||||
### `timeout` [plugins-filters-dns-timeout]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `0.5`
|
||||
|
||||
The maximum time, in seconds, to wait for a lookup; `resolv` calls are wrapped in a timeout instance.
|
||||
|
||||
|
||||
### `tag_on_timeout` [plugins-filters-dns-tag_on_timeout]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Defaults to `["_dnstimeout"]`.
|
||||
|
||||
Tag(s) to add when a DNS lookup times out.
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-dns-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-dns-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-dns-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-dns-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-dns-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-dns-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-dns-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-dns-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-dns-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
dns {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
dns {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add the field `foo_hello` with the value above, with the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-dns-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
dns {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
dns {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-dns-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-dns-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 dns filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
dns {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-dns-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at a regular interval. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-dns-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
dns {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
dns {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-dns-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
dns {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
dns {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,242 +0,0 @@
|
|||
---
|
||||
navigation_title: "drop"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-drop.html
|
||||
---
|
||||
|
||||
# Drop filter plugin [plugins-filters-drop]
|
||||
|
||||
|
||||
* Plugin version: v3.0.5
|
||||
* Released on: 2017-11-07
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-drop/blob/v3.0.5/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-drop-index.md).
|
||||
|
||||
## Getting help [_getting_help_135]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-drop). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_134]
|
||||
|
||||
Drop filter.
|
||||
|
||||
Drops everything that gets to this filter.
|
||||
|
||||
This is best used in combination with conditionals, for example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
if [loglevel] == "debug" {
|
||||
drop { }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The above only passes events to the drop filter if the `loglevel` field is `debug`, causing all matching events to be dropped.
|
||||
|
||||
|
||||
## Drop Filter Configuration Options [plugins-filters-drop-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-drop-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`percentage`](#plugins-filters-drop-percentage) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-drop-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `percentage` [plugins-filters-drop-percentage]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `100`
|
||||
|
||||
Drop a pre-configured percentage of events.
|
||||
|
||||
This is useful if you only need a sample of the events rather than all of them.
|
||||
|
||||
For example, to drop around 40% of the events whose `loglevel` field has the value "debug":
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
if [loglevel] == "debug" {
|
||||
drop {
|
||||
percentage => 40
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## Common options [plugins-filters-drop-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-drop-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-drop-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-drop-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-drop-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-drop-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-drop-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-drop-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-drop-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
drop {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
drop {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add the field `foo_hello` with the value above, with the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-drop-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
drop {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
drop {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-drop-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-drop-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 drop filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
drop {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-drop-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at a regular interval. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-drop-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
drop {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
drop {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-drop-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
drop {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
drop {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,326 +0,0 @@
|
|||
---
|
||||
navigation_title: "elapsed"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-elapsed.html
|
||||
---
|
||||
|
||||
# Elapsed filter plugin [plugins-filters-elapsed]
|
||||
|
||||
|
||||
* Plugin version: v4.1.0
|
||||
* Released on: 2018-07-31
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-elapsed/blob/v4.1.0/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-elapsed-index.md).
|
||||
|
||||
## Installation [_installation_58]
|
||||
|
||||
For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-elapsed`. See [Working with plugins](/reference/working-with-plugins.md) for more details.
|
||||
|
||||
|
||||
## Getting help [_getting_help_136]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-elapsed). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_135]
|
||||
|
||||
The elapsed filter tracks a pair of start/end events and uses their timestamps to calculate the elapsed time between them.
|
||||
|
||||
The filter has been developed to track the execution time of processes and other long tasks.
|
||||
|
||||
The configuration looks like this:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
elapsed {
|
||||
start_tag => "start event tag"
|
||||
end_tag => "end event tag"
|
||||
unique_id_field => "id field name"
|
||||
timeout => seconds
|
||||
new_event_on_match => true/false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The events managed by this filter must have some particular properties. The event describing the start of the task (the "start event") must contain a tag equal to `start_tag`. On the other side, the event describing the end of the task (the "end event") must contain a tag equal to `end_tag`. Both kinds of events must also contain an ID field that uniquely identifies the task; the name of this field is set in `unique_id_field`.
|
||||
|
||||
You can use a Grok filter to prepare the events for the elapsed filter. An example configuration:
|
||||
|
||||
```ruby
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601} START id: (?<task_id>.*)" }
    add_tag => [ "taskStarted" ]
  }

  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601} END id: (?<task_id>.*)" }
    add_tag => [ "taskTerminated" ]
  }

  elapsed {
    start_tag => "taskStarted"
    end_tag => "taskTerminated"
    unique_id_field => "task_id"
  }
}
```
|
||||
The elapsed filter collects all the "start events". If two or more "start events" have the same ID, only the first one is recorded by default; the others are discarded (see [`keep_start_event`](#plugins-filters-elapsed-keep_start_event)).
|
||||
|
||||
When an "end event" matching a previously collected "start event" is received, there is a match. The configuration property `new_event_on_match` tells where to insert the elapsed information: they can be added to the "end event" or a new "match event" can be created. Both events store the following information:
|
||||
|
||||
* the tags `elapsed` and `elapsed_match`
|
||||
* the field `elapsed_time` with the difference, in seconds, between the two events timestamps
|
||||
* an ID field with the task ID
|
||||
* the field `elapsed_timestamp_start` with the timestamp of the start event
|
||||
|
||||
If the "end event" does not arrive before "timeout" seconds, the "start event" is discarded and an "expired event" is generated. This event contains:
|
||||
|
||||
* the tags `elapsed` and `elapsed_expired_error`
|
||||
* a field called `elapsed_time` with the age, in seconds, of the "start event"
|
||||
* an ID field with the task ID
|
||||
* the field `elapsed_timestamp_start` with the timestamp of the "start event"
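
Downstream stages can branch on these tags to treat matches and expirations differently. A minimal sketch; the `[metrics][duration]` field and `task_timed_out` tag names are illustrative:

```ruby
filter {
  if "elapsed_match" in [tags] {
    # copy the computed duration into a field of your choosing
    mutate { add_field => { "[metrics][duration]" => "%{elapsed_time}" } }
  } else if "elapsed_expired_error" in [tags] {
    # the start event timed out while waiting for its end event
    mutate { add_tag => [ "task_timed_out" ] }
  }
}
```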
|
||||
|
||||
|
||||
## Elapsed Filter Configuration Options [plugins-filters-elapsed-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-elapsed-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`end_tag`](#plugins-filters-elapsed-end_tag) | [string](/reference/configuration-file-structure.md#string) | Yes |
|
||||
| [`new_event_on_match`](#plugins-filters-elapsed-new_event_on_match) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`start_tag`](#plugins-filters-elapsed-start_tag) | [string](/reference/configuration-file-structure.md#string) | Yes |
|
||||
| [`timeout`](#plugins-filters-elapsed-timeout) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`unique_id_field`](#plugins-filters-elapsed-unique_id_field) | [string](/reference/configuration-file-structure.md#string) | Yes |
|
||||
| [`keep_start_event`](#plugins-filters-elapsed-keep_start_event) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-elapsed-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `end_tag` [plugins-filters-elapsed-end_tag]
|
||||
|
||||
* This is a required setting.
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
The name of the tag identifying the "end event"
|
||||
|
||||
|
||||
### `new_event_on_match` [plugins-filters-elapsed-new_event_on_match]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
This property manages what to do when an "end event" matches a "start event". If it’s set to `false` (the default), the elapsed information is added to the "end event"; if it’s set to `true`, a new "match event" is created.
|
||||
|
||||
|
||||
### `start_tag` [plugins-filters-elapsed-start_tag]
|
||||
|
||||
* This is a required setting.
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
The name of the tag identifying the "start event"
|
||||
|
||||
|
||||
### `timeout` [plugins-filters-elapsed-timeout]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `1800`
|
||||
|
||||
The number of seconds to wait for an "end event" before the corresponding "start event" is discarded and an "expired event" is generated. The default value is 30 minutes (1800 seconds).
|
||||
|
||||
|
||||
### `unique_id_field` [plugins-filters-elapsed-unique_id_field]
|
||||
|
||||
* This is a required setting.
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
The name of the field containing the task ID. This value must uniquely identify the task in the system, otherwise it is impossible to match the pair of events.
|
||||
|
||||
|
||||
### `keep_start_event` [plugins-filters-elapsed-keep_start_event]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `first`
|
||||
|
||||
This property manages what to do when several "start events" are received for the same ID before the corresponding "end event". Two values are supported: `first` or `last`. If it’s set to `first` (the default), the first matching "start event" is used; if it’s set to `last`, the last one is used.
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-elapsed-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-elapsed-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-elapsed-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-elapsed-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-elapsed-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-elapsed-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-elapsed-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-elapsed-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-elapsed-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
elapsed {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
elapsed {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add the field `foo_hello` with the value above, with the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-elapsed-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
elapsed {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
elapsed {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-elapsed-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-elapsed-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 elapsed filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
elapsed {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-elapsed-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at a regular interval. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-elapsed-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
elapsed {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
elapsed {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-elapsed-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
elapsed {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
elapsed {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,674 +0,0 @@
|
|||
---
|
||||
navigation_title: "elastic_integration"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-elastic_integration.html
|
||||
---
|
||||
|
||||
# Elastic Integration filter plugin [plugins-filters-elastic_integration]
|
||||
|
||||
|
||||
* Plugin version: v8.17.0
|
||||
* Released on: 2025-01-08
|
||||
* [Changelog](https://github.com/elastic/logstash-filter-elastic_integration/blob/v8.17.0/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-elastic_integration-index.md).
|
||||
|
||||
## Getting help [_getting_help_137]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-elastic_integration). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
::::{admonition} Elastic Enterprise License
|
||||
Use of this plugin requires an active Elastic Enterprise [subscription](https://www.elastic.co/subscriptions).
|
||||
|
||||
::::
|
||||
|
||||
|
||||
|
||||
## Description [_description_136]
|
||||
|
||||
Use this filter to process Elastic integrations powered by {{es}} Ingest Node in {{ls}}.
|
||||
|
||||
::::{admonition} Extending Elastic integrations with {{ls}}
|
||||
This plugin can help you take advantage of the extensive, built-in capabilities of [Elastic {{integrations}}](integration-docs://reference/index.md)—such as managing data collection, transformation, and visualization—and then use {{ls}} for additional data processing and output options. For more info about extending Elastic integrations with {{ls}}, check out [Using {{ls}} with Elastic Integrations](/reference/using-logstash-with-elastic-integrations.md).
|
||||
|
||||
::::
|
||||
|
||||
|
||||
When you configure this filter to point to an {{es}} cluster, it detects which ingest pipeline (if any) should be executed for each event, using an explicitly-defined [`pipeline_name`](#plugins-filters-elastic_integration-pipeline_name) or auto-detecting the event’s data-stream and its default pipeline.
|
||||
|
||||
It then loads that pipeline’s definition from {{es}} and runs that pipeline inside Logstash without transmitting the event to {{es}}. Events that are successfully handled by their ingest pipeline will have `[@metadata][target_ingest_pipeline]` set to `_none` so that any downstream {{es}} output in the Logstash pipeline will avoid running the event’s default pipeline *again* in {{es}}.
|
||||
|
||||
::::{note}
|
||||
Some multi-pipeline configurations such as logstash-to-logstash over http(s) do not maintain the state of `[@metadata]` fields. In these setups, you may need to explicitly configure your downstream pipeline’s {{es}} output with `pipeline => "_none"` to avoid re-running the default pipeline.
|
||||
::::
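
For example, the downstream pipeline’s {{es}} output could be configured along these lines (the host URL is a placeholder):

```ruby
output {
  elasticsearch {
    hosts => ["https://es.example.com:9200"]
    data_stream => true
    pipeline => "_none" # the event already ran its ingest pipeline in Logstash
  }
}
```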
|
||||
|
||||
|
||||
Events that *fail* ingest pipeline processing will be tagged with `_ingest_pipeline_failure`, and their `[@metadata][_ingest_pipeline_failure]` will be populated with details as a key/value map.
|
||||
|
||||
### Requirements and upgrade guidance [plugins-filters-elastic_integration-requirements]
|
||||
|
||||
* This plugin requires Java 17 minimum with {{ls}} `8.x` versions and Java 21 minimum with {{ls}} `9.x` versions.
|
||||
* When you upgrade the {{stack}}, upgrade {{ls}} (or this plugin specifically) *before* you upgrade {{kib}}. (Note that this requirement is a departure from the typical {{stack}} [installation order](docs-content://get-started/installing-elastic-stack.md#install-order-elastic-stack).)
|
||||
|
||||
The {{es}}-{{ls}}-{{kib}} installation order ensures the best experience with {{agent}}-managed pipelines, and embeds functionality from a version of {{es}} Ingest Node that is compatible with the plugin version (`major`.`minor`).
|
||||
|
||||
|
||||
|
||||
### Using `filter-elastic_integration` with `output-elasticsearch` [plugins-filters-elastic_integration-es-tips]
|
||||
|
||||
Elastic {{integrations}} are designed to work with [data streams](/reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data-streams) and [ECS-compatible](/reference/plugins-outputs-elasticsearch.md#_compatibility_with_the_elastic_common_schema_ecs) output. Be sure that these features are enabled in the [`output-elasticsearch`](/reference/plugins-outputs-elasticsearch.md) plugin.
|
||||
|
||||
* Set [`data_stream`](/reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) to `true`.<br> (Check out [Data streams](/reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data-streams) for additional data streams settings.)
|
||||
* Set [`ecs_compatibility`](/reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-ecs_compatibility) to `v1` or `v8`.
|
||||
|
||||
Check out the [`output-elasticsearch` plugin](/reference/plugins-outputs-elasticsearch.md) docs for additional settings.
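
Putting these together, a minimal output section might look like this sketch (the Cloud ID and auth values are placeholders, mirroring the filter example below):

```ruby
output {
  elasticsearch {
    cloud_id => "YOUR_CLOUD_ID_HERE"
    cloud_auth => "YOUR_CLOUD_AUTH_HERE"
    data_stream => true
    ecs_compatibility => "v8"
  }
}
```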
|
||||
|
||||
|
||||
|
||||
## Minimum configuration [plugins-filters-elastic_integration-minimum_configuration]
|
||||
|
||||
You will need to configure this plugin to connect to {{es}}, and may also need to provide local GeoIp databases.
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
elastic_integration {
|
||||
cloud_id => "YOUR_CLOUD_ID_HERE"
|
||||
cloud_auth => "YOUR_CLOUD_AUTH_HERE"
|
||||
geoip_database_directory => "/etc/your/geoip-databases"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Read on for a guide to configuration, or jump to the [complete list of configuration options](#plugins-filters-elastic_integration-options).
|
||||
|
||||
|
||||
## Connecting to {{es}} [plugins-filters-elastic_integration-connecting_to_elasticsearch]
|
||||
|
||||
This plugin communicates with {{es}} to identify which ingest pipeline should be run for a given event, and to retrieve the ingest pipeline definitions themselves. You must configure this plugin to point to {{es}} using exactly one of:
|
||||
|
||||
* A Cloud Id (see [`cloud_id`](#plugins-filters-elastic_integration-cloud_id))
|
||||
* A list of one or more host URLs (see [`hosts`](#plugins-filters-elastic_integration-hosts))
|
||||
|
||||
Communication will be made securely over SSL unless you explicitly configure this plugin otherwise.
|
||||
|
||||
You may need to configure how this plugin establishes trust of the server that responds, and will likely need to configure how this plugin presents its own identity or credentials.
|
||||
|
||||
### SSL Trust Configuration [_ssl_trust_configuration]
|
||||
|
||||
When communicating over SSL, this plugin fully validates the proof-of-identity presented by {{es}} using the system trust store. You can provide an *alternate* source of trust with one of:
|
||||
|
||||
* A PEM-formatted list of trusted certificate authorities (see [`ssl_certificate_authorities`](#plugins-filters-elastic_integration-ssl_certificate_authorities))
|
||||
* A JKS- or PKCS12-formatted Keystore containing trusted certificates (see [`ssl_truststore_path`](#plugins-filters-elastic_integration-ssl_truststore_path))
|
||||
|
||||
You can also configure which aspects of the proof-of-identity are verified (see [`ssl_verification_mode`](#plugins-filters-elastic_integration-ssl_verification_mode)).
|
||||
|
||||
|
||||
### SSL Identity Configuration [_ssl_identity_configuration]
|
||||
|
||||
When communicating over SSL, you can also configure this plugin to present a certificate-based proof-of-identity to the {{es}} cluster it connects to using one of:
|
||||
|
||||
* A PKCS8 Certificate/Key pair (see [`ssl_certificate`](#plugins-filters-elastic_integration-ssl_certificate))
|
||||
* A JKS- or PKCS12-formatted Keystore (see [`ssl_keystore_path`](#plugins-filters-elastic_integration-ssl_keystore_path))
|
||||
|
||||
|
||||
### Request Identity [_request_identity]
|
||||
|
||||
You can configure this plugin to present authentication credentials to {{es}} in one of several ways:
|
||||
|
||||
* ApiKey: (see [`api_key`](#plugins-filters-elastic_integration-api_key))
|
||||
* Cloud Auth: (see [`cloud_auth`](#plugins-filters-elastic_integration-cloud_auth))
|
||||
* HTTP Basic Auth: (see [`username`](#plugins-filters-elastic_integration-username) and [`password`](#plugins-filters-elastic_integration-password))
|
||||
|
||||
::::{note}
|
||||
Your request credentials are only as secure as the connection they are being passed over. They provide neither privacy nor secrecy on their own, and can easily be recovered by an adversary when SSL is disabled.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
|
||||
## Minimum required privileges [plugins-filters-elastic_integration-minimum_required_privileges]
|
||||
|
||||
This plugin communicates with Elasticsearch to resolve events into pipeline definitions, and needs to be configured with credentials that have appropriate privileges to read from the relevant APIs. At startup, this plugin confirms that the current user has sufficient privileges, including:
|
||||
|
||||
| Privilege name | Description |
|
||||
| --- | --- |
|
||||
| `monitor` | A read-only privilege for cluster operations, such as cluster health or state. The plugin requires it to check the {{es}} license. |
|
||||
| `read_pipeline` | Read-only `get` and `simulate` access to ingest pipelines. Required for the plugin to read {{es}} ingest pipeline definitions. |
|
||||
| `manage_index_templates` | The privilege for all index template operations. Required for the plugin to resolve the default pipeline from an event’s data stream name. |
|
||||
|
||||
::::{note}
|
||||
This plugin cannot determine if an anonymous user has the required privileges when it connects to an {{es}} cluster that has security features disabled or when the user does not provide credentials. The plugin starts in an unsafe mode with a runtime error indicating that API permissions are insufficient, and prevents events from being processed by the ingest pipeline.
|
||||
|
||||
To avoid these issues, set up user authentication and ensure that security in {{es}} is enabled (default).
|
||||
|
||||
::::
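
One way to provision credentials with exactly these privileges is to create an {{es}} API key, for example with a request to the `POST /_security/api_key` endpoint. This is a sketch of the request body; the key and role names are illustrative:

```json
{
  "name": "logstash-elastic-integration",
  "role_descriptors": {
    "logstash_integration": {
      "cluster": ["monitor", "read_pipeline", "manage_index_templates"]
    }
  }
}
```

The resulting key can then be supplied through the [`api_key`](#plugins-filters-elastic_integration-api_key) option.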
|
||||
|
||||
|
||||
|
||||
## Supported Ingest Processors [plugins-filters-elastic_integration-supported_ingest_processors]

This filter can run {{es}} Ingest Node pipelines that are *wholly* composed of the supported subset of processors. It has access to the Painless and Mustache scripting engines where applicable:

| Source | Processor | Caveats |
| --- | --- | --- |
| Ingest Common | `append` | *none* |
| | `bytes` | *none* |
| | `communityid` | *none* |
| | `convert` | *none* |
| | `csv` | *none* |
| | `date` | *none* |
| | `dateindexname` | *none* |
| | `dissect` | *none* |
| | `dotexpander` | *none* |
| | `drop` | *none* |
| | `fail` | *none* |
| | `fingerprint` | *none* |
| | `foreach` | *none* |
| | `grok` | *none* |
| | `gsub` | *none* |
| | `htmlstrip` | *none* |
| | `join` | *none* |
| | `json` | *none* |
| | `keyvalue` | *none* |
| | `lowercase` | *none* |
| | `networkdirection` | *none* |
| | `pipeline` | resolved pipeline *must* be wholly composed of supported processors |
| | `registereddomain` | *none* |
| | `remove` | *none* |
| | `rename` | *none* |
| | `reroute` | *none* |
| | `script` | `lang` must be `painless` (default) |
| | `set` | *none* |
| | `sort` | *none* |
| | `split` | *none* |
| | `trim` | *none* |
| | `uppercase` | *none* |
| | `uri_parts` | *none* |
| | `urldecode` | *none* |
| | `user_agent` | side-loading a custom regex file is not supported; the processor will use the default user agent definitions as specified in the [Elasticsearch processor definition](elasticsearch://reference/ingestion-tools/enrich-processor/user-agent-processor.md) |
| Redact | `redact` | *none* |
| GeoIp | `geoip` | requires MaxMind GeoIP2 databases, which may be provided by Logstash’s Geoip Database Management *OR* configured using [`geoip_database_directory`](#plugins-filters-elastic_integration-geoip_database_directory) |

### Field Mappings [plugins-filters-elastic_integration-field_mappings]

During execution, the ingest pipeline works with a temporary, mutable *view* of the Logstash event called an ingest document. This view contains all of the event’s fields as structured, with minimal type conversions.

It also contains additional metadata fields as required by ingest pipeline processors:

* `_version`: a `long`-value integer equivalent to the event’s `@version`, or a sensible default value of `1`.
* `_ingest.timestamp`: a `ZonedDateTime` equivalent to the event’s `@timestamp` field

After execution completes, the event is sanitized to ensure that Logstash-reserved fields have the expected shape, providing sensible defaults for any missing required fields. When an ingest pipeline has set a reserved field to a value that cannot be coerced, the value is made available in an alternate location on the event, as described below.

| {{ls}} field | type | value |
| --- | --- | --- |
| `@timestamp` | `Timestamp` | First coercible value of the ingest document’s `@timestamp`, `event.created`, `_ingest.timestamp`, or `_now` fields; or the current timestamp. When the ingest document has a value for `@timestamp` that cannot be coerced, it will be available in the event’s `_@timestamp` field. |
| `@version` | String-encoded integer | First coercible value of the ingest document’s `@version` or `_version` fields; or the default value of `1`. When the ingest document has a value for `@version` that cannot be coerced, it will be available in the event’s `_@version` field. |
| `@metadata` | key/value map | The ingest document’s `@metadata`; or an empty map. When the ingest document has a value for `@metadata` that cannot be coerced, it will be available in the event’s `_@metadata` field. |
| `tags` | a String or a list of Strings | The ingest document’s `tags`. When the ingest document has a value for `tags` that cannot be coerced, it will be available in the event’s `_tags` field. |

Additionally, these {{es}} IngestDocument metadata fields are made available on the resulting event *if-and-only-if* they were set during pipeline execution:

| {{es}} document metadata | {{ls}} field |
| --- | --- |
| `_id` | `[@metadata][_ingest_document][id]` |
| `_index` | `[@metadata][_ingest_document][index]` |
| `_routing` | `[@metadata][_ingest_document][routing]` |
| `_version` | `[@metadata][_ingest_document][version]` |
| `_version_type` | `[@metadata][_ingest_document][version_type]` |
| `_ingest.timestamp` | `[@metadata][_ingest_document][timestamp]` |

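For example, a downstream output could honor an index name chosen by the ingest pipeline. A sketch, assuming your pipeline actually sets `_index` on every event it handles (the host is a placeholder):

```json
output {
  elasticsearch {
    hosts => ["https://es.example.com:9200"]
    index => "%{[@metadata][_ingest_document][index]}"
  }
}
```
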
## Resolving Pipeline Definitions [plugins-filters-elastic_integration-resolving]

This plugin uses {{es}} to resolve pipeline names into their pipeline definitions. When configured *without* an explicit [`pipeline_name`](#plugins-filters-elastic_integration-pipeline_name), or when a pipeline uses the Reroute Processor, it also uses {{es}} to establish mappings of data stream names to their respective default pipeline names.

It uses hit/miss caches to avoid querying Elasticsearch for every single event, and it works to refresh these cached mappings *before* they expire. The result is that when {{es}} is responsive, this plugin picks up changes quickly without impacting its own performance, and it can survive periods of {{es}} unavailability without interruption by continuing to use potentially-stale mappings or definitions.

To achieve this, mappings are cached for a maximum of 24 hours, and cached values are reloaded every minute, with the following effects:

* when a reloaded mapping is non-empty and is the *same* as its already-cached value, its time-to-live is reset to ensure that subsequent events can continue using the confirmed-unchanged value
* when a reloaded mapping is non-empty and is *different* from its previously-cached value, the entry is *updated* so that subsequent events will use the new value
* when a reloaded mapping is newly *empty*, the previous non-empty mapping is *replaced* with a new empty entry so that subsequent events will use the empty value
* when the reload of a mapping *fails*, this plugin emits a log warning, but the existing cache entry is unchanged and moves closer to its expiry

## Elastic Integration Filter Configuration Options [plugins-filters-elastic_integration-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-elastic_integration-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`api_key`](#plugins-filters-elastic_integration-api_key) | [password](/reference/configuration-file-structure.md#password) | No |
| [`cloud_auth`](#plugins-filters-elastic_integration-cloud_auth) | [password](/reference/configuration-file-structure.md#password) | No |
| [`cloud_id`](#plugins-filters-elastic_integration-cloud_id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`geoip_database_directory`](#plugins-filters-elastic_integration-geoip_database_directory) | [path](/reference/configuration-file-structure.md#path) | No |
| [`hosts`](#plugins-filters-elastic_integration-hosts) | [array](/reference/configuration-file-structure.md#array) | No |
| [`password`](#plugins-filters-elastic_integration-password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`pipeline_name`](#plugins-filters-elastic_integration-pipeline_name) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ssl_certificate`](#plugins-filters-elastic_integration-ssl_certificate) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_certificate_authorities`](#plugins-filters-elastic_integration-ssl_certificate_authorities) | [array](/reference/configuration-file-structure.md#array) | No |
| [`ssl_enabled`](#plugins-filters-elastic_integration-ssl_enabled) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`ssl_key`](#plugins-filters-elastic_integration-ssl_key) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_keystore_password`](#plugins-filters-elastic_integration-ssl_keystore_password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`ssl_keystore_path`](#plugins-filters-elastic_integration-ssl_keystore_path) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_key_passphrase`](#plugins-filters-elastic_integration-ssl_key_passphrase) | [password](/reference/configuration-file-structure.md#password) | No |
| [`ssl_truststore_path`](#plugins-filters-elastic_integration-ssl_truststore_path) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_truststore_password`](#plugins-filters-elastic_integration-ssl_truststore_password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`ssl_verification_mode`](#plugins-filters-elastic_integration-ssl_verification_mode) | [string](/reference/configuration-file-structure.md#string), one of `["full", "certificate", "none"]` | No |
| [`username`](#plugins-filters-elastic_integration-username) | [string](/reference/configuration-file-structure.md#string) | No |

### `api_key` [plugins-filters-elastic_integration-api_key]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

The encoded form of an API key that is used to authenticate this plugin to {{es}}.

### `cloud_auth` [plugins-filters-elastic_integration-cloud_auth]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

A cloud authentication string (in `"<username>:<password>"` format) that is an alternative to the `username`/`password` pair. It can be obtained from the Elastic Cloud web console.

### `cloud_id` [plugins-filters-elastic_integration-cloud_id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
* Cannot be combined with [`ssl_enabled => false`](#plugins-filters-elastic_integration-ssl_enabled).

Cloud ID, from the Elastic Cloud web console.

When connecting with a Cloud ID, communication to {{es}} is secured with SSL.

For more details, check out the [Logstash-to-Cloud documentation](/reference/connecting-to-cloud.md).

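A sketch of a Cloud deployment configuration (both values are placeholders; substitute your own deployment’s Cloud ID and credentials):

```json
filter {
  elastic_integration {
    cloud_id   => "my-deployment:<base64-encoded-cloud-id>"
    cloud_auth => "elastic:${CLOUD_AUTH_PASSWORD}"
  }
}
```
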
### `geoip_database_directory` [plugins-filters-elastic_integration-geoip_database_directory]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.

When running in a Logstash process that has Geoip Database Management enabled, integrations that use the Geoip Processor will use managed MaxMind databases by default. By using managed databases you accept and agree to the [MaxMind EULA](https://www.maxmind.com/en/geolite2/eula).

You may instead configure this plugin with the path to a local directory containing database files.

This plugin will discover all regular files with the `.mmdb` suffix in the provided directory, and make each available by its file name to the GeoIp processors in integration pipelines. It expects the files it finds to be in the MaxMind DB format with one of the following database types:

* `AnonymousIp`
* `ASN`
* `City`
* `Country`
* `ConnectionType`
* `Domain`
* `Enterprise`
* `Isp`

::::{note}
Most integrations rely on databases being present and named *exactly*:

* `GeoLite2-ASN.mmdb`,
* `GeoLite2-City.mmdb`, or
* `GeoLite2-Country.mmdb`
::::

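A sketch using a self-managed database directory (the path is illustrative; the directory should contain the exactly-named `.mmdb` files listed above):

```json
filter {
  elastic_integration {
    geoip_database_directory => "/etc/logstash/geoip-databases"
  }
}
```
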
### `hosts` [plugins-filters-elastic_integration-hosts]

* Value type is a list of [uri](/reference/configuration-file-structure.md#uri)s
* There is no default value for this setting.
* Constraints:

    * When any URL contains a protocol component, all URLs must have the same protocol as each other.
    * `https`-protocol hosts use HTTPS and cannot be combined with [`ssl_enabled => false`](#plugins-filters-elastic_integration-ssl_enabled).
    * `http`-protocol hosts use unsecured HTTP and cannot be combined with [`ssl_enabled => true`](#plugins-filters-elastic_integration-ssl_enabled).
    * When any URL omits a port component, the default `9200` is used.
    * When any URL contains a path component, all URLs must have the same path as each other.

A non-empty list of {{es}} hosts to connect to.

Examples:

* `"127.0.0.1"`
* `["127.0.0.1:9200","127.0.0.2:9200"]`
* `["http://127.0.0.1"]`
* `["https://127.0.0.1:9200"]`
* `["https://127.0.0.1:9200/subpath"]` (If using a proxy on a subpath)

When connecting with a list of hosts, communication to {{es}} is secured with SSL unless configured otherwise.

::::{admonition} Disabling SSL is dangerous
:class: warning

The security of this plugin relies on SSL to avoid leaking credentials and to avoid running illegitimate ingest pipeline definitions.

There are two ways to disable SSL:

* Provide a list of `http`-protocol hosts
* Set [`ssl_enabled => false`](#plugins-filters-elastic_integration-ssl_enabled)
::::

### `password` [plugins-filters-elastic_integration-password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.
* Required when request auth is configured with [`username`](#plugins-filters-elastic_integration-username)

A password when using HTTP Basic Authentication to connect to {{es}}.

### `pipeline_name` [plugins-filters-elastic_integration-pipeline_name]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
* When present, the event’s initial pipeline will *not* be auto-detected from the event’s data stream fields.
* Value may be a [sprintf-style](/reference/event-dependent-configuration.md#sprintf) template (see the sketch below); if any referenced fields cannot be resolved, the event will not be routed to an ingest pipeline.

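A sketch of template-based routing, assuming an upstream stage has set a `[@metadata][target_pipeline]` field (the field name and host are illustrative):

```json
filter {
  elastic_integration {
    hosts         => ["https://es.example.com:9200"]
    api_key       => "${ES_API_KEY}"
    pipeline_name => "%{[@metadata][target_pipeline]}"
  }
}
```
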
### `ssl_certificate` [plugins-filters-elastic_integration-ssl_certificate]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.
* When present, [`ssl_key`](#plugins-filters-elastic_integration-ssl_key) and [`ssl_key_passphrase`](#plugins-filters-elastic_integration-ssl_key_passphrase) are also required.
* Cannot be combined with configurations that disable SSL

Path to a PEM-encoded certificate or certificate chain with which to identify this plugin to {{es}}.

### `ssl_certificate_authorities` [plugins-filters-elastic_integration-ssl_certificate_authorities]

* Value type is a list of [path](/reference/configuration-file-structure.md#path)s
* There is no default value for this setting.
* Cannot be combined with configurations that disable SSL
* Cannot be combined with [`ssl_verification_mode => none`](#plugins-filters-elastic_integration-ssl_verification_mode).

One or more PEM-formatted files defining certificate authorities.

This setting can be used to *override* the system trust store for verifying the SSL certificate presented by {{es}}.

### `ssl_enabled` [plugins-filters-elastic_integration-ssl_enabled]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* There is no default value for this setting.

Secure SSL communication to {{es}} is enabled unless:

* it is explicitly disabled with `ssl_enabled => false`; OR
* it is implicitly disabled by providing `http`-protocol [`hosts`](#plugins-filters-elastic_integration-hosts).

Specifying `ssl_enabled => true` can be a helpful redundant safeguard to ensure this plugin cannot be configured to use non-SSL communication.

### `ssl_key` [plugins-filters-elastic_integration-ssl_key]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.
* Required when connection identity is configured with [`ssl_certificate`](#plugins-filters-elastic_integration-ssl_certificate)
* Cannot be combined with configurations that disable SSL

A path to a PKCS8-formatted SSL certificate key.

### `ssl_keystore_password` [plugins-filters-elastic_integration-ssl_keystore_password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.
* Required when connection identity is configured with [`ssl_keystore_path`](#plugins-filters-elastic_integration-ssl_keystore_path)
* Cannot be combined with configurations that disable SSL

Password for the [`ssl_keystore_path`](#plugins-filters-elastic_integration-ssl_keystore_path).

### `ssl_keystore_path` [plugins-filters-elastic_integration-ssl_keystore_path]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.
* When present, [`ssl_keystore_password`](#plugins-filters-elastic_integration-ssl_keystore_password) is also required.
* Cannot be combined with configurations that disable SSL

A path to a JKS- or PKCS12-formatted keystore with which to identify this plugin to {{es}}.

### `ssl_key_passphrase` [plugins-filters-elastic_integration-ssl_key_passphrase]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.
* Required when connection identity is configured with [`ssl_certificate`](#plugins-filters-elastic_integration-ssl_certificate)
* Cannot be combined with configurations that disable SSL

A password or passphrase of the [`ssl_key`](#plugins-filters-elastic_integration-ssl_key).

### `ssl_truststore_path` [plugins-filters-elastic_integration-ssl_truststore_path]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.
* When present, [`ssl_truststore_password`](#plugins-filters-elastic_integration-ssl_truststore_password) is required.
* Cannot be combined with configurations that disable SSL
* Cannot be combined with [`ssl_verification_mode => none`](#plugins-filters-elastic_integration-ssl_verification_mode).

A path to a JKS- or PKCS12-formatted keystore where trusted certificates are located.

This setting can be used to *override* the system trust store for verifying the SSL certificate presented by {{es}}.

### `ssl_truststore_password` [plugins-filters-elastic_integration-ssl_truststore_password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.
* Required when connection trust is configured with [`ssl_truststore_path`](#plugins-filters-elastic_integration-ssl_truststore_path)
* Cannot be combined with configurations that disable SSL

Password for the [`ssl_truststore_path`](#plugins-filters-elastic_integration-ssl_truststore_path).

### `ssl_verification_mode` [plugins-filters-elastic_integration-ssl_verification_mode]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
* Cannot be combined with configurations that disable SSL

Level of verification of the certificate provided by {{es}}.

SSL certificates presented by {{es}} are fully validated by default.

Available modes:

* `none`: performs no validation, implicitly trusting any server that this plugin connects to (insecure)
* `certificate`: validates that the server-provided certificate is signed by a trusted certificate authority and that the server can prove possession of its associated private key (less secure)
* `full` (default): performs the same validations as `certificate` and also verifies that the provided certificate has an identity claim matching the server we are attempting to connect to (most secure)

### `username` [plugins-filters-elastic_integration-username]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
* When present, [`password`](#plugins-filters-elastic_integration-password) is also required.

A user name when using HTTP Basic Authentication to connect to {{es}}.

## Common options [plugins-filters-elastic_integration-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-elastic_integration-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-elastic_integration-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-elastic_integration-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-elastic_integration-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-elastic_integration-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-elastic_integration-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-elastic_integration-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-elastic_integration-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  elastic_integration {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  elastic_integration {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-elastic_integration-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  elastic_integration {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  elastic_integration {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-elastic_integration-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-elastic_integration-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have two elastic_integration filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  elastic_integration {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-elastic_integration-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at a regular interval. Optional.

### `remove_field` [plugins-filters-elastic_integration-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  elastic_integration {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  elastic_integration {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-elastic_integration-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  elastic_integration {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  elastic_integration {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

---
navigation_title: "elasticsearch"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html
---

# Elasticsearch filter plugin [plugins-filters-elasticsearch]

* Plugin version: v4.0.0
* Released on: 2025-01-10
* [Changelog](https://github.com/logstash-plugins/logstash-filter-elasticsearch/blob/v4.0.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs-md://vpr/filter-elasticsearch-index.md).

## Getting help [_getting_help_138]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-elasticsearch). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_137]

Search Elasticsearch for a previous log event and copy some fields from it into the current event. Below are two complete examples of how this filter might be used.

The first example uses the legacy *query* parameter, where the user is limited to an Elasticsearch query_string. Whenever Logstash receives an "end" event, it uses this elasticsearch filter to find the matching "start" event based on some operation identifier. Then it copies the `@timestamp` field from the "start" event into a new field on the "end" event. Finally, using a combination of the "date" filter and the "ruby" filter, we calculate the time duration in hours between the two events.

```ruby
if [type] == "end" {
  elasticsearch {
    hosts => ["es-server"]
    query => "type:start AND operation:%{[opid]}"
    fields => { "@timestamp" => "started" }
  }

  date {
    match => ["[started]", "ISO8601"]
    target => "[started]"
  }

  ruby {
    code => "event.set('duration_hrs', (event.get('@timestamp') - event.get('started')) / 3600)"
  }
}
```

The example below reproduces the first example but utilises a `query_template`. The query template represents a full Elasticsearch query DSL and supports the standard Logstash field substitution syntax. It issues the same query as the first example, using the template shown.

```ruby
if [type] == "end" {
  elasticsearch {
    hosts => ["es-server"]
    query_template => "template.json"
    fields => { "@timestamp" => "started" }
  }

  date {
    match => ["[started]", "ISO8601"]
    target => "[started]"
  }

  ruby {
    code => "event.set('duration_hrs', (event.get('@timestamp') - event.get('started')) / 3600)"
  }
}
```

template.json:

```json
{
  "size": 1,
  "sort" : [ { "@timestamp" : "desc" } ],
  "query": {
    "query_string": {
      "query": "type:start AND operation:%{[opid]}"
    }
  },
  "_source": ["@timestamp"]
}
```

As illustrated above, through the use of *opid*, fields from the Logstash events can be referenced within the template. The template will be populated per event prior to being used to query Elasticsearch.

Notice also that when you use `query_template`, the Logstash attributes `result_size` and `sort` will be ignored. They should be specified directly in the JSON template, as shown in the example above.

## Authentication [plugins-filters-elasticsearch-auth]

Authentication to a secure Elasticsearch cluster is possible using *one* of the following options:

* [`user`](#plugins-filters-elasticsearch-user) AND [`password`](#plugins-filters-elasticsearch-password)
* [`cloud_auth`](#plugins-filters-elasticsearch-cloud_auth)
* [`api_key`](#plugins-filters-elasticsearch-api_key)
* [`ssl_keystore_path`](#plugins-filters-elasticsearch-ssl_keystore_path) and/or [`ssl_keystore_password`](#plugins-filters-elasticsearch-ssl_keystore_password)

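For instance, a sketch of the `user`/`password` option, with the password kept out of the config file via an environment-variable or secret-store reference (the username and variable name are illustrative):

```ruby
filter {
  elasticsearch {
    hosts    => ["es-server:9200"]
    user     => "logstash_user"
    password => "${ES_PASSWORD}"
    query    => "type:start AND operation:%{[opid]}"
    fields   => { "@timestamp" => "started" }
  }
}
```
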
## Authorization [plugins-filters-elasticsearch-autz]

Authorization to a secure Elasticsearch cluster requires `read` permission at the index level and `monitoring` permissions at the cluster level. The `monitoring` permission at the cluster level is necessary to perform periodic connectivity checks.

## Elasticsearch Filter Configuration Options [plugins-filters-elasticsearch-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-elasticsearch-common-options) described later.

::::{note}
As of version `4.0.0` of this plugin, a number of previously deprecated settings related to SSL have been removed. Please see the [Elasticsearch Filter Obsolete Configuration Options](#plugins-filters-elasticsearch-obsolete-options) for more details.
::::

| Setting | Input type | Required |
| --- | --- | --- |
| [`aggregation_fields`](#plugins-filters-elasticsearch-aggregation_fields) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`api_key`](#plugins-filters-elasticsearch-api_key) | [password](/reference/configuration-file-structure.md#password) | No |
| [`ca_trusted_fingerprint`](#plugins-filters-elasticsearch-ca_trusted_fingerprint) | [string](/reference/configuration-file-structure.md#string) | No |
| [`cloud_auth`](#plugins-filters-elasticsearch-cloud_auth) | [password](/reference/configuration-file-structure.md#password) | No |
| [`cloud_id`](#plugins-filters-elasticsearch-cloud_id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`docinfo_fields`](#plugins-filters-elasticsearch-docinfo_fields) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`enable_sort`](#plugins-filters-elasticsearch-enable_sort) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`fields`](#plugins-filters-elasticsearch-fields) | [array](/reference/configuration-file-structure.md#array) | No |
| [`hosts`](#plugins-filters-elasticsearch-hosts) | [array](/reference/configuration-file-structure.md#array) | No |
| [`index`](#plugins-filters-elasticsearch-index) | [string](/reference/configuration-file-structure.md#string) | No |
| [`password`](#plugins-filters-elasticsearch-password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`proxy`](#plugins-filters-elasticsearch-proxy) | [uri](/reference/configuration-file-structure.md#uri) | No |
| [`query`](#plugins-filters-elasticsearch-query) | [string](/reference/configuration-file-structure.md#string) | No |
| [`query_template`](#plugins-filters-elasticsearch-query_template) | [string](/reference/configuration-file-structure.md#string) | No |
| [`result_size`](#plugins-filters-elasticsearch-result_size) | [number](/reference/configuration-file-structure.md#number) | No |
| [`retry_on_failure`](#plugins-filters-elasticsearch-retry_on_failure) | [number](/reference/configuration-file-structure.md#number) | No |
| [`retry_on_status`](#plugins-filters-elasticsearch-retry_on_status) | [array](/reference/configuration-file-structure.md#array) | No |
| [`sort`](#plugins-filters-elasticsearch-sort) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ssl_certificate`](#plugins-filters-elasticsearch-ssl_certificate) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_certificate_authorities`](#plugins-filters-elasticsearch-ssl_certificate_authorities) | list of [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_cipher_suites`](#plugins-filters-elasticsearch-ssl_cipher_suites) | list of [string](/reference/configuration-file-structure.md#string) | No |
| [`ssl_enabled`](#plugins-filters-elasticsearch-ssl_enabled) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`ssl_key`](#plugins-filters-elasticsearch-ssl_key) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_keystore_password`](#plugins-filters-elasticsearch-ssl_keystore_password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`ssl_keystore_path`](#plugins-filters-elasticsearch-ssl_keystore_path) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_keystore_type`](#plugins-filters-elasticsearch-ssl_keystore_type) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ssl_supported_protocols`](#plugins-filters-elasticsearch-ssl_supported_protocols) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ssl_truststore_password`](#plugins-filters-elasticsearch-ssl_truststore_password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`ssl_truststore_path`](#plugins-filters-elasticsearch-ssl_truststore_path) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_truststore_type`](#plugins-filters-elasticsearch-ssl_truststore_type) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ssl_verification_mode`](#plugins-filters-elasticsearch-ssl_verification_mode) | [string](/reference/configuration-file-structure.md#string), one of `["full", "none"]` | No |
| [`tag_on_failure`](#plugins-filters-elasticsearch-tag_on_failure) | [array](/reference/configuration-file-structure.md#array) | No |
| [`user`](#plugins-filters-elasticsearch-user) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-elasticsearch-common-options) for a list of options supported by all filter plugins.

### `aggregation_fields` [plugins-filters-elasticsearch-aggregation_fields]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

Hash of aggregation names to copy from the Elasticsearch response into Logstash event fields.

Example:

```ruby
filter {
  elasticsearch {
    aggregation_fields => {
      "my_agg_name" => "my_ls_field"
    }
  }
}
```

### `api_key` [plugins-filters-elasticsearch-api_key]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

Authenticate using an Elasticsearch API key. Note that this option also requires enabling the [`ssl_enabled`](#plugins-filters-elasticsearch-ssl_enabled) option.

Format is `id:api_key`, where `id` and `api_key` are as returned by the Elasticsearch [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key).

### `ca_trusted_fingerprint` [plugins-filters-elasticsearch-ca_trusted_fingerprint]

* Value type is [string](/reference/configuration-file-structure.md#string), and must contain exactly 64 hexadecimal characters.
* There is no default value for this setting.
* Use of this option *requires* Logstash 8.3+

The SHA-256 fingerprint of an SSL Certificate Authority to trust, such as the autogenerated self-signed CA for an Elasticsearch cluster.

### `cloud_auth` [plugins-filters-elasticsearch-cloud_auth]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

A cloud authentication string (in `"<username>:<password>"` format) that is an alternative to the `user`/`password` pair.

For more info, check out the [Logstash-to-Cloud documentation](/reference/connecting-to-cloud.md).

### `cloud_id` [plugins-filters-elasticsearch-cloud_id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Cloud ID, from the Elastic Cloud web console. If set, `hosts` should not be used.

For more info, check out the [Logstash-to-Cloud documentation](/reference/connecting-to-cloud.md).

### `docinfo_fields` [plugins-filters-elasticsearch-docinfo_fields]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

Hash of docinfo fields to copy from the old event (found via Elasticsearch) into the new event.

Example:

```ruby
filter {
  elasticsearch {
    docinfo_fields => {
      "_id" => "document_id"
      "_index" => "document_index"
    }
  }
}
```

### `enable_sort` [plugins-filters-elasticsearch-enable_sort]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Whether results should be sorted or not.

### `fields` [plugins-filters-elasticsearch-fields]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `{}`

An array of fields to copy from the old event (found via Elasticsearch) into the new event currently being processed.

In the following example, the values of `@timestamp` and `event_id` on the event found via Elasticsearch are copied to the current event’s `started` and `start_id` fields, respectively:

```ruby
fields => {
  "@timestamp" => "started"
  "event_id" => "start_id"
}
```

### `hosts` [plugins-filters-elasticsearch-hosts]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["localhost:9200"]`

List of Elasticsearch hosts to use for querying.

### `index` [plugins-filters-elasticsearch-index]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `""`

Comma-delimited list of index names to search; use `_all` or an empty string to perform the operation on all indices. Field substitution (e.g. `index-name-%{{date_field}}`) is available, as shown in the sketch below.

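A sketch of per-event index selection, assuming the event carries a `date_field` value that matches your index naming (both names are illustrative):

```ruby
filter {
  elasticsearch {
    hosts => ["es-server:9200"]
    index => "index-name-%{[date_field]}"
    query => "type:start AND operation:%{[opid]}"
    fields => { "@timestamp" => "started" }
  }
}
```
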
### `password` [plugins-filters-elasticsearch-password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

Basic Auth - password

### `proxy` [plugins-filters-elasticsearch-proxy]

* Value type is [uri](/reference/configuration-file-structure.md#uri)
* There is no default value for this setting.

Set the address of a forward HTTP proxy. An empty string is treated as if the proxy were not set, which is useful when using environment variables, e.g. `proxy => '${LS_PROXY:}'`.

### `query` [plugins-filters-elasticsearch-query]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Elasticsearch query string. More information is available in the [Elasticsearch query string documentation](elasticsearch://reference/query-languages/query-dsl-query-string-query.md#query-string-syntax). Use either `query` or `query_template`.

### `query_template` [plugins-filters-elasticsearch-query_template]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

File path to an Elasticsearch query in DSL format. More information is available in the [Elasticsearch query documentation](elasticsearch://reference/query-languages/querydsl.md). Use either `query` or `query_template`.

### `result_size` [plugins-filters-elasticsearch-result_size]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `1`

How many results to return.

### `retry_on_failure` [plugins-filters-elasticsearch-retry_on_failure]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `0` (retries disabled)

How many times to retry an individual failed request.

When enabled, retries requests that result in connection errors or an HTTP status code included in [`retry_on_status`](#plugins-filters-elasticsearch-retry_on_status).

### `retry_on_status` [plugins-filters-elasticsearch-retry_on_status]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is an empty list `[]`

Which HTTP status codes to consider for retries (in addition to connection errors) when using [`retry_on_failure`](#plugins-filters-elasticsearch-retry_on_failure).

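A sketch that retries transient failures a few times (the specific status codes are illustrative; choose ones that are transient in your environment):

```ruby
filter {
  elasticsearch {
    hosts            => ["es-server:9200"]
    query            => "type:start AND operation:%{[opid]}"
    retry_on_failure => 3
    retry_on_status  => [429, 502, 503]
  }
}
```
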
### `sort` [plugins-filters-elasticsearch-sort]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"@timestamp:desc"`

Comma-delimited list of `<field>:<direction>` pairs that define the sort order.

### `ssl_certificate` [plugins-filters-elasticsearch-ssl_certificate]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.

SSL certificate to use to authenticate the client. This certificate should be an OpenSSL-style X.509 certificate file.

::::{note}
This setting can be used only if [`ssl_key`](#plugins-filters-elasticsearch-ssl_key) is set.
::::

### `ssl_certificate_authorities` [plugins-filters-elasticsearch-ssl_certificate_authorities]

* Value type is a list of [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting

The .cer or .pem files to validate the server’s certificate.

::::{note}
You cannot use this setting and [`ssl_truststore_path`](#plugins-filters-elasticsearch-ssl_truststore_path) at the same time.
::::

### `ssl_cipher_suites` [plugins-filters-elasticsearch-ssl_cipher_suites]

* Value type is a list of [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting

The list of cipher suites to use, listed by priority. Supported cipher suites vary depending on the Java and protocol versions.

### `ssl_enabled` [plugins-filters-elasticsearch-ssl_enabled]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* There is no default value for this setting.

Enable SSL/TLS secured communication to the Elasticsearch cluster. Leaving this unspecified will use whatever scheme is specified in the URLs listed in [`hosts`](#plugins-filters-elasticsearch-hosts) or extracted from the [`cloud_id`](#plugins-filters-elasticsearch-cloud_id). If no explicit protocol is specified, plain HTTP will be used.

### `ssl_key` [plugins-filters-elasticsearch-ssl_key]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.

OpenSSL-style RSA private key that corresponds to the [`ssl_certificate`](#plugins-filters-elasticsearch-ssl_certificate).

::::{note}
This setting can be used only if [`ssl_certificate`](#plugins-filters-elasticsearch-ssl_certificate) is set.
::::

### `ssl_keystore_password` [plugins-filters-elasticsearch-ssl_keystore_password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

Set the keystore password.

### `ssl_keystore_path` [plugins-filters-elasticsearch-ssl_keystore_path]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.

The keystore used to present a certificate to the server. It can be either `.jks` or `.p12`.

::::{note}
You cannot use this setting and [`ssl_certificate`](#plugins-filters-elasticsearch-ssl_certificate) at the same time.
::::

### `ssl_keystore_type` [plugins-filters-elasticsearch-ssl_keystore_type]

* Value can be any of: `jks`, `pkcs12`
* If not provided, the value will be inferred from the keystore filename.

The format of the keystore file. It must be either `jks` or `pkcs12`.

### `ssl_supported_protocols` [plugins-filters-elasticsearch-ssl_supported_protocols]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Allowed values are: `'TLSv1.1'`, `'TLSv1.2'`, `'TLSv1.3'`
* Default depends on the JDK being used. With up-to-date Logstash, the default is `['TLSv1.2', 'TLSv1.3']`. `'TLSv1.1'` is not considered secure and is only provided for legacy applications.

List of allowed SSL/TLS versions to use when establishing a connection to the Elasticsearch cluster.

For Java 8, `'TLSv1.3'` is supported only since **8u262** (AdoptOpenJDK), and it additionally requires setting the `LS_JAVA_OPTS="-Djdk.tls.client.protocols=TLSv1.3"` system property in Logstash.

::::{note}
If you configure the plugin to use `'TLSv1.1'` on any recent JVM, such as the one packaged with Logstash, the protocol is disabled by default and needs to be enabled manually by changing `jdk.tls.disabledAlgorithms` in the **$JDK_HOME/conf/security/java.security** configuration file. That is, `TLSv1.1` needs to be removed from the list.
::::

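For example, a sketch that restricts connections to TLSv1.3 only (assuming both your JDK and your cluster support it; the host is a placeholder):

```ruby
filter {
  elasticsearch {
    hosts                   => ["https://es-server:9200"]
    ssl_enabled             => true
    ssl_supported_protocols => ["TLSv1.3"]
  }
}
```
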
### `ssl_truststore_password` [plugins-filters-elasticsearch-ssl_truststore_password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

Set the truststore password.

### `ssl_truststore_path` [plugins-filters-elasticsearch-ssl_truststore_path]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.

The truststore to validate the server’s certificate. It can be either `.jks` or `.p12`.

::::{note}
You cannot use this setting and [`ssl_certificate_authorities`](#plugins-filters-elasticsearch-ssl_certificate_authorities) at the same time.
::::

### `ssl_truststore_type` [plugins-filters-elasticsearch-ssl_truststore_type]

* Value can be any of: `jks`, `pkcs12`
* If not provided, the value will be inferred from the truststore filename.

The format of the truststore file. It must be either `jks` or `pkcs12`.

### `ssl_verification_mode` [plugins-filters-elasticsearch-ssl_verification_mode]

* Value can be any of: `full`, `none`
* Default value is `full`

Defines how to verify the certificates presented by another party in the TLS connection:

`full` validates that the server certificate has an issue date that’s within the `not_before` and `not_after` dates, chains to a trusted Certificate Authority (CA), and has a hostname or IP address that matches the names within the certificate.

`none` performs no certificate validation.

::::{warning}
Setting certificate verification to `none` disables many security benefits of SSL/TLS, which is very dangerous. For more information on disabling certificate verification please read [https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf](https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf).
::::

### `tag_on_failure` [plugins-filters-elasticsearch-tag_on_failure]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["_elasticsearch_lookup_failure"]`

Tags the event on failure to look up previous log event information. This can be used in later analysis.

### `user` [plugins-filters-elasticsearch-user]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Basic Auth - username

## Elasticsearch Filter Obsolete Configuration Options [plugins-filters-elasticsearch-obsolete-options]

::::{warning}
As of version `4.0.0` of this plugin, some configuration options have been replaced. The plugin will fail to start if it contains any of these obsolete options.
::::

| Setting | Replaced by |
| --- | --- |
| ca_file | [`ssl_certificate_authorities`](#plugins-filters-elasticsearch-ssl_certificate_authorities) |
| keystore | [`ssl_keystore_path`](#plugins-filters-elasticsearch-ssl_keystore_path) |
| keystore_password | [`ssl_keystore_password`](#plugins-filters-elasticsearch-ssl_keystore_password) |
| ssl | [`ssl_enabled`](#plugins-filters-elasticsearch-ssl_enabled) |

## Common options [plugins-filters-elasticsearch-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-elasticsearch-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-elasticsearch-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-elasticsearch-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-elasticsearch-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-elasticsearch-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-elasticsearch-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-elasticsearch-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-elasticsearch-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  elasticsearch {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  elasticsearch {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-elasticsearch-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  elasticsearch {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  elasticsearch {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-elasticsearch-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-elasticsearch-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 elasticsearch filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
elasticsearch {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-elasticsearch-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at a regular interval. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-elasticsearch-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
elasticsearch {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
elasticsearch {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-elasticsearch-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
elasticsearch {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
elasticsearch {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
|
@ -1,246 +0,0 @@
|
|||
---
|
||||
navigation_title: "environment"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-environment.html
|
||||
---
|
||||
|
||||
# Environment filter plugin [plugins-filters-environment]
|
||||
|
||||
|
||||
* Plugin version: v3.0.3
|
||||
* Released on: 2017-11-07
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-environment/blob/v3.0.3/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-environment-index.md).
|
||||
|
||||
## Installation [_installation_59]
|
||||
|
||||
For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-environment`. See [Working with plugins](/reference/working-with-plugins.md) for more details.
|
||||
|
||||
|
||||
## Getting help [_getting_help_139]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-environment). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_138]
|
||||
|
||||
This filter stores environment variables as subfields in the `@metadata` field. You can then use these values in other parts of the pipeline.
|
||||
|
||||
Adding environment variables is as easy as: `filter { environment { add_metadata_from_env => { "field_name" => "ENV_VAR_NAME" } } }`
|
||||
|
||||
Accessing stored environment variables is now done through the `@metadata` field:
|
||||
|
||||
```
|
||||
["@metadata"]["field_name"]
|
||||
```
|
||||
This would reference field `field_name`, which in the above example references the `ENV_VAR_NAME` environment variable.
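For example, a minimal sketch (the `DEPLOY_ENV` variable and the field names are hypothetical) that copies an environment variable into a regular event field:

```json
filter {
  environment {
    add_metadata_from_env => { "deploy_env" => "DEPLOY_ENV" }
  }
  mutate {
    # @metadata subfields are referenced with the [@metadata][...] syntax
    add_field => { "environment" => "%{[@metadata][deploy_env]}" }
  }
}
```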
|
||||
|
||||
::::{important}
|
||||
Previous versions of this plugin put the environment variables as fields at the root level of the event. Current versions make use of the `@metadata` field, as outlined. You have to change `add_field_from_env` in the older versions to `add_metadata_from_env` in the newer version.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
## Environment Filter Configuration Options [plugins-filters-environment-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-environment-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_metadata_from_env`](#plugins-filters-environment-add_metadata_from_env) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-environment-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `add_metadata_from_env` [plugins-filters-environment-add_metadata_from_env]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
Specify a hash of field names and the environment variable name with the value you want imported into Logstash. For example:
|
||||
|
||||
```
|
||||
add_metadata_from_env => { "field_name" => "ENV_VAR_NAME" }
|
||||
```
|
||||
or
|
||||
|
||||
```
|
||||
add_metadata_from_env => {
|
||||
"field1" => "ENV1"
|
||||
"field2" => "ENV2"
|
||||
# "field_n" => "ENV_n"
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## Common options [plugins-filters-environment-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-environment-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-environment-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-environment-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-environment-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-environment-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-environment-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-environment-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-environment-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
environment {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
environment {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-environment-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
environment {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
environment {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-environment-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-environment-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 environment filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
environment {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-environment-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at a regular interval. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-environment-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
environment {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
environment {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-environment-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
environment {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
environment {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,226 +0,0 @@
|
|||
---
|
||||
navigation_title: "extractnumbers"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-extractnumbers.html
|
||||
---
|
||||
|
||||
# Extractnumbers filter plugin [plugins-filters-extractnumbers]
|
||||
|
||||
|
||||
* Plugin version: v3.0.3
|
||||
* Released on: 2017-11-07
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-extractnumbers/blob/v3.0.3/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-extractnumbers-index.md).
|
||||
|
||||
## Installation [_installation_60]
|
||||
|
||||
For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-extractnumbers`. See [Working with plugins](/reference/working-with-plugins.md) for more details.
|
||||
|
||||
|
||||
## Getting help [_getting_help_140]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-extractnumbers). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_139]
|
||||
|
||||
This filter automatically extracts all numbers found inside a string.
|
||||
|
||||
This is useful when you have lines that don’t match a grok pattern and aren’t JSON, but you still need to extract numbers.
|
||||
|
||||
Each number is returned in a `@fields.intX` or `@fields.floatX` field, where `X` indicates its position in the string.
|
||||
|
||||
The fields produced by this filter are especially useful in combination with Kibana’s number plotting features.
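For example, a minimal sketch (the sample message is hypothetical; field naming follows the description above):

```json
filter {
  extractnumbers {
    source => "message"
  }
}
```

For a message such as `took 42 ms using 3.5 MB`, this would produce an integer field for `42` and a float field for `3.5`, named by their position in the string as described above.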
|
||||
|
||||
|
||||
## Extractnumbers Filter Configuration Options [plugins-filters-extractnumbers-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-extractnumbers-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`source`](#plugins-filters-extractnumbers-source) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-extractnumbers-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `source` [plugins-filters-extractnumbers-source]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"message"`
|
||||
|
||||
The source field for the data. By default, this is `message`.
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-extractnumbers-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-extractnumbers-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-extractnumbers-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-extractnumbers-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-extractnumbers-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-extractnumbers-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-extractnumbers-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-extractnumbers-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-extractnumbers-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
extractnumbers {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
extractnumbers {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-extractnumbers-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
extractnumbers {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
extractnumbers {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-extractnumbers-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-extractnumbers-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 extractnumbers filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
extractnumbers {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-extractnumbers-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at a regular interval. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-extractnumbers-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
extractnumbers {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
extractnumbers {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-extractnumbers-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
extractnumbers {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
extractnumbers {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,351 +0,0 @@
|
|||
---
|
||||
navigation_title: "fingerprint"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-fingerprint.html
|
||||
---
|
||||
|
||||
# Fingerprint filter plugin [plugins-filters-fingerprint]
|
||||
|
||||
|
||||
* Plugin version: v3.4.4
|
||||
* Released on: 2024-03-19
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-fingerprint/blob/v3.4.4/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-fingerprint-index.md).
|
||||
|
||||
## Getting help [_getting_help_141]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-fingerprint). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_140]
|
||||
|
||||
Create consistent hashes (fingerprints) of one or more fields and store the result in a new field.
|
||||
|
||||
You can use this plugin to create consistent document ids when events are inserted into Elasticsearch. This approach means that existing documents can be updated instead of creating new documents.
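A minimal sketch of that pattern, assuming a hypothetical Elasticsearch host; the fingerprint is kept in `@metadata` so it is not indexed as a regular field:

```ruby
filter {
  fingerprint {
    source => ["message"]
    method => "SHA256"
    target => "[@metadata][fingerprint]"
  }
}
output {
  elasticsearch {
    hosts => ["https://es.example.com:9200"]
    # Reusing the fingerprint as the document id makes repeated events update
    # the same document instead of creating duplicates.
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```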
|
||||
|
||||
::::{note}
|
||||
When the `method` option is set to `UUID` the result won’t be a consistent hash but a random [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier). To generate UUIDs, prefer the [uuid filter](/reference/plugins-filters-uuid.md).
|
||||
::::
|
||||
|
||||
|
||||
|
||||
## Event Metadata and the Elastic Common Schema (ECS) [plugins-filters-fingerprint-ecs_metadata]
|
||||
|
||||
This plugin adds a hash value to the event as an identifier. You can configure the `target` option to change the output field.
|
||||
|
||||
When ECS compatibility is disabled, the hash value is stored in the `fingerprint` field. When ECS is enabled, the value is stored in the `[event][hash]` field.
|
||||
|
||||
Here’s how ECS compatibility mode affects output.
|
||||
|
||||
| ECS disabled | ECS v1 | Availability | Description |
|
||||
| --- | --- | --- | --- |
|
||||
| fingerprint | [event][hash] | *Always* | *a hash value of event* |
|
||||
|
||||
|
||||
## Fingerprint Filter Configuration Options [plugins-filters-fingerprint-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-fingerprint-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`base64encode`](#plugins-filters-fingerprint-base64encode) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`concatenate_sources`](#plugins-filters-fingerprint-concatenate_sources) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`concatenate_all_fields`](#plugins-filters-fingerprint-concatenate_all_fields) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`ecs_compatibility`](#plugins-filters-fingerprint-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`key`](#plugins-filters-fingerprint-key) | [password](/reference/configuration-file-structure.md#password) | No |
|
||||
| [`method`](#plugins-filters-fingerprint-method) | [string](/reference/configuration-file-structure.md#string), one of `["SHA1", "SHA256", "SHA384", "SHA512", "MD5", "MURMUR3", "MURMUR3_128", "IPV4_NETWORK", "UUID", "PUNCTUATION"]` | Yes |
|
||||
| [`source`](#plugins-filters-fingerprint-source) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`target`](#plugins-filters-fingerprint-target) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-fingerprint-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `base64encode` [plugins-filters-fingerprint-base64encode]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
When set to `true`, the `SHA1`, `SHA256`, `SHA384`, `SHA512`, `MD5` and `MURMUR3_128` fingerprint methods will produce base64 encoded rather than hex encoded strings.
|
||||
|
||||
|
||||
### `concatenate_sources` [plugins-filters-fingerprint-concatenate_sources]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
When set to `true` and `method` isn’t `UUID` or `PUNCTUATION`, the plugin concatenates the names and values of all fields given in the `source` option into one string (like the old checksum filter) before doing the fingerprint computation.
|
||||
|
||||
If `false` and multiple source fields are given, the target field will be a single fingerprint of the last source field.
|
||||
|
||||
**Example: `concatenate_sources`=false**
|
||||
|
||||
This example produces a single fingerprint that is computed from "birthday," the last source field.
|
||||
|
||||
```ruby
|
||||
fingerprint {
|
||||
source => ["user_id", "siblings", "birthday"]
|
||||
}
|
||||
```
|
||||
|
||||
The output is:
|
||||
|
||||
```ruby
|
||||
"fingerprint" => "6b6390a4416131f82b6ffb509f6e779e5dd9630f".
|
||||
```
|
||||
|
||||
**Example: `concatenate_sources`=false with array**
|
||||
|
||||
If the last source field is an array, you get an array of fingerprints.
|
||||
|
||||
In this example, "siblings" is an array ["big brother", "little sister", "little brother"].
|
||||
|
||||
```ruby
|
||||
fingerprint {
|
||||
source => ["user_id", "siblings"]
|
||||
}
|
||||
```
|
||||
|
||||
The output is:
|
||||
|
||||
```ruby
|
||||
"fingerprint" => [
|
||||
[0] "8a8a9323677f4095fcf0c8c30b091a0133b00641",
|
||||
[1] "2ce11b313402e0e9884e094409f8d9fcf01337c2",
|
||||
[2] "adc0b90f9391a82098c7b99e66a816e9619ad0a7"
|
||||
],
|
||||
```
|
||||
|
||||
|
||||
### `concatenate_all_fields` [plugins-filters-fingerprint-concatenate_all_fields]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
When set to `true` and `method` isn’t `UUID` or `PUNCTUATION`, the plugin concatenates the names and values of all fields of the event into one string (like the old checksum filter) before doing the fingerprint computation. If `false` and at least one source field is given, the target field will be an array with fingerprints of the source fields given.
|
||||
|
||||
|
||||
### `ecs_compatibility` [plugins-filters-fingerprint-ecs_compatibility]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Supported values are:
|
||||
|
||||
* `disabled`: unstructured data added at root level
|
||||
* `v1`: uses `[event][hash]` fields that are compatible with Elastic Common Schema
|
||||
|
||||
|
||||
Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md). See [Event Metadata and the Elastic Common Schema (ECS)](#plugins-filters-fingerprint-ecs_metadata) for detailed information.
|
||||
|
||||
|
||||
### `key` [plugins-filters-fingerprint-key]
|
||||
|
||||
* Value type is [password](/reference/configuration-file-structure.md#password)
|
||||
* There is no default value for this setting.
|
||||
|
||||
When used with the `IPV4_NETWORK` method, fill in the subnet prefix length. With other methods, optionally fill in the HMAC key.
|
||||
|
||||
|
||||
### `method` [plugins-filters-fingerprint-method]
|
||||
|
||||
* This is a required setting.
|
||||
* Value can be any of: `SHA1`, `SHA256`, `SHA384`, `SHA512`, `MD5`, `MURMUR3`, `MURMUR3_128`, `IPV4_NETWORK`, `UUID`, `PUNCTUATION`
|
||||
* Default value is `"SHA1"`
|
||||
|
||||
The fingerprint method to use.
|
||||
|
||||
If set to `SHA1`, `SHA256`, `SHA384`, `SHA512`, or `MD5` and a key is set, the corresponding cryptographic hash function and the keyed-hash (HMAC) digest function are used to generate the fingerprint.
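For example, a sketch of a keyed (HMAC) fingerprint; the source field and the key reference are hypothetical:

```ruby
filter {
  fingerprint {
    source => ["user_id"]
    method => "SHA256"
    # With a key set, the keyed-hash (HMAC) digest function is used.
    key => "${FINGERPRINT_KEY}"
  }
}
```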
|
||||
|
||||
If set to `MURMUR3` or `MURMUR3_128` the non-cryptographic MurmurHash function (either the 32-bit or 128-bit implementation, respectively) will be used.
|
||||
|
||||
If set to `IPV4_NETWORK`, the input data needs to be an IPv4 address, and the hash value will be the masked-out address using the number of bits specified in the `key` option. For example, with "1.2.3.4" as the input and `key` set to 16, the hash becomes "1.2.0.0".
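A sketch of that example; the source field name is hypothetical:

```ruby
filter {
  fingerprint {
    source => ["client_ip"]
    method => "IPV4_NETWORK"
    key => "16"   # subnet prefix length; "1.2.3.4" becomes "1.2.0.0"
  }
}
```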
|
||||
|
||||
If set to `PUNCTUATION`, all non-punctuation characters will be removed from the input string.
|
||||
|
||||
If set to `UUID`, a [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier) will be generated. The result will be random and thus not a consistent hash.
|
||||
|
||||
|
||||
### `source` [plugins-filters-fingerprint-source]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `"message"`
|
||||
|
||||
The name(s) of the source field(s) whose contents will be used to create the fingerprint. If an array is given, see the `concatenate_sources` option.
|
||||
|
||||
|
||||
### `target` [plugins-filters-fingerprint-target]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"fingerprint"` when ECS is disabled
|
||||
* Default value is `"[event][hash]"` when ECS is enabled
|
||||
|
||||
The name of the field where the generated fingerprint will be stored. Any current contents of that field will be overwritten.
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-fingerprint-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-fingerprint-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-fingerprint-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-fingerprint-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-fingerprint-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-fingerprint-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-fingerprint-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-fingerprint-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-fingerprint-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
fingerprint {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
fingerprint {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-fingerprint-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
fingerprint {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
fingerprint {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-fingerprint-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-fingerprint-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 fingerprint filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
fingerprint {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-fingerprint-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at a regular interval. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-fingerprint-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
fingerprint {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
fingerprint {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-fingerprint-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
fingerprint {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
fingerprint {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
|
@ -1,508 +0,0 @@
|
|||
---
|
||||
navigation_title: "geoip"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html
|
||||
---
|
||||
|
||||
# Geoip filter plugin [plugins-filters-geoip]
|
||||
|
||||
|
||||
* Plugin version: v7.3.1
|
||||
* Released on: 2024-10-11
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-geoip/blob/v7.3.1/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-geoip-index.md).
|
||||
|
||||
## Getting help [_getting_help_142]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-geoip). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_141]
|
||||
|
||||
The GeoIP filter adds information about the geographical location of IP addresses, based on data from the MaxMind GeoLite2 databases.
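In its simplest form, a minimal sketch (the `clientip` field is hypothetical) with ECS compatibility disabled:

```json
filter {
  geoip {
    source => "clientip"
  }
}
```

The lookup result is then stored under the default [`target`](#plugins-filters-geoip-target) field.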
|
||||
|
||||
|
||||
## Supported Databases [_supported_databases]
|
||||
|
||||
This plugin is bundled with the [GeoLite2](https://dev.maxmind.com/geoip/geoip2/geolite2) City database out of the box. From MaxMind’s description — "GeoLite2 databases are free IP geolocation databases comparable to, but less accurate than, MaxMind’s GeoIP2 databases". Please see the GeoLite2 license for more details.
|
||||
|
||||
[Commercial databases](https://www.maxmind.com/en/geoip2-databases) from MaxMind are also supported in this plugin.
|
||||
|
||||
If you need to use databases other than the bundled GeoLite2 City, you can download them directly from MaxMind’s website and use the `database` option to specify their location. The GeoLite2 databases can be [downloaded from here](https://dev.maxmind.com/geoip/geoip2/geolite2).
|
||||
|
||||
If you would like to get Autonomous System Number (ASN) information, you can use the GeoLite2-ASN database.
|
||||
|
||||
|
||||
## Database License [plugins-filters-geoip-database_license]
|
||||
|
||||
[MaxMind](https://www.maxmind.com) changed from releasing the GeoIP database under a Creative Commons (CC) license to a proprietary end-user license agreement (EULA). The MaxMind EULA requires Logstash to update the MaxMind database within 30 days of a database update.
|
||||
|
||||
The GeoIP filter plugin can manage the database for users running the Logstash default distribution, or you can manage database updates on your own. The behavior is controlled by the `database` setting and by the auto-update feature. When you use the default `database` setting and the auto-update feature is enabled, Logstash ensures that the plugin is using the latest version of the database. Otherwise, you are responsible for maintaining compliance.
|
||||
|
||||
The Logstash open source distribution uses the MaxMind Creative Commons license database by default.
|
||||
|
||||
|
||||
## Database Auto-update [plugins-filters-geoip-database_auto]
|
||||
|
||||
This plugin bundles Creative Commons (CC) license databases. If the auto-update feature is enabled in `logstash.yml` (as it is by default), Logstash checks for database updates every day. It downloads the latest version and can replace the old database while the plugin is running.
|
||||
|
||||
::::{note}
|
||||
If the auto-update feature is disabled or the database has never been updated successfully, as in air-gapped environments, Logstash can use CC license databases indefinitely.
|
||||
::::
|
||||
|
||||
|
||||
After Logstash has switched to a EULA-licensed database, the geoip filter will stop enriching events in order to maintain compliance if Logstash fails to check for database updates for 30 days. Events will be tagged with the `_geoip_expired_database` tag to facilitate handling of this situation.
|
||||
|
||||
::::{note}
|
||||
If the auto-update feature is enabled, Logstash upgrades from the CC database license to the EULA version on the first download.
|
||||
::::
|
||||
|
||||
|
||||
::::{tip}
|
||||
When possible, allow Logstash to access the internet to download databases so that they are always up-to-date.
|
||||
::::
|
||||
|
||||
|
||||
**Disable the auto-update feature**
|
||||
|
||||
If you work in air-gapped environment and want to disable the database auto-update feature, set the `xpack.geoip.downloader.enabled` value to `false` in `logstash.yml`.
|
||||
|
||||
When the auto-update feature is disabled, Logstash uses the Creative Commons (CC) license databases indefinitely, and any previously downloaded version of the EULA databases will be deleted.
|
||||
|
||||
|
||||
## Manage your own database updates [plugins-filters-geoip-manage_update]
|
||||
|
||||
**Use an HTTP proxy**
|
||||
|
||||
If you can’t connect directly to the Elastic GeoIP endpoint, consider setting up an HTTP proxy server. You can then specify the proxy with the `http_proxy` environment variable.
|
||||
|
||||
```sh
|
||||
export http_proxy="http://PROXY_IP:PROXY_PORT"
|
||||
```
|
||||
|
||||
**Use a custom endpoint (air-gapped environments)**
|
||||
|
||||
If you work in an air-gapped environment and can’t update your databases from the Elastic endpoint, you can download the databases from MaxMind and bootstrap the service yourself.
|
||||
|
||||
1. Download both `GeoLite2-ASN.mmdb` and `GeoLite2-City.mmdb` database files from the [MaxMind site](http://dev.maxmind.com/geoip/geoip2/geolite2).
|
||||
2. Copy both database files to a single directory.
|
||||
3. [Download {{es}}](https://www.elastic.co/downloads/elasticsearch).
|
||||
4. From your {{es}} directory, run:
|
||||
|
||||
```sh
|
||||
./bin/elasticsearch-geoip -s my/database/dir
|
||||
```
|
||||
|
||||
5. Serve the static database files from your directory. For example, you can use Docker to serve the files from nginx server:
|
||||
|
||||
```sh
|
||||
docker run -p 8080:80 -v my/database/dir:/usr/share/nginx/html:ro nginx
|
||||
```
|
||||
|
||||
6. Specify the service’s endpoint URL using the `xpack.geoip.download.endpoint=http://localhost:8080/overview.json` setting in `logstash.yml`.
|
||||
|
||||
Logstash gets automatic updates from this service.
|
||||
|
||||
|
||||
## Database Metrics [plugins-filters-geoip-metrics]
|
||||
|
||||
You can monitor database status through the [Node Stats API](https://www.elastic.co/docs/api/doc/logstash/operation/operation-nodestats).
|
||||
|
||||
The following request returns a JSON document containing database manager stats, including:
|
||||
|
||||
* database status and freshness
|
||||
|
||||
* `geoip_download_manager.database.*.status`
|
||||
|
||||
* `init` : initial CC database status
|
||||
* `up_to_date` : using up-to-date EULA database
|
||||
* `to_be_expired` : 25 days without calling service
|
||||
* `expired` : 30 days without calling service
|
||||
|
||||
* `fail_check_in_days` : number of days Logstash has failed to call the service since the last success
|
||||
|
||||
* info about download successes and failures
|
||||
|
||||
* `geoip_download_manager.download_stats.successes` number of successful checks and downloads
|
||||
* `geoip_download_manager.download_stats.failures` number of failed checks or downloads
|
||||
* `geoip_download_manager.download_stats.status`
|
||||
|
||||
* `updating` : a check or download is in progress
|
||||
* `succeeded` : the last download succeeded
|
||||
* `failed` : the last download failed
|
||||
|
||||
|
||||
```js
|
||||
curl -XGET 'localhost:9600/_node/stats/geoip_download_manager?pretty'
|
||||
```
|
||||
|
||||
Example response:
|
||||
|
||||
```js
|
||||
{
|
||||
"geoip_download_manager" : {
|
||||
"database" : {
|
||||
"ASN" : {
|
||||
"status" : "up_to_date",
|
||||
"fail_check_in_days" : 0,
|
||||
"last_updated_at": "2021-06-21T16:06:54+02:00"
|
||||
},
|
||||
"City" : {
|
||||
"status" : "up_to_date",
|
||||
"fail_check_in_days" : 0,
|
||||
"last_updated_at": "2021-06-21T16:06:54+02:00"
|
||||
}
|
||||
},
|
||||
"download_stats" : {
|
||||
"successes" : 15,
|
||||
"failures" : 1,
|
||||
"last_checked_at" : "2021-06-21T16:07:03+02:00",
|
||||
"status" : "succeeded"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## Field mapping [plugins-filters-geoip-field-mapping]
|
||||
|
||||
When this plugin is run with [`ecs_compatibility`](#plugins-filters-geoip-ecs_compatibility) disabled, the MaxMind DB’s fields are added directly to the [`target`](#plugins-filters-geoip-target). When ECS compatibility is enabled, the fields are structured to fit into an ECS shape.
|
||||
|
||||
| Database Field Name | ECS Field | Example |
|
||||
| --- | --- | --- |
|
||||
| `ip` | `[ip]` | `12.34.56.78` |
|
||||
| `anonymous` | `[ip_traits][anonymous]` | `false` |
|
||||
| `anonymous_vpn` | `[ip_traits][anonymous_vpn]` | `false` |
|
||||
| `hosting_provider` | `[ip_traits][hosting_provider]` | `true` |
|
||||
| `network` | `[ip_traits][network]` | `12.34.56.78/20` |
|
||||
| `public_proxy` | `[ip_traits][public_proxy]` | `true` |
|
||||
| `residential_proxy` | `[ip_traits][residential_proxy]` | `false` |
|
||||
| `tor_exit_node` | `[ip_traits][tor_exit_node]` | `true` |
|
||||
| `city_name` | `[geo][city_name]` | `Seattle` |
|
||||
| `country_name` | `[geo][country_name]` | `United States` |
|
||||
| `continent_code` | `[geo][continent_code]` | `NA` |
|
||||
| `continent_name` | `[geo][continent_name]` | `North America` |
|
||||
| `country_code2` | `[geo][country_iso_code]` | `US` |
|
||||
| `country_code3` | *N/A* | `US`<br> *maintained for legacy support, but populated with 2-character country code* |
|
||||
| `postal_code` | `[geo][postal_code]` | `98106` |
|
||||
| `region_name` | `[geo][region_name]` | `Washington` |
|
||||
| `region_code` | `[geo][region_code]` | `WA` |
|
||||
| `region_iso_code`* | `[geo][region_iso_code]` | `US-WA` |
|
||||
| `timezone` | `[geo][timezone]` | `America/Los_Angeles` |
|
||||
| `location`* | `[geo][location]` | `{"lat": 47.6062, "lon": -122.3321}"` |
|
||||
| `latitude` | `[geo][location][lat]` | `47.6062` |
|
||||
| `longitude` | `[geo][location][lon]` | `-122.3321` |
|
||||
| `domain` | `[domain]` | `example.com` |
|
||||
| `asn` | `[as][number]` | `98765` |
|
||||
| `as_org` | `[as][organization][name]` | `Elastic, NV` |
|
||||
| `isp` | `[mmdb][isp]` | `InterLink Supra LLC` |
|
||||
| `dma_code` | `[mmdb][dma_code]` | `819` |
|
||||
| `organization` | `[mmdb][organization]` | `Elastic, NV` |
|
||||
|
||||
::::{note}
|
||||
`*` indicates a composite field, which is only populated if the GeoIP lookup result contains all of its components.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
## Details [_details_2]
|
||||
|
||||
When using a City database, the enrichment is aborted if no latitude/longitude pair is available.
|
||||
|
||||
The `location` field combines the latitude and longitude into a structure called [GeoJSON](https://datatracker.ietf.org/doc/html/rfc7946). When you are using a default [`target`](#plugins-filters-geoip-target), the templates provided by the [elasticsearch output](/reference/plugins-outputs-elasticsearch.md) map the field to an [Elasticsearch Geo_point datatype](elasticsearch://reference/elasticsearch/mapping-reference/geo-point.md).
|
||||
|
||||
As this field is a `geo_point` *and* it is still valid GeoJSON, you get the awesomeness of Elasticsearch’s geospatial query, facet and filter functions and the flexibility of having GeoJSON for all other applications (like Kibana’s map visualization).
|
||||
|
||||
::::{note}
|
||||
This product includes GeoLite2 data created by MaxMind, available from [http://www.maxmind.com](http://www.maxmind.com). This database is licensed under [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
|
||||
|
||||
Versions 4.0.0 and later of the GeoIP filter use the MaxMind GeoLite2 database and support both IPv4 and IPv6 lookups. Versions prior to 4.0.0 use the legacy MaxMind GeoLite database and support IPv4 lookups only.
|
||||
|
||||
::::
|
||||
|
||||
|
||||
|
||||
## Geoip Filter Configuration Options [plugins-filters-geoip-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-geoip-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`cache_size`](#plugins-filters-geoip-cache_size) | [number](/reference/configuration-file-structure.md#number) | No |
|
||||
| [`database`](#plugins-filters-geoip-database) | a valid filesystem path | No |
|
||||
| [`default_database_type`](#plugins-filters-geoip-default_database_type) | `City` or `ASN` | No |
|
||||
| [`ecs_compatibility`](#plugins-filters-geoip-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`fields`](#plugins-filters-geoip-fields) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`source`](#plugins-filters-geoip-source) | [string](/reference/configuration-file-structure.md#string) | Yes |
|
||||
| [`tag_on_failure`](#plugins-filters-geoip-tag_on_failure) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`target`](#plugins-filters-geoip-target) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-geoip-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `cache_size` [plugins-filters-geoip-cache_size]
|
||||
|
||||
* Value type is [number](/reference/configuration-file-structure.md#number)
|
||||
* Default value is `1000`
|
||||
|
||||
GeoIP lookup is surprisingly expensive. This filter uses a cache to take advantage of the fact that IP addresses are often found adjacent to one another in log files and rarely have a random distribution. The higher you set this value, the more likely an item is to be in the cache and the faster this filter will run. However, if you set this too high, you can use more memory than desired. Since the GeoIP API was upgraded to v2, there is no eviction policy: once the cache is full, no more records can be added. Experiment with different values for this option to find the best performance for your dataset.
|
||||
|
||||
This MUST be set to a value > 0. There is really no reason not to want this behavior; the overhead is minimal and the speed gains are large.
|
||||
|
||||
It is important to note that this config value is global to the geoip_type. That is to say, all instances of the geoip filter of the same geoip_type share the same cache. The last declared cache size *wins*. The reason for this is that there would be no benefit to having multiple caches for different instances at different points in the pipeline; that would just increase the number of cache misses and waste memory.
|
||||
|
||||
|
||||
### `database` [plugins-filters-geoip-database]
|
||||
|
||||
* Value type is [path](/reference/configuration-file-structure.md#path)
|
||||
* If not specified, the database defaults to the `GeoLite2 City` database that ships with Logstash.
|
||||
|
||||
The path to MaxMind’s database file that Logstash should use. The default database is `GeoLite2-City`. This plugin supports several free databases (`GeoLite2-City`, `GeoLite2-Country`, `GeoLite2-ASN`) and a selection of commercially-licensed databases (`GeoIP2-City`, `GeoIP2-ISP`, `GeoIP2-Country`, `GeoIP2-Domain`, `GeoIP2-Enterprise`, `GeoIP2-Anonymous-IP`).
|
||||
|
||||
Database auto-update applies to the default distribution. When `database` points to a user’s own database path, auto-update is disabled. See [Database License](#plugins-filters-geoip-database_license) for more information.
|
||||
|
||||
|
||||
### `default_database_type` [plugins-filters-geoip-default_database_type]
|
||||
|
||||
This plugin now includes both the GeoLite2-City and GeoLite2-ASN databases. If `database` and `default_database_type` are unset, the GeoLite2-City database will be selected. To use the included GeoLite2-ASN database, set `default_database_type` to `ASN`.
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* The default value is `City`
|
||||
* The only acceptable values are `City` and `ASN`
|
||||
|
||||
|
||||
### `fields` [plugins-filters-geoip-fields]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* There is no default value for this setting.
|
||||
|
||||
An array of geoip fields to be included in the event.
|
||||
|
||||
Possible fields depend on the database type. By default, all geoip fields from the relevant database are included in the event.
|
||||
|
||||
For a complete list of available fields and how they map to an event’s structure, see [field mapping](#plugins-filters-geoip-field-mapping).
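For example, a sketch that keeps only a few of the fields from the mapping table above (the `clientip` field is hypothetical):

```json
filter {
  geoip {
    source => "clientip"
    fields => ["city_name", "country_code2", "location"]
  }
}
```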
|
||||
|
||||
|
||||
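As a sketch, assuming the default GeoLite2-City database in non-ECS mode, you might keep only a few of the available fields (the `clientip` source field and the chosen field names are illustrative; consult the field mapping for the authoritative list):

```ruby
filter {
  geoip {
    source => "clientip"                                   # illustrative source field
    fields => ["city_name", "country_name", "location"]    # illustrative subset of city-database fields
  }
}
```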
### `ecs_compatibility` [plugins-filters-geoip-ecs_compatibility]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

    * `disabled`: unstructured geo data added at root level
    * `v1`, `v8`: use fields that are compatible with Elastic Common Schema. Example: `[client][geo][country_name]`. See [field mapping](#plugins-filters-geoip-field-mapping) for more info.

* Default value depends on which version of Logstash is running:

    * When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
    * Otherwise, the default value is `disabled`.

Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md). The value of this setting affects the *default* value of [`target`](#plugins-filters-geoip-target).
### `source` [plugins-filters-geoip-source]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The field containing the IP address or hostname to map via geoip. If this field is an array, only the first value will be used.
### `tag_on_failure` [plugins-filters-geoip-tag_on_failure]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["_geoip_lookup_failure"]`

Tags the event on failure to look up geo information. This can be used in later analysis.
### `target` [plugins-filters-geoip-target]

* This setting is optional, with a condition (see below).
* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value depends on whether [`ecs_compatibility`](#plugins-filters-geoip-ecs_compatibility) is enabled:

    * ECS Compatibility disabled: `geoip`
    * ECS Compatibility enabled: if `source` is an `ip` sub-field, e.g. `[client][ip]`, `target` is automatically set to the parent field (`client` in this example); otherwise, `target` is a required setting

        * the `geo` field is nested in `[client][geo]`
        * ECS-compatible values are `client`, `destination`, `host`, `observer`, `server`, `source`

Specify the field into which Logstash should store the geoip data. This can be useful, for example, if you have `src_ip` and `dst_ip` fields and would like the GeoIP information of both IPs (see the sketch at the end of this section).

If you save the data to a target field other than `geoip` and want to use the `geo_point` related functions in Elasticsearch, you need to alter the template provided with the Elasticsearch output and configure the output to use the new template.

Even if you don’t use the `geo_point` mapping, the `[target][location]` field is still valid GeoJSON.
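A minimal sketch of that two-IP scenario, using one geoip filter per field (the target names `src_geo` and `dst_geo` are illustrative):

```ruby
filter {
  geoip {
    source => "src_ip"
    target => "src_geo"   # illustrative target field
  }
  geoip {
    source => "dst_ip"
    target => "dst_geo"   # illustrative target field
  }
}
```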
## Common options [plugins-filters-geoip-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-geoip-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-geoip-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-geoip-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-geoip-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-geoip-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-geoip-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-geoip-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
### `add_field` [plugins-filters-geoip-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  geoip {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  geoip {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with the corresponding value from the event. The second example would also add a hardcoded field.
### `add_tag` [plugins-filters-geoip-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  geoip {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  geoip {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
### `enable_metric` [plugins-filters-geoip-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
### `id` [plugins-filters-geoip-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 geoip filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  geoip {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::
### `periodic_flush` [plugins-filters-geoip-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular intervals. Optional.
### `remove_field` [plugins-filters-geoip-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:

```json
filter {
  geoip {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  geoip {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
### `remove_tag` [plugins-filters-geoip-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  geoip {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  geoip {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
@ -1,586 +0,0 @@

---
navigation_title: "grok"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
---
# Grok filter plugin [plugins-filters-grok]

* Plugin version: v4.4.3
* Released on: 2022-10-28
* [Changelog](https://github.com/logstash-plugins/logstash-filter-grok/blob/v4.4.3/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-grok-index.md).
## Getting help [_getting_help_143]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-grok). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_142]

Parse arbitrary text and structure it.

Grok is a great way to parse unstructured log data into something structured and queryable.

This tool is perfect for syslog logs, Apache and other web server logs, MySQL logs, and in general, any log format that is written for humans rather than computer consumption.

Logstash ships with about 120 patterns by default. You can find them here: [https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns](https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns). You can add your own trivially. (See the `patterns_dir` setting.)

If you need help building patterns to match your logs, you will find the [http://grokdebug.herokuapp.com](http://grokdebug.herokuapp.com) and [http://grokconstructor.appspot.com/](http://grokconstructor.appspot.com/) applications quite useful!
### Grok or Dissect? Or both? [_grok_or_dissect_or_both]

The [`dissect`](/reference/plugins-filters-dissect.md) filter plugin is another way to extract unstructured event data into fields using delimiters.

Dissect differs from Grok in that it does not use regular expressions and is faster. Dissect works well when data is reliably repeated. Grok is a better choice when the structure of your text varies from line to line.

You can use both Dissect and Grok for a hybrid use case when a section of the line is reliably repeated, but the entire line is not. The Dissect filter can deconstruct the section of the line that is repeated. The Grok filter can process the remaining field values with more regex predictability.
## Grok Basics [_grok_basics]

Grok works by combining text patterns into something that matches your logs.

The syntax for a grok pattern is `%{SYNTAX:SEMANTIC}`.

The `SYNTAX` is the name of the pattern that will match your text. For example, `3.44` will be matched by the `NUMBER` pattern and `55.3.244.1` will be matched by the `IP` pattern. The syntax is how you match.

The `SEMANTIC` is the identifier you give to the piece of text being matched. For example, `3.44` could be the duration of an event, so you could call it simply `duration`. Further, a string `55.3.244.1` might identify the `client` making a request.

For the above example, your grok filter would look something like this:

```ruby
%{NUMBER:duration} %{IP:client}
```

Optionally you can add a data type conversion to your grok pattern. By default all semantics are saved as strings. If you wish to convert a semantic’s data type, for example to change a string to an integer, suffix it with the target data type. For example, `%{NUMBER:num:int}` converts the `num` semantic from a string to an integer. Currently the only supported conversions are `int` and `float`.

With that idea of a syntax and semantic, we can pull out useful fields from a sample log like this fictional http request log:

```ruby
55.3.244.1 GET /index.html 15824 0.043
```

The pattern for this could be:

```ruby
%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
```

For a more realistic example, let’s read these logs from a file:

```ruby
input {
  file {
    path => "/var/log/http.log"
  }
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
```

After the grok filter, the event will have a few extra fields in it:

* `client: 55.3.244.1`
* `method: GET`
* `request: /index.html`
* `bytes: 15824`
* `duration: 0.043`
## Regular Expressions [_regular_expressions]

Grok sits on top of regular expressions, so any regular expressions are valid in grok as well. The regular expression library is Oniguruma, and you can see the full supported regexp syntax [on the Oniguruma site](https://github.com/kkos/oniguruma/blob/master/doc/RE).
## Custom Patterns [_custom_patterns]

Sometimes logstash doesn’t have a pattern you need. For this, you have a few options.

First, you can use the Oniguruma syntax for named capture, which will let you match a piece of text and save it as a field:

```ruby
(?<field_name>the pattern here)
```

For example, postfix logs have a `queue id` that is a 10 or 11-character hexadecimal value. I can capture that easily like this:

```ruby
(?<queue_id>[0-9A-F]{10,11})
```

Alternately, you can create a custom patterns file.

* Create a directory called `patterns` with a file in it called `extra` (the file name doesn’t matter, but name it meaningfully for yourself)
* In that file, write the pattern you need as the pattern name, a space, then the regexp for that pattern.

For example, doing the postfix queue id example as above:

```ruby
# contents of ./patterns/postfix:
POSTFIX_QUEUEID [0-9A-F]{10,11}
```

Then use the `patterns_dir` setting in this plugin to tell logstash where your custom patterns directory is. Here’s a full example with a sample log:

```ruby
Jan 1 06:25:43 mailserver14 postfix/cleanup[21403]: BEF25A72965: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>
```

```ruby
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:syslog_message}" }
  }
}
```

The above will match and result in the following fields:

* `timestamp: Jan 1 06:25:43`
* `logsource: mailserver14`
* `program: postfix/cleanup`
* `pid: 21403`
* `queue_id: BEF25A72965`
* `syslog_message: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>`

The `timestamp`, `logsource`, `program`, and `pid` fields come from the `SYSLOGBASE` pattern, which itself is defined by other patterns.

Another option is to define patterns *inline* in the filter using `pattern_definitions`. This is mostly for convenience and allows the user to define a pattern which can be used just in that filter. Patterns newly defined in `pattern_definitions` will not be available outside of that particular `grok` filter.
## Migrating to Elastic Common Schema (ECS) [plugins-filters-grok-ecs]

To ease migration to the [Elastic Common Schema (ECS)](ecs://reference/index.md), the filter plugin offers a new set of ECS-compliant patterns in addition to the existing patterns. The new ECS pattern definitions capture event field names that are compliant with the schema.

The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs_compatibility`](#plugins-filters-grok-ecs_compatibility) setting to switch modes.

New features and enhancements will be added to the ECS-compliant files. The legacy patterns may still receive bug fixes which are backwards compatible.
## Grok Filter Configuration Options [plugins-filters-grok-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-grok-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`break_on_match`](#plugins-filters-grok-break_on_match) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`ecs_compatibility`](#plugins-filters-grok-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
| [`keep_empty_captures`](#plugins-filters-grok-keep_empty_captures) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`match`](#plugins-filters-grok-match) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`named_captures_only`](#plugins-filters-grok-named_captures_only) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`overwrite`](#plugins-filters-grok-overwrite) | [array](/reference/configuration-file-structure.md#array) | No |
| [`pattern_definitions`](#plugins-filters-grok-pattern_definitions) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`patterns_dir`](#plugins-filters-grok-patterns_dir) | [array](/reference/configuration-file-structure.md#array) | No |
| [`patterns_files_glob`](#plugins-filters-grok-patterns_files_glob) | [string](/reference/configuration-file-structure.md#string) | No |
| [`tag_on_failure`](#plugins-filters-grok-tag_on_failure) | [array](/reference/configuration-file-structure.md#array) | No |
| [`tag_on_timeout`](#plugins-filters-grok-tag_on_timeout) | [string](/reference/configuration-file-structure.md#string) | No |
| [`target`](#plugins-filters-grok-target) | [string](/reference/configuration-file-structure.md#string) | No |
| [`timeout_millis`](#plugins-filters-grok-timeout_millis) | [number](/reference/configuration-file-structure.md#number) | No |
| [`timeout_scope`](#plugins-filters-grok-timeout_scope) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-grok-common-options) for a list of options supported by all filter plugins.
### `break_on_match` [plugins-filters-grok-break_on_match]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Break on first match. The first successful match by grok will result in the filter being finished. If you want grok to try all patterns (maybe you are parsing different things), then set this to false.
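For example, a minimal sketch that tries every pattern against each event instead of stopping at the first match (the pattern pair is illustrative):

```ruby
filter {
  grok {
    match => {
      "message" => [
        "Duration: %{NUMBER:duration}",
        "Speed: %{NUMBER:speed}"
      ]
    }
    break_on_match => false   # keep trying the remaining patterns after a match
  }
}
```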
### `ecs_compatibility` [plugins-filters-grok-ecs_compatibility]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

    * `disabled`: the plugin will load legacy (built-in) pattern definitions
    * `v1`, `v8`: all patterns provided by the plugin will use ECS compliant captures

* Default value depends on which version of Logstash is running:

    * When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
    * Otherwise, the default value is `disabled`.

Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md). The value of this setting affects extracted event field names when a composite pattern (such as `HTTPD_COMMONLOG`) is matched.
### `keep_empty_captures` [plugins-filters-grok-keep_empty_captures]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

If `true`, keep empty captures as event fields.
### `match` [plugins-filters-grok-match]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

A hash that defines the mapping of *where to look*, and with which patterns.

For example, the following will match an existing value in the `message` field for the given pattern, and if a match is found will add the field `duration` to the event with the captured value:

```ruby
filter {
  grok {
    match => {
      "message" => "Duration: %{NUMBER:duration}"
    }
  }
}
```

If you need to match multiple patterns against a single field, the value can be an array of patterns:

```ruby
filter {
  grok {
    match => {
      "message" => [
        "Duration: %{NUMBER:duration}",
        "Speed: %{NUMBER:speed}"
      ]
    }
  }
}
```

To perform matches on multiple fields, just use multiple entries in the `match` hash:

```ruby
filter {
  grok {
    match => {
      "speed" => "Speed: %{NUMBER:speed}"
      "duration" => "Duration: %{NUMBER:duration}"
    }
  }
}
```

However, if one pattern depends on a field created by a previous pattern, separate these into two separate grok filters:

```ruby
filter {
  grok {
    match => {
      "message" => "Hi, the rest of the message is: %{GREEDYDATA:rest}"
    }
  }
  grok {
    match => {
      "rest" => "a number %{NUMBER:number}, and a word %{WORD:word}"
    }
  }
}
```
### `named_captures_only` [plugins-filters-grok-named_captures_only]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

If `true`, only store named captures from grok.
### `overwrite` [plugins-filters-grok-overwrite]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

The fields to overwrite.

This allows you to overwrite a value in a field that already exists.

For example, if you have a syslog line in the `message` field, you can overwrite the `message` field with part of the match like so:

```ruby
filter {
  grok {
    match => { "message" => "%{SYSLOGBASE} %{DATA:message}" }
    overwrite => [ "message" ]
  }
}
```

In this case, a line like `May 29 16:37:11 sadness logger: hello world` will be parsed and `hello world` will overwrite the original message.

If you are using a field reference in `overwrite`, you must use the field reference in the pattern. Example:

```ruby
filter {
  grok {
    match => { "somefield" => "%{NUMBER} %{GREEDYDATA:[nested][field][test]}" }
    overwrite => [ "[nested][field][test]" ]
  }
}
```
### `pattern_definitions` [plugins-filters-grok-pattern_definitions]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

A hash of pattern-name and pattern tuples defining custom patterns to be used by the current filter. Patterns matching existing names will override the pre-existing definition. Think of this as inline patterns available just for this definition of grok.
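A minimal sketch of an inline definition, reusing the postfix queue-id pattern from [Custom Patterns](#_custom_patterns) above:

```ruby
filter {
  grok {
    pattern_definitions => {
      "POSTFIX_QUEUEID" => "[0-9A-F]{10,11}"   # visible only inside this grok filter
    }
    match => { "message" => "%{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:syslog_message}" }
  }
}
```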
### `patterns_dir` [plugins-filters-grok-patterns_dir]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

Logstash ships by default with a bunch of patterns, so you don’t necessarily need to define this yourself unless you are adding additional patterns. You can point to multiple pattern directories using this setting. Note that Grok will read all files in the directory matching the `patterns_files_glob` and assume every one of them is a pattern file (including any tilde backup files).

```ruby
patterns_dir => ["/opt/logstash/patterns", "/opt/logstash/extra_patterns"]
```

Pattern files are plain text with the format:

```ruby
NAME PATTERN
```

For example:

```ruby
NUMBER \d+
```

The patterns are loaded when the pipeline is created.
### `patterns_files_glob` [plugins-filters-grok-patterns_files_glob]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"*"`

Glob pattern, used to select the pattern files in the directories specified by `patterns_dir`.
### `tag_on_failure` [plugins-filters-grok-tag_on_failure]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["_grokparsefailure"]`

Append values to the `tags` field when there has been no successful match.

### `tag_on_timeout` [plugins-filters-grok-tag_on_timeout]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"_groktimeout"`

Tag to apply if a grok regexp times out.
### `target` [plugins-filters-grok-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Define the target namespace for placing matches.
### `timeout_millis` [plugins-filters-grok-timeout_millis]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `30000`

Attempt to terminate regexps after this amount of time. This applies per pattern if multiple patterns are applied. This will never time out early, but may take a little longer to time out. The actual timeout is approximate, based on a 250ms quantization. Set to 0 to disable timeouts.

### `timeout_scope` [plugins-filters-grok-timeout_scope]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"pattern"`
* Supported values are `"pattern"` and `"event"`

When multiple patterns are provided to [`match`](#plugins-filters-grok-match), the timeout has historically applied to *each* pattern, incurring overhead for each and every pattern that is attempted; when the grok filter is configured with `timeout_scope => event`, the plugin instead enforces a single timeout across all attempted matches on the event, so it can achieve a similar safeguard against runaway matchers with significantly less overhead.

It’s usually better to scope the timeout for the whole event.
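A minimal sketch combining the two timeout settings so that a single 10-second budget covers all patterns attempted against an event (the patterns and value are illustrative):

```ruby
filter {
  grok {
    match => {
      "message" => [
        "Duration: %{NUMBER:duration}",
        "Speed: %{NUMBER:speed}"
      ]
    }
    timeout_millis => 10000     # overall budget, in milliseconds
    timeout_scope  => "event"   # one timeout across all patterns for the event
  }
}
```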
## Common options [plugins-filters-grok-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-grok-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-grok-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-grok-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-grok-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-grok-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-grok-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-grok-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
### `add_field` [plugins-filters-grok-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  grok {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  grok {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with the corresponding value from the event. The second example would also add a hardcoded field.
### `add_tag` [plugins-filters-grok-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  grok {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  grok {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
### `enable_metric` [plugins-filters-grok-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
### `id` [plugins-filters-grok-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 grok filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  grok {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::
### `periodic_flush` [plugins-filters-grok-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular intervals. Optional.
### `remove_field` [plugins-filters-grok-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:

```json
filter {
  grok {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  grok {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
### `remove_tag` [plugins-filters-grok-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  grok {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  grok {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
@ -1,620 +0,0 @@

---
navigation_title: "http"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-http.html
---
# HTTP filter plugin [plugins-filters-http]

* Plugin version: v2.0.0
* Released on: 2024-12-18
* [Changelog](https://github.com/logstash-plugins/logstash-filter-http/blob/v2.0.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-http-index.md).
## Getting help [_getting_help_144]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-http). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_143]

The HTTP filter provides integration with external web services/REST APIs.

## Compatibility with the Elastic Common Schema (ECS) [plugins-filters-http-ecs]

The plugin includes sensible defaults that change based on [ECS compatibility mode](#plugins-filters-http-ecs_compatibility). When targeting an ECS version, headers are set as `@metadata` and `target_body` is a required option. See [`target_body`](#plugins-filters-http-target_body) and [`target_headers`](#plugins-filters-http-target_headers).
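As a quick orientation before the option reference, here is a minimal sketch that enriches each event with the body of a GET response (the URL and target field are illustrative; each setting used is documented below):

```ruby
filter {
  http {
    url         => "https://example.com/api/lookup?id=%{[user][id]}"   # illustrative URL with a field reference
    verb        => "GET"
    target_body => "[api_response]"                                    # illustrative target field
  }
}
```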
## HTTP Filter Configuration Options [plugins-filters-http-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-http-common-options) described later.

::::{note}
As of version `2.0.0` of this plugin, a number of previously deprecated settings related to SSL have been removed. Please check out [HTTP Filter Obsolete Configuration Options](#plugins-filters-http-obsolete-options) for details.
::::

| Setting | Input type | Required |
| --- | --- | --- |
| [`body`](#plugins-filters-http-body) | String, Array or Hash | No |
| [`body_format`](#plugins-filters-http-body_format) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ecs_compatibility`](#plugins-filters-http-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
| [`headers`](#plugins-filters-http-headers) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`query`](#plugins-filters-http-query) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`target_body`](#plugins-filters-http-target_body) | [string](/reference/configuration-file-structure.md#string) | No |
| [`target_headers`](#plugins-filters-http-target_headers) | [string](/reference/configuration-file-structure.md#string) | No |
| [`url`](#plugins-filters-http-url) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`verb`](#plugins-filters-http-verb) | [string](/reference/configuration-file-structure.md#string) | No |

There are also multiple configuration options related to the HTTP connectivity:

| Setting | Input type | Required |
| --- | --- | --- |
| [`automatic_retries`](#plugins-filters-http-automatic_retries) | [number](/reference/configuration-file-structure.md#number) | No |
| [`connect_timeout`](#plugins-filters-http-connect_timeout) | [number](/reference/configuration-file-structure.md#number) | No |
| [`cookies`](#plugins-filters-http-cookies) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`follow_redirects`](#plugins-filters-http-follow_redirects) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`keepalive`](#plugins-filters-http-keepalive) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`password`](#plugins-filters-http-password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`pool_max`](#plugins-filters-http-pool_max) | [number](/reference/configuration-file-structure.md#number) | No |
| [`pool_max_per_route`](#plugins-filters-http-pool_max_per_route) | [number](/reference/configuration-file-structure.md#number) | No |
| [`proxy`](#plugins-filters-http-proxy) | [string](/reference/configuration-file-structure.md#string) | No |
| [`request_timeout`](#plugins-filters-http-request_timeout) | [number](/reference/configuration-file-structure.md#number) | No |
| [`retry_non_idempotent`](#plugins-filters-http-retry_non_idempotent) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`socket_timeout`](#plugins-filters-http-socket_timeout) | [number](/reference/configuration-file-structure.md#number) | No |
| [`ssl_certificate`](#plugins-filters-http-ssl_certificate) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_certificate_authorities`](#plugins-filters-http-ssl_certificate_authorities) | list of [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_cipher_suites`](#plugins-filters-http-ssl_cipher_suites) | list of [string](/reference/configuration-file-structure.md#string) | No |
| [`ssl_enabled`](#plugins-filters-http-ssl_enabled) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`ssl_key`](#plugins-filters-http-ssl_key) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_keystore_password`](#plugins-filters-http-ssl_keystore_password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`ssl_keystore_path`](#plugins-filters-http-ssl_keystore_path) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_keystore_type`](#plugins-filters-http-ssl_keystore_type) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ssl_supported_protocols`](#plugins-filters-http-ssl_supported_protocols) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ssl_truststore_password`](#plugins-filters-http-ssl_truststore_password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`ssl_truststore_path`](#plugins-filters-http-ssl_truststore_path) | [path](/reference/configuration-file-structure.md#path) | No |
| [`ssl_truststore_type`](#plugins-filters-http-ssl_truststore_type) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ssl_verification_mode`](#plugins-filters-http-ssl_verification_mode) | [string](/reference/configuration-file-structure.md#string), one of `["full", "none"]` | No |
| [`user`](#plugins-filters-http-user) | [string](/reference/configuration-file-structure.md#string) | No |
| [`validate_after_inactivity`](#plugins-filters-http-validate_after_inactivity) | [number](/reference/configuration-file-structure.md#number) | No |

Also see [Common options](#plugins-filters-http-common-options) for a list of options supported by all filter plugins.
### `body` [plugins-filters-http-body]

* Value type can be a [string](/reference/configuration-file-structure.md#string), [number](/reference/configuration-file-structure.md#number), [array](/reference/configuration-file-structure.md#array) or [hash](/reference/configuration-file-structure.md#hash)
* There is no default value

The body of the HTTP request to be sent.

An example of sending `body` as JSON:

```
http {
  body => {
    "key1" => "constant_value"
    "key2" => "%{[field][reference]}"
  }
  body_format => "json"
}
```
### `body_format` [plugins-filters-http-body_format]

* Value can be either `"json"` or `"text"`
* Default value is `"text"`

If set to `"json"` and the [`body`](#plugins-filters-http-body) is of type [array](/reference/configuration-file-structure.md#array) or [hash](/reference/configuration-file-structure.md#hash), the body will be serialized as JSON. Otherwise it is sent as-is.
### `ecs_compatibility` [plugins-filters-http-ecs_compatibility]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

    * `disabled`: does not use ECS-compatible field names (for example, response headers target the `headers` field by default)
    * `v1`, `v8`: avoids field names that might conflict with Elastic Common Schema (for example, headers are added as metadata)

* Default value depends on which version of Logstash is running:

    * When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
    * Otherwise, the default value is `disabled`.

Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md). The value of this setting affects the *default* value of [`target_body`](#plugins-filters-http-target_body) and [`target_headers`](#plugins-filters-http-target_headers).
### `headers` [plugins-filters-http-headers]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value

The HTTP headers to be sent in the request. Both the names of the headers and their values can reference values from event fields.

### `query` [plugins-filters-http-query]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value

Define the query string parameters (key-value pairs) to be sent in the HTTP request.
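A minimal sketch combining the two, with values drawn from event fields (the header name, field references, and query parameters are illustrative):

```ruby
filter {
  http {
    url     => "https://example.com/api/lookup"       # illustrative URL
    headers => {
      "Authorization" => "Bearer %{[auth][token]}"    # illustrative header from an event field
    }
    query   => {
      "user"   => "%{[user][name]}"                   # illustrative query parameters
      "format" => "json"
    }
  }
}
```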
### `target_body` [plugins-filters-http-target_body]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value depends on whether [`ecs_compatibility`](#plugins-filters-http-ecs_compatibility) is enabled:

    * ECS Compatibility disabled: `"[body]"`
    * ECS Compatibility enabled: no default value, needs to be specified explicitly

Define the target field for placing the body of the HTTP response.
### `target_headers` [plugins-filters-http-target_headers]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value depends on whether [`ecs_compatibility`](#plugins-filters-http-ecs_compatibility) is enabled:

    * ECS Compatibility disabled: `"[headers]"`
    * ECS Compatibility enabled: `"[@metadata][filter][http][response][headers]"`

Define the target field for placing the headers of the HTTP response.
### `url` [plugins-filters-http-url]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value

The URL to send the request to. The value can be fetched from event fields.

### `verb` [plugins-filters-http-verb]

* Value can be either `"GET"`, `"HEAD"`, `"PATCH"`, `"DELETE"`, `"POST"` or `"PUT"`
* Default value is `"GET"`

The verb to be used for the HTTP request.
## HTTP Filter Connectivity Options [plugins-filters-http-connectivity-options]

### `automatic_retries` [plugins-filters-http-automatic_retries]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `1`

How many times the client should retry a failing URL. We highly recommend NOT setting this value to zero if keepalive is enabled, because some servers incorrectly end keepalives early, requiring a retry. Note: if `retry_non_idempotent` is not set, only GET, HEAD, PUT, DELETE, OPTIONS, and TRACE requests will be retried.
### `connect_timeout` [plugins-filters-http-connect_timeout]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `10`

Timeout (in seconds) to wait for a connection to be established. Default is `10s`.

### `cookies` [plugins-filters-http-cookies]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Enable cookie support. With this enabled, the client persists cookies across requests as a normal web browser would. Enabled by default.

### `follow_redirects` [plugins-filters-http-follow_redirects]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Should redirects be followed? Defaults to `true`.

### `keepalive` [plugins-filters-http-keepalive]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Turn this on to enable HTTP keepalive support. We highly recommend setting `automatic_retries` to at least one with this to fix interactions with broken keepalive implementations.
### `password` [plugins-filters-http-password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

Password to be used in conjunction with the username for HTTP authentication.

### `pool_max` [plugins-filters-http-pool_max]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `50`

Max number of concurrent connections. Defaults to `50`.

### `pool_max_per_route` [plugins-filters-http-pool_max_per_route]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `25`

Max number of concurrent connections to a single host. Defaults to `25`.
### `proxy` [plugins-filters-http-proxy]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Set this if you’d like to use an HTTP proxy. This supports multiple configuration syntaxes:

1. Proxy host in form: `http://proxy.org:1234`
2. Proxy host in form: `{host => "proxy.org", port => 80, scheme => 'http', user => 'username@host', password => 'password'}`
3. Proxy host in form: `{url => 'http://proxy.org:1234', user => 'username@host', password => 'password'}`
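For example, a sketch using the first (plain URL) form, reusing the illustrative `proxy.org` host from the list above:

```ruby
filter {
  http {
    url   => "https://example.com/api/lookup"   # illustrative URL
    proxy => "http://proxy.org:1234"            # form 1: plain proxy URL
  }
}
```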
### `request_timeout` [plugins-filters-http-request_timeout]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `60`

Timeout (in seconds) for the entire request.

### `retry_non_idempotent` [plugins-filters-http-retry_non_idempotent]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

If `automatic_retries` is enabled, this will cause non-idempotent HTTP verbs (such as POST) to be retried.

### `socket_timeout` [plugins-filters-http-socket_timeout]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `10`

Timeout (in seconds) to wait for data on the socket. Default is `10s`.
### `ssl_certificate` [plugins-filters-http-ssl_certificate]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.

SSL certificate to use to authenticate the client. This certificate should be an OpenSSL-style X.509 certificate file.

::::{note}
This setting can be used only if [`ssl_key`](#plugins-filters-http-ssl_key) is set.
::::
### `ssl_certificate_authorities` [plugins-filters-http-ssl_certificate_authorities]

* Value type is a list of [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting

The .cer or .pem CA files to validate the server’s certificate.

### `ssl_cipher_suites` [plugins-filters-http-ssl_cipher_suites]

* Value type is a list of [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting

The list of cipher suites to use, listed by priority. Supported cipher suites vary depending on the Java and protocol versions.
### `ssl_enabled` [plugins-filters-http-ssl_enabled]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Enable SSL/TLS secured communication. It must be `true` for other `ssl_` options to take effect.

### `ssl_key` [plugins-filters-http-ssl_key]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.

OpenSSL-style RSA private key that corresponds to the [`ssl_certificate`](#plugins-filters-http-ssl_certificate).

::::{note}
This setting can be used only if [`ssl_certificate`](#plugins-filters-http-ssl_certificate) is set.
::::
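A minimal sketch of client-certificate (mutual TLS) authentication using the `ssl_certificate`/`ssl_key` pair together with a custom CA (the URL and all file paths are illustrative):

```ruby
filter {
  http {
    url             => "https://example.com/api/lookup"          # illustrative URL
    ssl_certificate => "/etc/logstash/tls/client.crt"            # illustrative path
    ssl_key         => "/etc/logstash/tls/client.key"            # illustrative path
    ssl_certificate_authorities => ["/etc/logstash/tls/ca.pem"]  # illustrative path
  }
}
```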
### `ssl_keystore_password` [plugins-filters-http-ssl_keystore_password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

Set the keystore password.

### `ssl_keystore_path` [plugins-filters-http-ssl_keystore_path]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.

The keystore used to present a certificate to the server. It can be either `.jks` or `.p12`.

### `ssl_keystore_type` [plugins-filters-http-ssl_keystore_type]

* Value can be any of: `jks`, `pkcs12`
* If not provided, the value will be inferred from the keystore filename.

The format of the keystore file. It must be either `jks` or `pkcs12`.
### `ssl_supported_protocols` [plugins-filters-http-ssl_supported_protocols]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Allowed values are: `'TLSv1.1'`, `'TLSv1.2'`, `'TLSv1.3'`
* Default depends on the JDK being used. With up-to-date Logstash, the default is `['TLSv1.2', 'TLSv1.3']`. `'TLSv1.1'` is not considered secure and is only provided for legacy applications.

List of allowed SSL/TLS versions to use when establishing a connection to the HTTP endpoint.

For Java 8, `'TLSv1.3'` is supported only since **8u262** (AdoptOpenJDK), but requires that you set the `LS_JAVA_OPTS="-Djdk.tls.client.protocols=TLSv1.3"` system property in Logstash.

::::{note}
If you configure the plugin to use `'TLSv1.1'` on any recent JVM, such as the one packaged with Logstash, the protocol is disabled by default and needs to be enabled manually by changing `jdk.tls.disabledAlgorithms` in the **$JDK_HOME/conf/security/java.security** configuration file. That is, `TLSv1.1` needs to be removed from the list.
::::
### `ssl_truststore_password` [plugins-filters-http-ssl_truststore_password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

Set the truststore password.

### `ssl_truststore_path` [plugins-filters-http-ssl_truststore_path]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.

The truststore to validate the server’s certificate. It can be either `.jks` or `.p12`.

### `ssl_truststore_type` [plugins-filters-http-ssl_truststore_type]

* Value can be any of: `jks`, `pkcs12`
* If not provided, the value will be inferred from the truststore filename.

The format of the truststore file. It must be either `jks` or `pkcs12`.
### `ssl_verification_mode` [plugins-filters-http-ssl_verification_mode]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Supported values are: `full`, `none`
|
||||
* Default value is `full`
|
||||
|
||||
Controls the verification of server certificates. The `full` option verifies that the provided certificate is signed by a trusted authority (CA) and also that the server’s hostname (or IP address) matches the names identified within the certificate.
|
||||
|
||||
The `none` setting performs no verification of the server’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors. Using `none` in production environments is strongly discouraged.
|
||||
|
||||
|
||||
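For example, a minimal sketch of the truststore settings together with the default verification mode (the URL, path, and password are placeholders):

```ruby
filter {
  http {
    url => "https://internal.example.org:8443/enrich"      # hypothetical endpoint
    ssl_truststore_path => "/etc/logstash/truststore.jks"  # hypothetical path
    ssl_truststore_password => "${TRUSTSTORE_PW}"          # assumes an environment variable
    ssl_verification_mode => "full"                        # the default, shown here for clarity
  }
}
```
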
### `user` [plugins-filters-http-user]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Username to use with HTTP authentication for ALL requests. Note that you can also set this per-URL. If you set this, you must also set the `password` option.

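For example, a minimal sketch using global credentials (the values are placeholders; `password` is the companion option that this setting requires):

```ruby
filter {
  http {
    url => "https://api.example.com/enrich"  # hypothetical endpoint
    user => "logstash_user"
    password => "${HTTP_FILTER_PW}"          # assumes the password comes from an environment variable
  }
}
```
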
### `validate_after_inactivity` [plugins-filters-http-validate_after_inactivity]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `200`

How long to wait before checking for a stale connection to determine if a keepalive request is needed. Consider setting this value lower than the default, possibly to 0, if you get connection errors regularly.

This client is based on Apache Commons. Here’s how the [Apache Commons documentation](https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/conn/PoolingHttpClientConnectionManager.html#setValidateAfterInactivity(int)) describes this option: "Defines period of inactivity in milliseconds after which persistent connections must be re-validated prior to being leased to the consumer. Non-positive value passed to this method disables connection validation. This check helps detect connections that have become stale (half-closed) while kept inactive in the pool."

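A sketch of the tuning suggested above; per the Apache Commons description, a non-positive value disables the stale-connection check entirely (the URL is a placeholder):

```ruby
filter {
  http {
    url => "https://api.example.com/enrich"  # hypothetical endpoint
    validate_after_inactivity => 0           # non-positive disables stale-connection validation
  }
}
```
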
## HTTP Filter Obsolete Configuration Options [plugins-filters-http-obsolete-options]

::::{warning}
As of version `2.0.0` of this plugin, some configuration options have been replaced. The plugin will fail to start if its configuration contains any of these obsolete options.
::::

| Setting | Replaced by |
| --- | --- |
| cacert | [`ssl_certificate_authorities`](#plugins-filters-http-ssl_certificate_authorities) |
| client_cert | [`ssl_certificate`](#plugins-filters-http-ssl_certificate) |
| client_key | [`ssl_key`](#plugins-filters-http-ssl_key) |
| keystore | [`ssl_keystore_path`](#plugins-filters-http-ssl_keystore_path) |
| keystore_password | [`ssl_keystore_password`](#plugins-filters-http-ssl_keystore_password) |
| keystore_type | [`ssl_keystore_type`](#plugins-filters-http-ssl_keystore_type) |
| truststore | [`ssl_truststore_path`](#plugins-filters-http-ssl_truststore_path) |
| truststore_password | [`ssl_truststore_password`](#plugins-filters-http-ssl_truststore_password) |
| truststore_type | [`ssl_truststore_type`](#plugins-filters-http-ssl_truststore_type) |

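For example, a before/after sketch of migrating one obsolete option (the path is a placeholder, and `ssl_certificate_authorities` is assumed to take a list of paths):

```ruby
# Before (fails to start as of plugin 2.0.0):
# filter { http { cacert => "/etc/logstash/ca.pem" } }

# After:
filter {
  http {
    ssl_certificate_authorities => ["/etc/logstash/ca.pem"]
  }
}
```
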
## Common options [plugins-filters-http-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-http-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-http-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-http-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-http-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-http-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-http-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-http-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-http-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  http {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  http {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with the corresponding value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-http-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  http {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  http {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-http-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-http-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 http filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  http {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-http-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at a regular interval. Optional.

### `remove_field` [plugins-filters-http-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  http {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  http {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-http-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  http {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  http {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@ -1,230 +0,0 @@
---
navigation_title: "i18n"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-i18n.html
---

# I18n filter plugin [plugins-filters-i18n]

* Plugin version: v3.0.3
* Released on: 2017-11-07
* [Changelog](https://github.com/logstash-plugins/logstash-filter-i18n/blob/v3.0.3/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-i18n-index.md).

## Installation [_installation_61]

For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-i18n`. See [Working with plugins](/reference/working-with-plugins.md) for more details.

## Getting help [_getting_help_145]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-i18n). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_144]

The i18n filter allows you to remove special characters from a field.

## I18n Filter Configuration Options [plugins-filters-i18n-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-i18n-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`transliterate`](#plugins-filters-i18n-transliterate) | [array](/reference/configuration-file-structure.md#array) | No |

Also see [Common options](#plugins-filters-i18n-common-options) for a list of options supported by all filter plugins.

### `transliterate` [plugins-filters-i18n-transliterate]

* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

Replaces non-ASCII characters with an ASCII approximation, or if none exists, a replacement character which defaults to `?`.

Example:

```ruby
filter {
  i18n {
    transliterate => ["field1", "field2"]
  }
}
```

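As a hedged illustration of the effect (the exact replacement table comes from the underlying transliteration library, so treat these values as representative):

```ruby
# Hypothetical event before the filter runs:
#   { "field1" => "café älskar" }
filter {
  i18n {
    transliterate => ["field1"]
  }
}
# After: { "field1" => "cafe alskar" }; a character with no ASCII
# approximation would become the replacement character `?`.
```
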
## Common options [plugins-filters-i18n-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-i18n-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-i18n-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-i18n-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-i18n-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-i18n-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-i18n-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-i18n-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-i18n-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  i18n {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  i18n {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with the corresponding value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-i18n-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  i18n {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  i18n {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-i18n-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-i18n-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 i18n filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  i18n {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-i18n-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at a regular interval. Optional.

### `remove_field` [plugins-filters-i18n-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  i18n {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  i18n {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-i18n-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  i18n {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  i18n {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@ -1,246 +0,0 @@
---
navigation_title: "java_uuid"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-java_uuid.html
---

# Java_uuid filter plugin [plugins-filters-java_uuid]

**{{ls}} Core Plugin.** The java_uuid filter plugin cannot be installed or uninstalled independently of {{ls}}.

## Getting help [_getting_help_146]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/elastic/logstash).

## Description [_description_145]

The java_uuid filter allows you to generate a [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier) and add it as a field to each processed event.

This is useful if you need to generate a string that’s unique for every event even if the same input is processed multiple times. If you want to generate strings that are identical each time an event with the same content is processed (i.e., a hash), you should use the [fingerprint filter](/reference/plugins-filters-fingerprint.md) instead.

The generated UUIDs follow the version 4 definition in [RFC 4122](https://tools.ietf.org/html/rfc4122) and are represented in standard hexadecimal string format, e.g. "e08806fe-02af-406c-bbde-8a5ae4475e57".

## Java_uuid Filter Configuration Options [plugins-filters-java_uuid-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-java_uuid-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`overwrite`](#plugins-filters-java_uuid-overwrite) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`target`](#plugins-filters-java_uuid-target) | [string](/reference/configuration-file-structure.md#string) | Yes |

Also see [Common options](#plugins-filters-java_uuid-common-options) for a list of options supported by all filter plugins.

### `overwrite` [plugins-filters-java_uuid-overwrite]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Determines if an existing value in the field specified by the `target` option should be overwritten by the filter.

Example:

```ruby
filter {
  java_uuid {
    target => "uuid"
    overwrite => true
  }
}
```

### `target` [plugins-filters-java_uuid-target]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Specifies the name of the field in which the generated UUID should be stored.

Example:

```ruby
filter {
  java_uuid {
    target => "uuid"
  }
}
```

## Common options [plugins-filters-java_uuid-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-java_uuid-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-java_uuid-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-java_uuid-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-java_uuid-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-java_uuid-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-java_uuid-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-java_uuid-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-java_uuid-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  java_uuid {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  java_uuid {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with the corresponding value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-java_uuid-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  java_uuid {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  java_uuid {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-java_uuid-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-java_uuid-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 java_uuid filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  java_uuid {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-java_uuid-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at a regular interval. Optional.

### `remove_field` [plugins-filters-java_uuid-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  java_uuid {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  java_uuid {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-java_uuid-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  java_uuid {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  java_uuid {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@ -1,672 +0,0 @@
---
navigation_title: "jdbc_static"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html
---

# Jdbc_static filter plugin [plugins-filters-jdbc_static]

* A component of the [jdbc integration plugin](/reference/plugins-integrations-jdbc.md)
* Integration version: v5.5.2
* Released on: 2024-12-23
* [Changelog](https://github.com/logstash-plugins/logstash-integration-jdbc/blob/v5.5.2/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-jdbc_static-index.md).

## Getting help [_getting_help_147]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-integration-jdbc). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_146]

This filter enriches events with data pre-loaded from a remote database.

This filter is best suited for enriching events with reference data that is static or does not change very often, such as environments, users, and products.

This filter works by fetching data from a remote database, caching it in a local, in-memory [Apache Derby](https://db.apache.org/derby/manuals/#docs_10.14) database, and using lookups to enrich events with data cached in the local database. You can set up the filter to load the remote data once (for static data), or you can schedule remote loading to run periodically (for data that needs to be refreshed).

To define the filter, you specify three main sections: local_db_objects, loaders, and lookups.

**local_db_objects**
: Define the columns, types, and indexes used to build the local database structure. The column names and types should match the external database. Define as many of these objects as needed to build the local database structure.

**loaders**
: Query the external database to fetch the dataset that will be cached locally. Define as many loaders as needed to fetch the remote data. Each loader should fill a table defined by `local_db_objects`. Make sure the column names and datatypes in the loader SQL statement match the columns defined under `local_db_objects`. Each loader has an independent remote database connection.

**lookups**
: Perform lookup queries on the local database to enrich the events. Define as many lookups as needed to enrich the event from all lookup tables in one pass. Ideally the SQL statement should return only one row. Any rows are converted to Hash objects and are stored in a target field that is an Array.

The following example config fetches data from a remote database, caches it in a local database, and uses lookups to enrich events with data cached in the local database.

```json
filter {
  jdbc_static {
    loaders => [ <1>
      {
        id => "remote-servers"
        query => "select ip, descr from ref.local_ips order by ip"
        local_table => "servers"
      },
      {
        id => "remote-users"
        query => "select firstname, lastname, userid from ref.local_users order by userid"
        local_table => "users"
      }
    ]
    local_db_objects => [ <2>
      {
        name => "servers"
        index_columns => ["ip"]
        columns => [
          ["ip", "varchar(15)"],
          ["descr", "varchar(255)"]
        ]
      },
      {
        name => "users"
        index_columns => ["userid"]
        columns => [
          ["firstname", "varchar(255)"],
          ["lastname", "varchar(255)"],
          ["userid", "int"]
        ]
      }
    ]
    local_lookups => [ <3>
      {
        id => "local-servers"
        query => "SELECT descr as description FROM servers WHERE ip = :ip"
        parameters => {ip => "[from_ip]"}
        target => "server"
      },
      {
        id => "local-users"
        query => "SELECT firstname, lastname FROM users WHERE userid = ? AND country = ?"
        prepared_parameters => ["[loggedin_userid]", "[user_nation]"] <4>
        target => "user" <5>
        default_hash => { <6>
          firstname => nil
          lastname => nil
        }
      }
    ]
    # using add_field here to add & rename values to the event root
    add_field => { server_name => "%{[server][0][description]}" } <7>
    add_field => { user_firstname => "%{[user][0][firstname]}" }
    add_field => { user_lastname => "%{[user][0][lastname]}" }
    remove_field => ["server", "user"]
    staging_directory => "/tmp/logstash/jdbc_static/import_data"
    loader_schedule => "* */2 * * *" <8>
    jdbc_user => "logstash"
    jdbc_password => "example"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_driver_library => "/tmp/logstash/vendor/postgresql-42.1.4.jar"
    jdbc_connection_string => "jdbc:postgresql://remotedb:5432/ls_test_2"
  }
}

output {
  if "_jdbcstaticdefaultsused" in [tags] {
    # Print all the users that were not found
    stdout { }
  }
}
```

1. Queries an external database to fetch the dataset that will be cached locally.
2. Defines the columns, types, and indexes used to build the local database structure. The column names and types should match the external database. The order of table definitions is significant and should match that of the loader queries. See [Loader column and local_db_object order dependency](#plugins-filters-jdbc_static-object_order).
3. Performs lookup queries on the local database to enrich the events.
4. Local lookup queries can also use prepared statements, where the parameters follow the positional ordering.
5. Specifies the event field that will store the looked-up data. If the lookup returns multiple columns, the data is stored as a JSON object within the field.
6. When the user is not found in the database, an event is created using data from the [`local_lookups`](#plugins-filters-jdbc_static-local_lookups) `default_hash` setting, and the event is tagged with the list set in [`tag_on_default_use`](#plugins-filters-jdbc_static-tag_on_default_use).
7. Takes data from the JSON object and stores it in top-level event fields for easier analysis in Kibana.
8. Runs loaders every 2 hours.

Here’s a full example:

```json
input {
  generator {
    lines => [
      '{"from_ip": "10.2.3.20", "app": "foobar", "amount": 32.95}',
      '{"from_ip": "10.2.3.30", "app": "barfoo", "amount": 82.95}',
      '{"from_ip": "10.2.3.40", "app": "bazfoo", "amount": 22.95}'
    ]
    count => 200
  }
}

filter {
  json {
    source => "message"
  }

  jdbc_static {
    loaders => [
      {
        id => "servers"
        query => "select ip, descr from ref.local_ips order by ip"
        local_table => "servers"
      }
    ]
    local_db_objects => [
      {
        name => "servers"
        index_columns => ["ip"]
        columns => [
          ["ip", "varchar(15)"],
          ["descr", "varchar(255)"]
        ]
      }
    ]
    local_lookups => [
      {
        query => "select descr as description from servers WHERE ip = :ip"
        parameters => {ip => "[from_ip]"}
        target => "server"
      }
    ]
    staging_directory => "/tmp/logstash/jdbc_static/import_data"
    loader_schedule => "*/30 * * * *"
    jdbc_user => "logstash"
    jdbc_password => "logstash??"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_driver_library => "/Users/guy/tmp/logstash-6.0.0/vendor/postgresql-42.1.4.jar"
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/ls_test_2"
  }
}

output {
  stdout {
    codec => rubydebug {metadata => true}
  }
}
```

Assuming the loader fetches the following data from a Postgres database:

```shell
select * from ref.local_ips order by ip;
    ip     |         descr
-----------+-----------------------
 10.2.3.10 | Authentication Server
 10.2.3.20 | Payments Server
 10.2.3.30 | Events Server
 10.2.3.40 | Payroll Server
 10.2.3.50 | Uploads Server
```

The events are enriched with a description of the server based on the value of the IP:

```shell
{
    "app" => "bazfoo",
    "sequence" => 0,
    "server" => [
        [0] {
            "description" => "Payroll Server"
        }
    ],
    "amount" => 22.95,
    "@timestamp" => 2017-11-30T18:08:15.694Z,
    "@version" => "1",
    "host" => "Elastics-MacBook-Pro.local",
    "message" => "{\"from_ip\": \"10.2.3.40\", \"app\": \"bazfoo\", \"amount\": 22.95}",
    "from_ip" => "10.2.3.40"
}
```

## Using this plugin with multiple pipelines [_using_this_plugin_with_multiple_pipelines]

::::{important}
Logstash uses a single, in-memory Apache Derby instance as the lookup database engine for the entire JVM. Because each plugin instance uses a unique database inside the shared Derby engine, there should be no conflicts with plugins attempting to create and populate the same tables. This is true regardless of whether the plugins are defined in a single pipeline or in multiple pipelines. However, after setting up the filter, you should watch the lookup results and view the logs to verify correct operation.
::::

## Loader column and local_db_object order dependency [plugins-filters-jdbc_static-object_order]

::::{important}
For loader performance reasons, the loading mechanism uses a CSV-style file with an inbuilt Derby file-import procedure to add the remote data to the local database. The retrieved columns are written to the CSV file as is, and the file-import procedure expects a 1-to-1 correspondence to the order of the columns specified in the `local_db_objects` settings. Please ensure that this order is in place.
::::

## Compatibility with the Elastic Common Schema (ECS) [plugins-filters-jdbc_static-ecs]

This plugin is compatible with the [Elastic Common Schema (ECS)](ecs://reference/index.md). It behaves the same regardless of ECS compatibility, except for giving a warning when ECS is enabled and `target` isn’t set.

::::{tip}
Set the `target` option to avoid potential schema conflicts.
::::

## Jdbc_static filter configuration options [plugins-filters-jdbc_static-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-jdbc_static-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`jdbc_connection_string`](#plugins-filters-jdbc_static-jdbc_connection_string) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`jdbc_driver_class`](#plugins-filters-jdbc_static-jdbc_driver_class) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`jdbc_driver_library`](#plugins-filters-jdbc_static-jdbc_driver_library) | a valid filesystem path | No |
| [`jdbc_password`](#plugins-filters-jdbc_static-jdbc_password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`jdbc_user`](#plugins-filters-jdbc_static-jdbc_user) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`tag_on_failure`](#plugins-filters-jdbc_static-tag_on_failure) | [array](/reference/configuration-file-structure.md#array) | No |
| [`tag_on_default_use`](#plugins-filters-jdbc_static-tag_on_default_use) | [array](/reference/configuration-file-structure.md#array) | No |
| [`staging_directory`](#plugins-filters-jdbc_static-staging_directory) | [string](/reference/configuration-file-structure.md#string) | No |
| [`loader_schedule`](#plugins-filters-jdbc_static-loader_schedule) | [string](/reference/configuration-file-structure.md#string) | No |
| [`loaders`](#plugins-filters-jdbc_static-loaders) | [array](/reference/configuration-file-structure.md#array) | No |
| [`local_db_objects`](#plugins-filters-jdbc_static-local_db_objects) | [array](/reference/configuration-file-structure.md#array) | No |
| [`local_lookups`](#plugins-filters-jdbc_static-local_lookups) | [array](/reference/configuration-file-structure.md#array) | No |

Also see [Common options](#plugins-filters-jdbc_static-common-options) for a list of options supported by all filter plugins.

### `jdbc_connection_string` [plugins-filters-jdbc_static-jdbc_connection_string]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

JDBC connection string.

### `jdbc_driver_class` [plugins-filters-jdbc_static-jdbc_driver_class]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

JDBC driver class to load, for example, "org.apache.derby.jdbc.ClientDriver".

::::{note}
According to [Issue 43](https://github.com/logstash-plugins/logstash-input-jdbc/issues/43), if you are using the Oracle JDBC driver (ojdbc6.jar), the correct `jdbc_driver_class` is `"Java::oracle.jdbc.driver.OracleDriver"`.
::::

### `jdbc_driver_library` [plugins-filters-jdbc_static-jdbc_driver_library]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Path to a third-party JDBC driver library. Use comma-separated paths in one string if you need multiple libraries.

If the driver class is not provided, the plugin looks for it in the Logstash Java classpath.

### `jdbc_password` [plugins-filters-jdbc_static-jdbc_password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

JDBC password.

### `jdbc_user` [plugins-filters-jdbc_static-jdbc_user]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

JDBC user.

### `tag_on_default_use` [plugins-filters-jdbc_static-tag_on_default_use]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["_jdbcstaticdefaultsused"]`

Append values to the `tags` field if no record was found and default values were used.

### `tag_on_failure` [plugins-filters-jdbc_static-tag_on_failure]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["_jdbcstaticfailure"]`

Append values to the `tags` field if a SQL error occurred.

### `staging_directory` [plugins-filters-jdbc_static-staging_directory]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is derived from the Ruby temp directory + plugin_name + "import_data", e.g. `"/tmp/logstash/jdbc_static/import_data"`

The directory used to stage the data for bulk loading. There should be sufficient disk space to handle the data you wish to use to enrich events. Previous versions of this plugin did not handle loading datasets of more than several thousand rows well, due to an open bug in Apache Derby. This setting introduces an alternative way of loading large recordsets. As each row is received, it is spooled to a file, and that file is then imported using a system *import table* system call.

### `loader_schedule` [plugins-filters-jdbc_static-loader_schedule]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

You can schedule remote loading to run periodically according to a specific schedule. This scheduling syntax is powered by [rufus-scheduler](https://github.com/jmettraux/rufus-scheduler). The syntax is cron-like with some extensions specific to Rufus (for example, timezone support). For more about this syntax, see [parsing cronlines and time strings](https://github.com/jmettraux/rufus-scheduler#parsing-cronlines-and-time-strings).

Examples:

| | |
| --- | --- |
| `*/30 * * * *` | will execute on the 0th and 30th minute of every hour every day. |
| `* 5 * 1-3 *` | will execute every minute of 5am every day of January through March. |
| `0 * * * *` | will execute on the 0th minute of every hour every day. |
| `0 6 * * * America/Chicago` | will execute at 6:00am (UTC/GMT -5) every day. |

Debugging using the Logstash interactive shell:

```shell
bin/logstash -i irb
irb(main):001:0> require 'rufus-scheduler'
=> true
irb(main):002:0> Rufus::Scheduler.parse('*/10 * * * *')
=> #<Rufus::Scheduler::CronLine:0x230f8709 @timezone=nil, @weekdays=nil, @days=nil, @seconds=[0], @minutes=[0, 10, 20, 30, 40, 50], @hours=nil, @months=nil, @monthdays=nil, @original="*/10 * * * *">
irb(main):003:0> exit
```

The object returned by the above call, an instance of `Rufus::Scheduler::CronLine`, shows the seconds, minutes, etc. of execution.

### `loaders` [plugins-filters-jdbc_static-loaders]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

The array should contain one or more Hashes. Each Hash is validated according to the table below.

| Setting | Input type | Required |
| --- | --- | --- |
| id | string | No |
| local_table | string | Yes |
| query | string | Yes |
| max_rows | number | No |
| jdbc_connection_string | string | No |
| jdbc_driver_class | string | No |
| jdbc_driver_library | a valid filesystem path | No |
| jdbc_password | password | No |
| jdbc_user | string | No |

**Loader Field Descriptions:**

id
: An optional identifier. This is used to identify the loader that is generating error messages and log lines.

local_table
: The destination table in the local lookup database that the loader will fill.

query
: The SQL statement that is executed to fetch the remote records. Use SQL aliases and casts to ensure that the record’s columns and datatypes match the table structure in the local database as defined in `local_db_objects`.

max_rows
: The default for this setting is 1 million. Because the lookup database is in-memory, it will take up JVM heap space. If the query returns many millions of rows, you should increase the JVM memory given to Logstash or limit the number of rows returned, perhaps to those most frequently found in the event data.

jdbc_connection_string
: If not set in a loader, this setting defaults to the plugin-level `jdbc_connection_string` setting.

jdbc_driver_class
: If not set in a loader, this setting defaults to the plugin-level `jdbc_driver_class` setting.

jdbc_driver_library
: If not set in a loader, this setting defaults to the plugin-level `jdbc_driver_library` setting.

jdbc_password
: If not set in a loader, this setting defaults to the plugin-level `jdbc_password` setting.

jdbc_user
: If not set in a loader, this setting defaults to the plugin-level `jdbc_user` setting.

### `local_db_objects` [plugins-filters-jdbc_static-local_db_objects]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

The array should contain one or more Hashes. Each Hash represents a table schema for the local lookups database. Each Hash is validated according to the table below.

| Setting | Input type | Required |
| --- | --- | --- |
| name | string | Yes |
| columns | array | Yes |
| index_columns | array | No |
| preserve_existing | boolean | No |

**Local_db_objects Field Descriptions:**

name
: The name of the table to be created in the database.

columns
: An array of column specifications. Each column specification is an array of exactly two elements, for example `["ip", "varchar(15)"]`. The first element is the column name string. The second element is a string that is an [Apache Derby SQL type](https://db.apache.org/derby/docs/10.14/ref/crefsqlj31068.html). The string content is checked when the local lookup tables are built, not when the settings are validated. Therefore, any misspelled SQL type strings result in errors.

index_columns
: An array of strings. Each string must be defined in the `columns` setting. The index name will be generated internally. Unique or sorted indexes are not supported.

preserve_existing
: This setting, when `true`, checks whether the table already exists in the local lookup database. If you have multiple pipelines running in the same instance of Logstash, and more than one pipeline is using this plugin, then you must read the important multiple-pipeline notice at the top of the page.

### `local_lookups` [plugins-filters-jdbc_static-local_lookups]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

The array should contain one or more Hashes. Each Hash represents a lookup enrichment. Each Hash is validated according to the table below.

| Setting | Input type | Required |
| --- | --- | --- |
| id | string | No |
| query | string | Yes |
| parameters | hash | Yes |
| prepared_parameters | array | No |
| target | string | No |
| default_hash | hash | No |
| tag_on_failure | string | No |
| tag_on_default_use | string | No |

**Local_lookups Field Descriptions:**

id
: An optional identifier. This is used to identify the lookup that is generating error messages and log lines. If you omit this setting then a default id is used instead.

query
: A SQL SELECT statement that is executed to achieve the lookup. To use parameters, use named parameter syntax, for example `"SELECT * FROM MYTABLE WHERE ID = :id"`. Alternatively, the `?` sign can be used as a prepared statement parameter, in which case the `prepared_parameters` array is used to populate the values.

parameters
: A key/value Hash or dictionary. The key (LHS) is the text that is substituted for in the SQL statement `SELECT * FROM sensors WHERE reference = :p1`. The value (RHS) is the field name in your event. The plugin reads the value from this key out of the event and substitutes that value into the statement, for example, `parameters => { "p1" => "ref" }`. Quoting is automatic - you do not need to put quotes in the statement. Only use the field interpolation syntax on the RHS if you need to add a prefix/suffix or join two event field values together to build the substitution value. For example, imagine an IOT message that has an id and a location, and you have a table of sensors that have a column of `id-loc_id`. In this case your parameter hash would look like this: `parameters => { "p1" => "%{[id]}-%{[loc_id]}" }`.

prepared_parameters
: An Array whose positions correspond to the positions of the `?` placeholders in the query syntax. The values of the array follow the same semantics as `parameters`. If `prepared_parameters` is set, the filter is forced to use a JDBC prepared statement to query the local database. Prepared statements provide two benefits: on the performance side, they save the DBMS from parsing and compiling the SQL expression for every call; on the security side, they avoid SQL-injection attacks based on query string concatenation. A focused example is shown in the sketch after this list.

target
: An optional name for the field that will receive the looked-up data. If you omit this setting then the `id` setting (or the default id) is used. The looked-up data, an array of results converted to Hashes, is never added to the root of the event. If you want to do this, you should use the `add_field` setting. This means that you are in full control of how the fields/values are put in the root of the event, for example, `add_field => { user_firstname => "%{[user][0][firstname]}" }` - where `[user]` is the target field, `[0]` is the first result in the array, and `[firstname]` is the key in the result hash.

default_hash
: An optional hash that will be put in the target field array when the lookup returns no results. Use this setting if you need to ensure that later references in other parts of the config actually refer to something.

tag_on_failure
: An optional string that overrides the plugin-level setting. This is useful when defining multiple lookups.

tag_on_default_use
: An optional string that overrides the plugin-level setting. This is useful when defining multiple lookups.

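As referenced in the `prepared_parameters` description above, here is a minimal sketch of a single prepared-statement lookup, assuming the `users` table and event fields from the earlier examples on this page:

```ruby
filter {
  jdbc_static {
    # ... loaders and local_db_objects as shown in the examples above ...
    local_lookups => [
      {
        id => "local-users"
        query => "SELECT firstname, lastname FROM users WHERE userid = ?"
        prepared_parameters => ["[loggedin_userid]"]  # first array entry binds to the first ?
        target => "user"
      }
    ]
  }
}
```
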
## Common options [plugins-filters-jdbc_static-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-jdbc_static-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-jdbc_static-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-jdbc_static-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-jdbc_static-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-jdbc_static-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-jdbc_static-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-jdbc_static-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-jdbc_static-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  jdbc_static {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  jdbc_static {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-jdbc_static-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  jdbc_static {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  jdbc_static {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-jdbc_static-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-jdbc_static-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 jdbc_static filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  jdbc_static {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-jdbc_static-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular intervals. Optional.

### `remove_field` [plugins-filters-jdbc_static-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  jdbc_static {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  jdbc_static {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-jdbc_static-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  jdbc_static {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  jdbc_static {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
@@ -1,466 +0,0 @@
---
navigation_title: "jdbc_streaming"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_streaming.html
---

# Jdbc_streaming filter plugin [plugins-filters-jdbc_streaming]

* A component of the [jdbc integration plugin](/reference/plugins-integrations-jdbc.md)
* Integration version: v5.5.2
* Released on: 2024-12-23
* [Changelog](https://github.com/logstash-plugins/logstash-integration-jdbc/blob/v5.5.2/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-jdbc_streaming-index.md).

## Getting help [_getting_help_148]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-integration-jdbc). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_147]

This filter executes a SQL query and stores the result set in the field specified as `target`. It will cache the results locally in an LRU cache with expiry.

For example, you can load a row based on an id in the event.

```ruby
filter {
  jdbc_streaming {
    jdbc_driver_library => "/path/to/mysql-connector-java-5.1.34-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_user => "me"
    jdbc_password => "secret"
    statement => "select * from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
  }
}
```

## Prepared Statements [plugins-filters-jdbc_streaming-prepared_statements]

Using server side prepared statements can speed up execution times as the server optimises the query plan and execution.

::::{note}
Not all JDBC accessible technologies will support prepared statements.
::::

With the introduction of Prepared Statement support comes a different code execution path and some new settings. Most of the existing settings are still useful, but there are several new settings for Prepared Statements to read up on.

Use the boolean setting `use_prepared_statements` to enable this execution mode.

Use the `prepared_statement_name` setting to specify a name for the Prepared Statement; this identifies the prepared statement locally and remotely, and it should be unique in your config and on the database.

Use the `prepared_statement_bind_values` array setting to specify the bind values. Typically, these values are indirectly extracted from your event, i.e. the string in the array refers to a field name in your event. You can also use constant values like numbers or strings, but ensure that any string constant (e.g. a locale constant of "en" or "de") is not also an event field name. It is a good idea to use the bracketed field reference syntax for fields and normal strings for constants, e.g. `prepared_statement_bind_values => ["[src_ip]", "tokyo"],`.

There are 3 possible parameter schemes: interpolated, field references, and constants. Use interpolation when you are prefixing, suffixing or concatenating field values to create a value that exists in your database, e.g. `%{{username}}@%{{domain}}` → `"alice@example.org"`, `%{{distance}}km` → `"42km"`. Use field references for exact field values, e.g. `"[srcip]"` → `"192.168.1.2"`. Use constants when a database column holds values that slice or categorise a number of similar records, e.g. language translations.

A boolean setting `prepared_statement_warn_on_constant_usage`, defaulting to true, controls whether you will see a WARN message logged that warns when constants could be missing the bracketed field reference syntax. If you have set your field references and constants correctly, you should set `prepared_statement_warn_on_constant_usage` to false. This setting and code checks should be deprecated in a future major Logstash release.

The `statement` (or `statement_path`) setting still holds the SQL statement, but to use bind variables you must use the `?` character as a placeholder in the exact order found in the `prepared_statement_bind_values` array. Some technologies may require connection string properties to be set; see the MySQL example below.

Example:

```ruby
filter {
  jdbc_streaming {
    jdbc_driver_library => "/path/to/mysql-connector-java-5.1.34-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase?cachePrepStmts=true&prepStmtCacheSize=250&prepStmtCacheSqlLimit=2048&useServerPrepStmts=true"
    jdbc_user => "me"
    jdbc_password => "secret"
    statement => "select * from WORLD.COUNTRY WHERE Code = ?"
    use_prepared_statements => true
    prepared_statement_name => "lookup_country_info"
    prepared_statement_bind_values => ["[country_code]"]
    target => "country_details"
  }
}
```

## Jdbc_streaming Filter Configuration Options [plugins-filters-jdbc_streaming-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-jdbc_streaming-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`cache_expiration`](#plugins-filters-jdbc_streaming-cache_expiration) | [number](/reference/configuration-file-structure.md#number) | No |
| [`cache_size`](#plugins-filters-jdbc_streaming-cache_size) | [number](/reference/configuration-file-structure.md#number) | No |
| [`default_hash`](#plugins-filters-jdbc_streaming-default_hash) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`jdbc_connection_string`](#plugins-filters-jdbc_streaming-jdbc_connection_string) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`jdbc_driver_class`](#plugins-filters-jdbc_streaming-jdbc_driver_class) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`jdbc_driver_library`](#plugins-filters-jdbc_streaming-jdbc_driver_library) | a valid filesystem path | No |
| [`jdbc_password`](#plugins-filters-jdbc_streaming-jdbc_password) | [password](/reference/configuration-file-structure.md#password) | No |
| [`jdbc_user`](#plugins-filters-jdbc_streaming-jdbc_user) | [string](/reference/configuration-file-structure.md#string) | No |
| [`jdbc_validate_connection`](#plugins-filters-jdbc_streaming-jdbc_validate_connection) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`jdbc_validation_timeout`](#plugins-filters-jdbc_streaming-jdbc_validation_timeout) | [number](/reference/configuration-file-structure.md#number) | No |
| [`parameters`](#plugins-filters-jdbc_streaming-parameters) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`prepared_statement_bind_values`](#plugins-filters-jdbc_streaming-prepared_statement_bind_values) | [array](/reference/configuration-file-structure.md#array) | No |
| [`prepared_statement_name`](#plugins-filters-jdbc_streaming-prepared_statement_name) | [string](/reference/configuration-file-structure.md#string) | No |
| [`prepared_statement_warn_on_constant_usage`](#plugins-filters-jdbc_streaming-prepared_statement_warn_on_constant_usage) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`sequel_opts`](#plugins-filters-jdbc_streaming-sequel_opts) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`statement`](#plugins-filters-jdbc_streaming-statement) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`tag_on_default_use`](#plugins-filters-jdbc_streaming-tag_on_default_use) | [array](/reference/configuration-file-structure.md#array) | No |
| [`tag_on_failure`](#plugins-filters-jdbc_streaming-tag_on_failure) | [array](/reference/configuration-file-structure.md#array) | No |
| [`target`](#plugins-filters-jdbc_streaming-target) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`use_cache`](#plugins-filters-jdbc_streaming-use_cache) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`use_prepared_statements`](#plugins-filters-jdbc_streaming-use_prepared_statements) | [boolean](/reference/configuration-file-structure.md#boolean) | No |

Also see [Common options](#plugins-filters-jdbc_streaming-common-options) for a list of options supported by all filter plugins.
### `cache_expiration` [plugins-filters-jdbc_streaming-cache_expiration]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `5.0`

The minimum number of seconds any entry should remain in the cache. Defaults to 5 seconds.

A numeric value. You can use decimals, for example: `cache_expiration => 0.25`. If there are transient jdbc errors, the cache will store empty results for a given parameter set and bypass the jdbc lookup. This will merge the default_hash into the event until the cache entry expires. Then the jdbc lookup will be tried again for the same parameters. Conversely, while the cache contains valid results, any external problem that would cause jdbc errors will not be noticed for the cache_expiration period.

### `cache_size` [plugins-filters-jdbc_streaming-cache_size]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `500`

The maximum number of cache entries that will be stored. Defaults to 500 entries. The least recently used entry will be evicted.
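The cache settings are typically tuned together with the lookup itself. A minimal sketch, assuming the country lookup from the description above (the numeric values are arbitrary, not recommendations):

```ruby
filter {
  jdbc_streaming {
    # connection settings omitted for brevity
    statement => "select * from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
    use_cache => true
    cache_size => 1000        # keep up to 1000 distinct parameter sets
    cache_expiration => 30.0  # re-run the lookup after 30 seconds
  }
}
```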
### `default_hash` [plugins-filters-jdbc_streaming-default_hash]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

Define a default object to use when the lookup fails to return a matching row. Ensure that the key names of this object match the columns from the statement.
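A minimal sketch, assuming the statement selects `Name` and `Continent` columns; the default object mirrors those keys so that later references to the target stay valid when the lookup comes back empty:

```ruby
filter {
  jdbc_streaming {
    # connection settings omitted for brevity
    statement => "select Name, Continent from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
    # key names match the selected columns
    default_hash => {
      "Name" => "unknown"
      "Continent" => "unknown"
    }
  }
}
```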
### `jdbc_connection_string` [plugins-filters-jdbc_streaming-jdbc_connection_string]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The JDBC connection string.

### `jdbc_driver_class` [plugins-filters-jdbc_streaming-jdbc_driver_class]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The JDBC driver class to load, for example "oracle.jdbc.OracleDriver" or "org.apache.derby.jdbc.ClientDriver".

### `jdbc_driver_library` [plugins-filters-jdbc_streaming-jdbc_driver_library]

* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.

The filesystem path to the third-party JDBC driver library.

### `jdbc_password` [plugins-filters-jdbc_streaming-jdbc_password]

* Value type is [password](/reference/configuration-file-structure.md#password)
* There is no default value for this setting.

The JDBC password.

### `jdbc_user` [plugins-filters-jdbc_streaming-jdbc_user]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The JDBC user.

### `jdbc_validate_connection` [plugins-filters-jdbc_streaming-jdbc_validate_connection]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Connection pool configuration. Validate the connection before use.

### `jdbc_validation_timeout` [plugins-filters-jdbc_streaming-jdbc_validation_timeout]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `3600`

Connection pool configuration. How often to validate a connection (in seconds).
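As an illustration, validation can be switched on and its interval shortened; a hedged sketch with assumed values:

```ruby
filter {
  jdbc_streaming {
    # connection settings omitted for brevity
    jdbc_validate_connection => true
    jdbc_validation_timeout => 600  # validate a pooled connection every 10 minutes
    statement => "select * from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
  }
}
```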
### `parameters` [plugins-filters-jdbc_streaming-parameters]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

Hash of query parameters, for example `{ "id" => "id_field" }`.

### `prepared_statement_bind_values` [plugins-filters-jdbc_streaming-prepared_statement_bind_values]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

Array of bind values for the prepared statement. Use field references and constants. See the section on [prepared statements](#plugins-filters-jdbc_streaming-prepared_statements) for more info.

### `prepared_statement_name` [plugins-filters-jdbc_streaming-prepared_statement_name]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `""`

Name given to the prepared statement. It must be unique in your config and in the database. You need to supply this if `use_prepared_statements` is true.

### `prepared_statement_warn_on_constant_usage` [plugins-filters-jdbc_streaming-prepared_statement_warn_on_constant_usage]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

A flag that controls whether a warning is logged if, in `prepared_statement_bind_values`, a String constant is detected that might be intended as a field reference.
### `sequel_opts` [plugins-filters-jdbc_streaming-sequel_opts]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

General/vendor-specific Sequel configuration options.

An example of an optional connection pool configuration option is `max_connections` - the maximum number of connections the connection pool may hold.

Examples of vendor-specific options can be found in this documentation page: [https://github.com/jeremyevans/sequel/blob/master/doc/opening_databases.rdoc](https://github.com/jeremyevans/sequel/blob/master/doc/opening_databases.rdoc)
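A hedged sketch of passing a pool option through `sequel_opts` (the key name follows the Sequel option mentioned above; the value is an arbitrary assumption):

```ruby
filter {
  jdbc_streaming {
    # connection settings omitted for brevity
    sequel_opts => {
      "max_connections" => 8  # cap the connection pool size
    }
    statement => "select * from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
  }
}
```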
### `statement` [plugins-filters-jdbc_streaming-statement]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Statement to execute. To use parameters, use named parameter syntax, for example "SELECT * FROM MYTABLE WHERE ID = :id".
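Putting `statement` and `parameters` together, a minimal named-parameter sketch; the table name and the `order_id` event field are assumptions:

```ruby
filter {
  jdbc_streaming {
    # connection settings omitted for brevity
    statement => "SELECT * FROM MYTABLE WHERE ID = :id"
    parameters => { "id" => "order_id" }  # :id is bound from the event's order_id field
    target => "order_details"
  }
}
```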
### `tag_on_default_use` [plugins-filters-jdbc_streaming-tag_on_default_use]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["_jdbcstreamingdefaultsused"]`

Append values to the `tags` field if no record was found and default values were used.

### `tag_on_failure` [plugins-filters-jdbc_streaming-tag_on_failure]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["_jdbcstreamingfailure"]`

Append values to the `tags` field if an SQL error occurred.
### `target` [plugins-filters-jdbc_streaming-target]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Define the target field to store the extracted result(s). The field is overwritten if it exists.
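Because the target receives the result set as an array of row hashes, later stages index into it. A hedged sketch, assuming the country lookup above returned at least one row with a `Name` column:

```ruby
filter {
  # after the jdbc_streaming filter above has run
  mutate {
    # copy the Name column of the first result row to the event root
    add_field => { "country_name" => "%{[country_details][0][Name]}" }
  }
}
```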
### `use_cache` [plugins-filters-jdbc_streaming-use_cache]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Enable or disable caching, boolean true or false. Defaults to true.

### `use_prepared_statements` [plugins-filters-jdbc_streaming-use_prepared_statements]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

When set to `true`, enables prepared statement usage.
## Common options [plugins-filters-jdbc_streaming-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-jdbc_streaming-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-jdbc_streaming-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-jdbc_streaming-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-jdbc_streaming-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-jdbc_streaming-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-jdbc_streaming-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-jdbc_streaming-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-jdbc_streaming-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  jdbc_streaming {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  jdbc_streaming {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-jdbc_streaming-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  jdbc_streaming {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  jdbc_streaming {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-jdbc_streaming-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-jdbc_streaming-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 jdbc_streaming filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  jdbc_streaming {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-jdbc_streaming-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular intervals. Optional.

### `remove_field` [plugins-filters-jdbc_streaming-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  jdbc_streaming {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  jdbc_streaming {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-jdbc_streaming-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  jdbc_streaming {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  jdbc_streaming {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
@@ -1,305 +0,0 @@
---
navigation_title: "json"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html
---

# JSON filter plugin [plugins-filters-json]

* Plugin version: v3.2.1
* Released on: 2023-12-18
* [Changelog](https://github.com/logstash-plugins/logstash-filter-json/blob/v3.2.1/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-json-index.md).

## Getting help [_getting_help_149]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-json). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_148]

This is a JSON parsing filter. It takes an existing field which contains JSON and expands it into an actual data structure within the Logstash event.

By default, it will place the parsed JSON in the root (top level) of the Logstash event, but this filter can be configured to place the JSON into any arbitrary event field, using the `target` configuration.

This plugin has a few fallback scenarios when something bad happens during the parsing of the event. If the JSON parsing fails on the data, the event will be untouched and it will be tagged with `_jsonparsefailure`; you can then use conditionals to clean the data, as sketched below. You can configure this tag with the `tag_on_failure` option.

If the parsed data contains a `@timestamp` field, the plugin will try to use it for the event's `@timestamp`, and if the parsing fails, the field will be renamed to `_@timestamp` and the event will be tagged with `_timestampparsefailure`.
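As a hedged sketch of that clean-up pattern (dropping the event is just one possible reaction to the failure tag):

```ruby
filter {
  json {
    source => "message"
    target => "doc"
  }
  if "_jsonparsefailure" in [tags] {
    drop { }  # or route the event elsewhere instead of dropping it
  }
}
```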
## Event Metadata and the Elastic Common Schema (ECS) [plugins-filters-json-ecs_metadata]

The plugin behaves the same regardless of ECS compatibility, except giving a warning when ECS is enabled and `target` isn't set.

::::{tip}
Set the `target` option to avoid potential schema conflicts.
::::

## JSON Filter Configuration Options [plugins-filters-json-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-json-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`ecs_compatibility`](#plugins-filters-json-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
| [`skip_on_invalid_json`](#plugins-filters-json-skip_on_invalid_json) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`source`](#plugins-filters-json-source) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`tag_on_failure`](#plugins-filters-json-tag_on_failure) | [array](/reference/configuration-file-structure.md#array) | No |
| [`target`](#plugins-filters-json-target) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-json-common-options) for a list of options supported by all filter plugins.
### `ecs_compatibility` [plugins-filters-json-ecs_compatibility]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

    * `disabled`: does not use ECS-compatible field names
    * `v1`: Elastic Common Schema compliant behavior (warns when `target` isn't set)

Controls this plugin's compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md). See [Event Metadata and the Elastic Common Schema (ECS)](#plugins-filters-json-ecs_metadata) for detailed information.
### `skip_on_invalid_json` [plugins-filters-json-skip_on_invalid_json]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Allows for skipping the filter on invalid JSON (this allows you to handle JSON and non-JSON data without warnings).
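A minimal sketch of the mixed-data case; the `message` source and `doc` target are assumptions:

```ruby
filter {
  json {
    source => "message"
    target => "doc"
    skip_on_invalid_json => true  # leave non-JSON events untouched, without warnings
  }
}
```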
### `source` [plugins-filters-json-source]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The configuration for the JSON filter:

```ruby
source => source_field
```

For example, if you have JSON data in the `message` field:

```ruby
filter {
  json {
    source => "message"
  }
}
```

The above would parse the JSON from the `message` field.

### `tag_on_failure` [plugins-filters-json-tag_on_failure]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["_jsonparsefailure"]`

Append values to the `tags` field when there has been no successful match.

### `target` [plugins-filters-json-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Define the target field for placing the parsed data. If this setting is omitted, the JSON data will be stored at the root (top level) of the event.

For example, if you want the data to be put in the `doc` field:

```ruby
filter {
  json {
    target => "doc"
  }
}
```

JSON in the value of the `source` field will be expanded into a data structure in the `target` field.

::::{note}
If the `target` field already exists, it will be overwritten!
::::

## Common options [plugins-filters-json-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-json-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-json-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-json-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-json-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-json-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-json-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-json-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-json-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  json {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  json {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-json-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  json {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  json {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-json-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-json-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 json filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  json {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-json-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular intervals. Optional.

### `remove_field` [plugins-filters-json-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  json {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  json {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-json-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  json {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  json {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
@@ -1,243 +0,0 @@
---
navigation_title: "json_encode"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-json_encode.html
---

# Json_encode filter plugin [plugins-filters-json_encode]

* Plugin version: v3.0.3
* Released on: 2017-11-07
* [Changelog](https://github.com/logstash-plugins/logstash-filter-json_encode/blob/v3.0.3/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-json_encode-index.md).

## Installation [_installation_62]

For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-json_encode`. See [Working with plugins](/reference/working-with-plugins.md) for more details.

## Getting help [_getting_help_150]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-json_encode). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_149]

JSON encode filter. Takes a field and serializes it into JSON.

If no target is specified, the source field is overwritten with the JSON text.

For example, if you have a field named `foo`, and you want to store the JSON encoded string in `bar`, do this:

```ruby
filter {
  json_encode {
    source => "foo"
    target => "bar"
  }
}
```

## Json_encode Filter Configuration Options [plugins-filters-json_encode-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-json_encode-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`source`](#plugins-filters-json_encode-source) | [string](/reference/configuration-file-structure.md#string) | Yes |
| [`target`](#plugins-filters-json_encode-target) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-json_encode-common-options) for a list of options supported by all filter plugins.

### `source` [plugins-filters-json_encode-source]

* This is a required setting.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The field to convert to JSON.

### `target` [plugins-filters-json_encode-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The field to write the JSON into. If not specified, the source field will be overwritten.
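When `target` is omitted, the encoded text replaces the source field in place; a minimal sketch, assuming a field named `foo`:

```ruby
filter {
  json_encode {
    source => "foo"  # foo now holds the JSON string form of its previous value
  }
}
```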
## Common options [plugins-filters-json_encode-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-json_encode-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-json_encode-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-json_encode-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-json_encode-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-json_encode-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-json_encode-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-json_encode-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-json_encode-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  json_encode {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  json_encode {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-json_encode-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  json_encode {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  json_encode {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would add the tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-json_encode-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-json_encode-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 json_encode filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  json_encode {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-json_encode-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular intervals. Optional.

### `remove_field` [plugins-filters-json_encode-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  json_encode {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  json_encode {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-json_encode-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  json_encode {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  json_encode {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```

If the event has field `"somefield" == "hello"`, this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
@@ -1,687 +0,0 @@
---
navigation_title: "kv"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html
---

# Kv filter plugin [plugins-filters-kv]

* Plugin version: v4.7.0
* Released on: 2022-03-04
* [Changelog](https://github.com/logstash-plugins/logstash-filter-kv/blob/v4.7.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-kv-index.md).

## Getting help [_getting_help_151]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-kv). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_150]

This filter helps automatically parse messages (or specific event fields) which are of the `foo=bar` variety.

For example, if you have a log message which contains `ip=1.2.3.4 error=REFUSED`, you can parse those automatically by configuring:

```ruby
filter {
  kv { }
}
```

The above will result in a message of `ip=1.2.3.4 error=REFUSED` having the fields:

* `ip: 1.2.3.4`
* `error: REFUSED`

This is great for postfix, iptables, and other types of logs that tend towards `key=value` syntax.

You can configure any arbitrary strings to split your data on, in case your data is not structured using `=` signs and whitespace. For example, this filter can also be used to parse query parameters like `foo=bar&baz=fizz` by setting the `field_split` parameter to `&`.

## Event Metadata and the Elastic Common Schema (ECS) [plugins-filters-kv-ecs_metadata]

The plugin behaves the same regardless of ECS compatibility, except giving a warning when ECS is enabled and `target` isn't set.

::::{tip}
Set the `target` option to avoid potential schema conflicts.
::::

## Kv Filter Configuration Options [plugins-filters-kv-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-kv-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`allow_duplicate_values`](#plugins-filters-kv-allow_duplicate_values) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`allow_empty_values`](#plugins-filters-kv-allow_empty_values) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`default_keys`](#plugins-filters-kv-default_keys) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`ecs_compatibility`](#plugins-filters-kv-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
| [`exclude_keys`](#plugins-filters-kv-exclude_keys) | [array](/reference/configuration-file-structure.md#array) | No |
| [`field_split`](#plugins-filters-kv-field_split) | [string](/reference/configuration-file-structure.md#string) | No |
| [`field_split_pattern`](#plugins-filters-kv-field_split_pattern) | [string](/reference/configuration-file-structure.md#string) | No |
| [`include_brackets`](#plugins-filters-kv-include_brackets) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`include_keys`](#plugins-filters-kv-include_keys) | [array](/reference/configuration-file-structure.md#array) | No |
| [`prefix`](#plugins-filters-kv-prefix) | [string](/reference/configuration-file-structure.md#string) | No |
| [`recursive`](#plugins-filters-kv-recursive) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_char_key`](#plugins-filters-kv-remove_char_key) | [string](/reference/configuration-file-structure.md#string) | No |
| [`remove_char_value`](#plugins-filters-kv-remove_char_value) | [string](/reference/configuration-file-structure.md#string) | No |
| [`source`](#plugins-filters-kv-source) | [string](/reference/configuration-file-structure.md#string) | No |
| [`target`](#plugins-filters-kv-target) | [string](/reference/configuration-file-structure.md#string) | No |
| [`tag_on_failure`](#plugins-filters-kv-tag_on_failure) | [array](/reference/configuration-file-structure.md#array) | No |
| [`tag_on_timeout`](#plugins-filters-kv-tag_on_timeout) | [string](/reference/configuration-file-structure.md#string) | No |
| [`timeout_millis`](#plugins-filters-kv-timeout_millis) | [number](/reference/configuration-file-structure.md#number) | No |
| [`transform_key`](#plugins-filters-kv-transform_key) | [string](/reference/configuration-file-structure.md#string), one of `["lowercase", "uppercase", "capitalize"]` | No |
| [`transform_value`](#plugins-filters-kv-transform_value) | [string](/reference/configuration-file-structure.md#string), one of `["lowercase", "uppercase", "capitalize"]` | No |
| [`trim_key`](#plugins-filters-kv-trim_key) | [string](/reference/configuration-file-structure.md#string) | No |
| [`trim_value`](#plugins-filters-kv-trim_value) | [string](/reference/configuration-file-structure.md#string) | No |
| [`value_split`](#plugins-filters-kv-value_split) | [string](/reference/configuration-file-structure.md#string) | No |
| [`value_split_pattern`](#plugins-filters-kv-value_split_pattern) | [string](/reference/configuration-file-structure.md#string) | No |
| [`whitespace`](#plugins-filters-kv-whitespace) | [string](/reference/configuration-file-structure.md#string), one of `["strict", "lenient"]` | No |

Also see [Common options](#plugins-filters-kv-common-options) for a list of options supported by all filter plugins.
### `allow_duplicate_values` [plugins-filters-kv-allow_duplicate_values]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

A bool option for removing duplicate key/value pairs. When set to false, only one unique key/value pair will be preserved.

For example, consider a source like `from=me from=me`. `[from]` will map to an Array with two elements: `["me", "me"]`. To only keep unique key/value pairs, you could use this configuration:

```ruby
filter {
  kv {
    allow_duplicate_values => false
  }
}
```
### `allow_empty_values` [plugins-filters-kv-allow_empty_values]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
A bool option for explicitly including empty values. When set to true, empty values will be added to the event.
|
||||
|
||||
::::{note}
|
||||
Parsing empty values typically requires [`whitespace => strict`](#plugins-filters-kv-whitespace).
|
||||
::::
|
||||
|
||||
|
||||
|
||||
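For example, to keep the empty `b` value from a query-string-like source such as `a=1&b=&c=3`, you could combine this option with strict whitespace handling (a minimal sketch; `&` as the field splitter is an assumption about the input):

```ruby
filter {
  kv {
    field_split => "&"
    allow_empty_values => true
    whitespace => "strict"  # lets a field splitter right after `=` mark an empty value
  }
}
```
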
### `default_keys` [plugins-filters-kv-default_keys]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

A hash specifying the default keys and their values which should be added to the event in case these keys do not exist in the source field being parsed.

```ruby
filter {
  kv {
    default_keys => [ "from", "logstash@example.com",
                      "to", "default@dev.null" ]
  }
}
```

### `ecs_compatibility` [plugins-filters-kv-ecs_compatibility]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Supported values are:

    * `disabled`: does not use ECS-compatible field names
    * `v1`: Elastic Common Schema compliant behavior (warns when `target` isn’t set)

Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md). See [Event Metadata and the Elastic Common Schema (ECS)](#plugins-filters-kv-ecs_metadata) for detailed information.

### `exclude_keys` [plugins-filters-kv-exclude_keys]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

An array specifying the parsed keys which should not be added to the event. By default no keys will be excluded.

For example, consider a source like `Hey, from=<abc>, to=def foo=bar`. To exclude `from` and `to`, but retain the `foo` key, you could use this configuration:

```ruby
filter {
  kv {
    exclude_keys => [ "from", "to" ]
  }
}
```

### `field_split` [plugins-filters-kv-field_split]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `" "`

A string of characters to use as single-character field delimiters for parsing out key-value pairs.

These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.

**Example with URL Query Strings**

For example, to split out the args from a URL query string such as `?pin=12345~0&d=123&e=foo@bar.com&oq=bobo&ss=12345`:

```ruby
filter {
  kv {
    field_split => "&?"
  }
}
```

The above splits on both `&` and `?` characters, giving you the following fields:

* `pin: 12345~0`
* `d: 123`
* `e: foo@bar.com`
* `oq: bobo`
* `ss: 12345`

### `field_split_pattern` [plugins-filters-kv-field_split_pattern]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

A regex expression to use as a field delimiter for parsing out key-value pairs. Useful to define multi-character field delimiters. Setting the `field_split_pattern` option will take precedence over the `field_split` option.

Note that you should avoid using captured groups in your regex and you should be cautious with lookaheads or lookbehinds and positional anchors.

For example, to split fields on a repetition of one or more colons `k1=v1:k2=v2::k3=v3:::k4=v4`:

```ruby
filter { kv { field_split_pattern => ":+" } }
```

To split fields on a regex character that needs escaping like the plus sign `k1=v1++k2=v2++k3=v3++k4=v4`:

```ruby
filter { kv { field_split_pattern => "\\+\\+" } }
```

### `include_brackets` [plugins-filters-kv-include_brackets]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

A boolean specifying whether to treat square brackets, angle brackets, and parentheses as value "wrappers" that should be removed from the value.

```ruby
filter {
  kv {
    include_brackets => true
  }
}
```

For example, the result of this line: `bracketsone=(hello world) bracketstwo=[hello world] bracketsthree=<hello world>`

will be:

* bracketsone: hello world
* bracketstwo: hello world
* bracketsthree: hello world

instead of:

* bracketsone: (hello
* bracketstwo: [hello
* bracketsthree: <hello

### `include_keys` [plugins-filters-kv-include_keys]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

An array specifying the parsed keys which should be added to the event. By default all keys will be added.

For example, consider a source like `Hey, from=<abc>, to=def foo=bar`. To include `from` and `to`, but exclude the `foo` key, you could use this configuration:

```ruby
filter {
  kv {
    include_keys => [ "from", "to" ]
  }
}
```

### `prefix` [plugins-filters-kv-prefix]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `""`

A string to prepend to all of the extracted keys.

For example, to prepend `arg_` to all keys:

```ruby
filter { kv { prefix => "arg_" } }
```

### `recursive` [plugins-filters-kv-recursive]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

A boolean specifying whether to drill down into values and recursively get more key-value pairs from them. The extra key-value pairs will be stored as subkeys of the root key.

The default is not to recurse into values.

```ruby
filter {
  kv {
    recursive => "true"
  }
}
```

### `remove_char_key` [plugins-filters-kv-remove_char_key]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

A string of characters to remove from the key.

These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.

Contrary to the trim option, all characters are removed from the key, whatever their position.

For example, to remove `<`, `>`, `[`, `]` and `,` characters from keys:

```ruby
filter {
  kv {
    remove_char_key => "<>\[\],"
  }
}
```

### `remove_char_value` [plugins-filters-kv-remove_char_value]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

A string of characters to remove from the value.

These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.

Contrary to the trim option, all characters are removed from the value, whatever their position.

For example, to remove `<`, `>`, `[`, `]` and `,` characters from values:

```ruby
filter {
  kv {
    remove_char_value => "<>\[\],"
  }
}
```

### `source` [plugins-filters-kv-source]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"message"`

The field to perform `key=value` searching on.

For example, to process the `not_the_message` field:

```ruby
filter { kv { source => "not_the_message" } }
```

### `target` [plugins-filters-kv-target]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

The name of the container to put all of the key-value pairs into.

If this setting is omitted, fields will be written to the root of the event, as individual fields.

For example, to place all keys into the event field `kv`:

```ruby
filter { kv { target => "kv" } }
```

### `tag_on_failure` [plugins-filters-kv-tag_on_failure]

* Value type is [array](/reference/configuration-file-structure.md#array)
* The default value for this setting is `["_kv_filter_error"]`.

When a kv operation causes a runtime exception to be thrown within the plugin, the operation is safely aborted without crashing the plugin, and the event is tagged with the provided values.

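For example, to use a custom tag instead of the default (the tag name below is illustrative, not a predefined value):

```ruby
filter {
  kv {
    tag_on_failure => ["_kv_parse_error"]  # illustrative custom tag name
  }
}
```
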
### `tag_on_timeout` [plugins-filters-kv-tag_on_timeout]

* Value type is [string](/reference/configuration-file-structure.md#string)
* The default value for this setting is `_kv_filter_timeout`.

When timeouts are enabled and a kv operation is aborted, the event is tagged with the provided value (see: [`timeout_millis`](#plugins-filters-kv-timeout_millis)).

### `timeout_millis` [plugins-filters-kv-timeout_millis]

* Value type is [number](/reference/configuration-file-structure.md#number)
* The default value for this setting is 30000 (30 seconds).
* Set to zero (`0`) to disable timeouts.

Timeouts provide a safeguard against inputs that trigger pathological behavior in the regular expressions used to extract key/value pairs. When parsing an event exceeds this threshold, the operation is aborted and the event is tagged in order to prevent the operation from blocking the pipeline (see: [`tag_on_timeout`](#plugins-filters-kv-tag_on_timeout)).

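For example, to give each event a tighter one-second parsing budget before it is tagged and skipped (the value is illustrative):

```ruby
filter {
  kv {
    timeout_millis => 1000  # abort parsing after 1 second; 0 disables the safeguard
  }
}
```
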
### `transform_key` [plugins-filters-kv-transform_key]

* Value can be any of: `lowercase`, `uppercase`, `capitalize`
* There is no default value for this setting.

Transform keys to lower case, upper case or capitals.

For example, to lowercase all keys:

```ruby
filter {
  kv {
    transform_key => "lowercase"
  }
}
```

### `transform_value` [plugins-filters-kv-transform_value]

* Value can be any of: `lowercase`, `uppercase`, `capitalize`
* There is no default value for this setting.

Transform values to lower case, upper case or capitals.

For example, to capitalize all values:

```ruby
filter {
  kv {
    transform_value => "capitalize"
  }
}
```

### `trim_key` [plugins-filters-kv-trim_key]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

A string of characters to trim from the key. This is useful if your keys are wrapped in brackets or start with space.

These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.

Only leading and trailing characters are trimmed from the key.

For example, to trim `<`, `>`, `[`, `]` and `,` characters from keys:

```ruby
filter {
  kv {
    trim_key => "<>\[\],"
  }
}
```

### `trim_value` [plugins-filters-kv-trim_value]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

A string of characters to trim from the value. This is useful if your values are wrapped in brackets or are terminated with commas (like postfix logs).

These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.

Only leading and trailing characters are trimmed from the value.

For example, to trim `<`, `>`, `[`, `]` and `,` characters from values:

```ruby
filter {
  kv {
    trim_value => "<>\[\],"
  }
}
```

### `value_split` [plugins-filters-kv-value_split]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"="`

A non-empty string of characters to use as single-character value delimiters for parsing out key-value pairs.

These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.

For example, to identify key-values such as `key1:value1 key2:value2`:

```ruby
filter { kv { value_split => ":" } }
```

### `value_split_pattern` [plugins-filters-kv-value_split_pattern]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

A regex expression to use as a value delimiter for parsing out key-value pairs. Useful to define multi-character value delimiters. Setting the `value_split_pattern` option will take precedence over the `value_split` option.

Note that you should avoid using captured groups in your regex and you should be cautious with lookaheads or lookbehinds and positional anchors.

See [`field_split_pattern`](#plugins-filters-kv-field_split_pattern) for examples.

### `whitespace` [plugins-filters-kv-whitespace]

* Value can be any of: `lenient`, `strict`
* Default value is `lenient`

An option specifying whether to be *lenient* or *strict* with the acceptance of unnecessary whitespace surrounding the configured value-split sequence.

By default the plugin is run in `lenient` mode, which ignores spaces that occur before or after the value-splitter. While this allows the plugin to make reasonable guesses with most input, in some situations it may be too lenient.

You may want to enable `whitespace => strict` mode if you have control of the input data and can guarantee that no extra spaces are added surrounding the pattern you have defined for splitting values. Doing so will ensure that a *field-splitter* sequence immediately following a *value-splitter* will be interpreted as an empty field.

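For example, given a source like `k1=v1 k2= k3=v3`, strict mode interprets the field splitter immediately following `k2=` as marking an empty value instead of guessing (a minimal sketch; pair it with [`allow_empty_values`](#plugins-filters-kv-allow_empty_values) to keep the empty field on the event):

```ruby
filter {
  kv {
    whitespace => "strict"
    allow_empty_values => true  # keep k2 on the event with an empty value
  }
}
```
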
## Common options [plugins-filters-kv-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-kv-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-kv-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-kv-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-kv-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-kv-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-kv-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-kv-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-kv-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  kv {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  kv {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-kv-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  kv {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  kv {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-kv-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-kv-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 kv filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  kv {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-kv-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at a regular interval. Optional.

### `remove_field` [plugins-filters-kv-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  kv {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  kv {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-kv-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  kv {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  kv {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@ -1,347 +0,0 @@

---
navigation_title: "memcached"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-memcached.html
---

# Memcached filter plugin [plugins-filters-memcached]

* Plugin version: v1.2.0
* Released on: 2023-01-18
* [Changelog](https://github.com/logstash-plugins/logstash-filter-memcached/blob/v1.2.0/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-memcached-index.md).

## Getting help [_getting_help_152]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-memcached). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_151]

The Memcached filter provides integration with external data in Memcached.

It currently provides the following facilities:

* `get`: get values for one or more memcached keys and inject them into the event at the provided paths
* `set`: set values from the event to the corresponding memcached keys

## Examples [_examples_2]

This plugin enables key/value lookup enrichment against a Memcached object caching system. You can use this plugin to query for a value, and set it if not found.

### GET example [_get_example]

```txt
memcached {
  hosts => ["localhost"]
  namespace => "convert_mm"
  get => {
    "%{millimeters}" => "[inches]"
  }
  add_tag => ["from_cache"]
  id => "memcached-get"
}
```

### SET example [_set_example]

```txt
memcached {
  hosts => ["localhost"]
  namespace => "convert_mm"
  set => {
    "[inches]" => "%{millimeters}"
  }
  id => "memcached-set"
}
```

## Memcached Filter Configuration Options [plugins-filters-memcached-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-memcached-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`hosts`](#plugins-filters-memcached-hosts) | [array](/reference/configuration-file-structure.md#array) | No |
| [`namespace`](#plugins-filters-memcached-namespace) | [string](/reference/configuration-file-structure.md#string) | No |
| [`get`](#plugins-filters-memcached-get) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`set`](#plugins-filters-memcached-set) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`tag_on_failure`](#plugins-filters-memcached-tag_on_failure) | [string](/reference/configuration-file-structure.md#string) | No |
| [`ttl`](#plugins-filters-memcached-ttl) | [number](/reference/configuration-file-structure.md#number) | No |

Also see [Common options](#plugins-filters-memcached-common-options) for a list of options supported by all filter plugins.

### `hosts` [plugins-filters-memcached-hosts]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `localhost`

The `hosts` parameter accepts an array of addresses corresponding to memcached instances.

Hosts can be specified via FQDN (e.g., `example.com`), an IPv4 address (e.g., `123.45.67.89`), or an IPv6 address (e.g., `::1` or `2001:0db8:85a3:0000:0000:8a2e:0370:7334`). If your memcached host uses a non-standard port, the port can be specified by appending a colon (`:`) and the port number; to include a port with an IPv6 address, the address must first be wrapped in square brackets (`[` and `]`), e.g., `[::1]:11211`.

If more than one host is specified, requests will be distributed to the given hosts using a modulus of the CRC-32 checksum of each key.

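For example, a sketch that distributes keys across two instances, one of them listening on a non-standard port (the hostnames are illustrative):

```ruby
filter {
  memcached {
    hosts => ["cache1.example.com", "cache2.example.com:11212"]
    get => { "memcached-key-1" => "field1" }
  }
}
```
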
### `namespace` [plugins-filters-memcached-namespace]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

If specified, prefix all memcached keys with the given string followed by a colon (`:`); this is useful if all keys being used by this plugin share a common prefix.

Example:

In the following configuration, we would GET `fruit:banana` and `fruit:apple` from memcached:

```
filter {
  memcached {
    namespace => "fruit"
    get => {
      "banana" => "[fruit-stats][banana]"
      "apple" => "[fruit-stats][apple]"
    }
  }
}
```

### `get` [plugins-filters-memcached-get]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

If specified, get the values for the given keys from memcached, and store them in the corresponding fields on the event.

* keys are interpolated (e.g., if the event has a field `foo` with value `bar`, the key `sand/%{{foo}}` will evaluate to `sand/bar`)
* fields can be nested references

```
filter {
  memcached {
    get => {
      "memcached-key-1" => "field1"
      "memcached-key-2" => "[nested][field2]"
    }
  }
}
```

### `set` [plugins-filters-memcached-set]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

If specified, extracts the values from the given event fields, and sets the corresponding keys to those values in memcached with the configured [ttl](#plugins-filters-memcached-ttl).

* keys are interpolated (e.g., if the event has a field `foo` with value `bar`, the key `sand/%{{foo}}` will evaluate to `sand/bar`)
* fields can be nested references

```
filter {
  memcached {
    set => {
      "field1" => "memcached-key-1"
      "[nested][field2]" => "memcached-key-2"
    }
  }
}
```

### `tag_on_failure` [plugins-filters-memcached-tag_on_failure]

* Value type is [string](/reference/configuration-file-structure.md#string)
* The default value for this setting is `_memcached_failure`.

When a memcached operation causes a runtime exception to be thrown within the plugin, the operation is safely aborted without crashing the plugin, and the event is tagged with the provided value.

### `ttl` [plugins-filters-memcached-ttl]

* Value type is [number](/reference/configuration-file-structure.md#number)
* The default value is `0` (no expiry)

For usages of this plugin that persist data to memcached (e.g., [`set`](#plugins-filters-memcached-set)), the time-to-live in seconds.

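For example, a sketch that writes a value to memcached and lets it expire after five minutes (field and key names are illustrative):

```ruby
filter {
  memcached {
    hosts => ["localhost"]
    set => { "field1" => "memcached-key-1" }
    ttl => 300  # expire the cached value after 300 seconds
  }
}
```
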
## Common options [plugins-filters-memcached-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-memcached-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-memcached-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-memcached-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-memcached-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-memcached-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-memcached-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-memcached-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-memcached-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  memcached {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  memcached {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-memcached-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  memcached {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  memcached {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-memcached-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-memcached-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 memcached filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  memcached {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-memcached-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at a regular interval. Optional.

### `remove_field` [plugins-filters-memcached-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  memcached {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  memcached {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-memcached-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  memcached {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  memcached {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@ -1,280 +0,0 @@

---
navigation_title: "metricize"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-metricize.html
---

# Metricize filter plugin [plugins-filters-metricize]

* Plugin version: v3.0.3
* Released on: 2017-11-07
* [Changelog](https://github.com/logstash-plugins/logstash-filter-metricize/blob/v3.0.3/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-metricize-index.md).

## Installation [_installation_63]

For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-metricize`. See [Working with plugins](/reference/working-with-plugins.md) for more details.

## Getting help [_getting_help_153]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-metricize). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_152]

The metricize filter takes complex events containing a number of metrics and splits these up into multiple events, each holding a single metric.

Example:

Assume the following filter configuration:

```
filter {
  metricize {
    metrics => [ "metric1", "metric2" ]
  }
}
```

Assuming the following event is passed in:

```
{
  type => "type A"
  metric1 => "value1"
  metric2 => "value2"
}
```

This will result in the following 2 events being generated in addition to the original event:

```
{                      {
  type => "type A"       type => "type A"
  metric => "metric1"    metric => "metric2"
  value => "value1"      value => "value2"
}                      }
```

## Metricize Filter Configuration Options [plugins-filters-metricize-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-metricize-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`drop_original_event`](#plugins-filters-metricize-drop_original_event) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`metric_field_name`](#plugins-filters-metricize-metric_field_name) | [string](/reference/configuration-file-structure.md#string) | No |
| [`metrics`](#plugins-filters-metricize-metrics) | [array](/reference/configuration-file-structure.md#array) | Yes |
| [`value_field_name`](#plugins-filters-metricize-value_field_name) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-metricize-common-options) for a list of options supported by all filter plugins.

### `drop_original_event` [plugins-filters-metricize-drop_original_event]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Flag indicating whether the original event should be dropped or not.

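For example, to keep only the generated single-metric events and discard the combined original (a minimal sketch):

```ruby
filter {
  metricize {
    metrics => [ "metric1", "metric2" ]
    drop_original_event => true  # emit only the per-metric events
  }
}
```
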
### `metric_field_name` [plugins-filters-metricize-metric_field_name]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"metric"`

Name of the field the metric name will be written to.

### `metrics` [plugins-filters-metricize-metrics]

* This is a required setting.
* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

A new metrics event will be created for each metric field in this list. All fields in this list will be removed from generated events.

### `value_field_name` [plugins-filters-metricize-value_field_name]

* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"value"`

Name of the field the metric value will be written to.

## Common options [plugins-filters-metricize-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-metricize-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-metricize-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-metricize-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-metricize-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-metricize-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-metricize-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-metricize-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-metricize-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  metricize {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  metricize {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-metricize-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  metricize {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  metricize {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-metricize-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-metricize-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 metricize filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  metricize {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-metricize-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at a regular interval. Optional.

### `remove_field` [plugins-filters-metricize-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  metricize {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  metricize {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-metricize-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  metricize {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  metricize {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@ -1,387 +0,0 @@

---
navigation_title: "metrics"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html
---

# Metrics filter plugin [plugins-filters-metrics]

* Plugin version: v4.0.7
* Released on: 2021-01-20
* [Changelog](https://github.com/logstash-plugins/logstash-filter-metrics/blob/v4.0.7/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-metrics-index.md).

## Getting help [_getting_help_154]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/logstash-plugins/logstash-filter-metrics). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_153]

The metrics filter is useful for aggregating metrics.

::::{important}
Elasticsearch 2.0 no longer allows field names with dots. Version 3.0 of the metrics filter plugin changes behavior to use nested fields rather than dotted notation to avoid colliding with versions of Elasticsearch 2.0+. Please note the changes in the documentation (underscores and sub-fields used).
::::

For example, if you have a field `response` that is an HTTP response code, and you want to count each kind of response, you can do this:

```ruby
filter {
  metrics {
    meter => [ "http_%{response}" ]
    add_tag => "metric"
  }
}
```

Metrics are flushed every 5 seconds by default or according to `flush_interval`. Metrics appear as new events in the event stream and pass through any filters that occur later in the pipeline, as well as outputs.

In general, you will want to add a tag to your metrics and have an output explicitly look for that tag.

The event that is flushed will include every *meter* and *timer* metric in the following way:

## `meter` values [_meter_values]

For a `meter => "thing"` you will receive the following fields:

* "[thing][count]" - the total count of events
* "[thing][rate_1m]" - the per-second event rate in a 1-minute sliding window
* "[thing][rate_5m]" - the per-second event rate in a 5-minute sliding window
* "[thing][rate_15m]" - the per-second event rate in a 15-minute sliding window

## `timer` values [_timer_values]

For a `timer => { "thing" => "%{{duration}}" }` you will receive the following fields:

* "[thing][count]" - the total count of events
* "[thing][rate_1m]" - the per-second average value in a 1-minute sliding window
* "[thing][rate_5m]" - the per-second average value in a 5-minute sliding window
* "[thing][rate_15m]" - the per-second average value in a 15-minute sliding window
* "[thing][min]" - the minimum value seen for this metric
* "[thing][max]" - the maximum value seen for this metric
* "[thing][stddev]" - the standard deviation for this metric
* "[thing][mean]" - the mean for this metric
* "[thing][pXX]" - the XXth percentile for this metric (see `percentiles`)

The default lengths of the event rate window (1, 5, and 15 minutes) can be configured with the `rates` option.

## Example: Computing event rate [_example_computing_event_rate]

For a simple example, let’s track how many events per second are running through logstash:

```ruby
input {
  generator {
    type => "generated"
  }
}

filter {
  if [type] == "generated" {
    metrics {
      meter => "events"
      add_tag => "metric"
    }
  }
}

output {
  # only emit events with the 'metric' tag
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "rate: %{[events][rate_1m]}"
      }
    }
  }
}
```

Running the above:

```ruby
% bin/logstash -f example.conf
rate: 23721.983566819246
rate: 24811.395722536377
rate: 25875.892745934525
rate: 26836.42375967113
```

We see the output includes our events' 1-minute rate.

In the real world, you would emit this to graphite or another metrics store, like so:

```ruby
output {
  graphite {
    metrics => [ "events.rate_1m", "%{[events][rate_1m]}" ]
  }
}
```

## Metrics Filter Configuration Options [plugins-filters-metrics-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-metrics-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`clear_interval`](#plugins-filters-metrics-clear_interval) | [number](/reference/configuration-file-structure.md#number) | No |
| [`flush_interval`](#plugins-filters-metrics-flush_interval) | [number](/reference/configuration-file-structure.md#number) | No |
| [`ignore_older_than`](#plugins-filters-metrics-ignore_older_than) | [number](/reference/configuration-file-structure.md#number) | No |
| [`meter`](#plugins-filters-metrics-meter) | [array](/reference/configuration-file-structure.md#array) | No |
| [`percentiles`](#plugins-filters-metrics-percentiles) | [array](/reference/configuration-file-structure.md#array) | No |
| [`rates`](#plugins-filters-metrics-rates) | [array](/reference/configuration-file-structure.md#array) | No |
| [`timer`](#plugins-filters-metrics-timer) | [hash](/reference/configuration-file-structure.md#hash) | No |

Also see [Common options](#plugins-filters-metrics-common-options) for a list of options supported by all filter plugins.

### `clear_interval` [plugins-filters-metrics-clear_interval]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `-1`

The clear interval, when all counters are reset.

If set to -1, the default value, the metrics will never be cleared. Otherwise, it should be a multiple of 5s.

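For example, to reset all counters every minute (a sketch; 60 is a multiple of the 5s flush cycle):

```ruby
filter {
  metrics {
    meter => [ "events" ]
    clear_interval => 60  # reset all counters every 60 seconds
  }
}
```
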
### `flush_interval` [plugins-filters-metrics-flush_interval]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `5`

The flush interval, when the metrics event is created. Must be a multiple of 5s.

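For example, to emit a metrics event every 10 seconds instead of the default 5 (a minimal sketch):

```ruby
filter {
  metrics {
    meter => [ "events" ]
    flush_interval => 10  # create a metrics event every 10 seconds
  }
}
```
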
### `ignore_older_than` [plugins-filters-metrics-ignore_older_than]

* Value type is [number](/reference/configuration-file-structure.md#number)
* Default value is `0`

Don’t track events that have `@timestamp` older than some number of seconds.

This is useful if you want to only include events that are near real-time in your metrics.

For example, to only count events that are within 10 seconds of real-time, you would do this:

```
filter {
  metrics {
    meter => [ "hits" ]
    ignore_older_than => 10
  }
}
```

### `meter` [plugins-filters-metrics-meter]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

syntax: `meter => [ "name of metric", "name of metric" ]`

### `percentiles` [plugins-filters-metrics-percentiles]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[1, 5, 10, 90, 95, 99, 100]`

The percentiles that should be measured and emitted for timer values.

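For example, to emit only the median, 95th, and 99th percentiles for a timer (a minimal sketch; the `duration` field is an assumed input):

```ruby
filter {
  metrics {
    timer => { "request_time" => "%{duration}" }
    percentiles => [50, 95, 99]  # emits [request_time][p50], [request_time][p95], [request_time][p99]
  }
}
```
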
### `rates` [plugins-filters-metrics-rates]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[1, 5, 15]`

The rates that should be measured, in minutes. Possible values are 1, 5, and 15.

### `timer` [plugins-filters-metrics-timer]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

syntax: `timer => [ "name of metric", "%{{time_value}}" ]`

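As a sketch, assuming your events carry a numeric `duration` field, the following tracks timing statistics under `[request_time]` and tags the flushed metrics events for routing:

```ruby
filter {
  metrics {
    timer => { "request_time" => "%{duration}" }  # count, rates, min/max/mean/stddev, percentiles
    add_tag => "metric"                           # route flushed metrics events in the output stage
  }
}
```
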
## Common options [plugins-filters-metrics-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-metrics-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-metrics-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-metrics-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-metrics-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-metrics-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-metrics-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-metrics-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-metrics-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  metrics {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  metrics {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-metrics-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  metrics {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  metrics {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-metrics-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-metrics-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 metrics filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
metrics {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-metrics-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at regular interval. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-metrics-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Fields names can be dynamic and include parts of the event using the `%{{field}}` Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
metrics {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
metrics {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-metrics-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
metrics {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
metrics {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,600 +0,0 @@
|
|||

---
navigation_title: "mutate"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html
---

# Mutate filter plugin [plugins-filters-mutate]

* Plugin version: v3.5.8
* Released on: 2023-11-22
* [Changelog](https://github.com/logstash-plugins/logstash-filter-mutate/blob/v3.5.8/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-mutate-index.md).

## Getting help [_getting_help_155]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-mutate). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_154]

The mutate filter allows you to perform general mutations on fields. You can rename, replace, and modify fields in your events.

### Processing order [plugins-filters-mutate-proc_order]

Mutations in a config file are executed in this order:

* coerce
* rename
* update
* replace
* convert
* gsub
* uppercase
* capitalize
* lowercase
* strip
* split
* join
* merge
* copy

::::{important}
Each mutation must be in its own code block if the sequence of operations needs to be preserved.
::::

Example:

```ruby
filter {
  mutate {
    split => { "hostname" => "." }
    add_field => { "shortHostname" => "%{[hostname][0]}" }
  }

  mutate {
    rename => { "shortHostname" => "hostname" }
  }
}
```

## Mutate Filter Configuration Options [plugins-filters-mutate-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-mutate-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`convert`](#plugins-filters-mutate-convert) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`copy`](#plugins-filters-mutate-copy) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`gsub`](#plugins-filters-mutate-gsub) | [array](/reference/configuration-file-structure.md#array) | No |
| [`join`](#plugins-filters-mutate-join) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`lowercase`](#plugins-filters-mutate-lowercase) | [array](/reference/configuration-file-structure.md#array) | No |
| [`merge`](#plugins-filters-mutate-merge) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`coerce`](#plugins-filters-mutate-coerce) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`rename`](#plugins-filters-mutate-rename) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`replace`](#plugins-filters-mutate-replace) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`split`](#plugins-filters-mutate-split) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`strip`](#plugins-filters-mutate-strip) | [array](/reference/configuration-file-structure.md#array) | No |
| [`update`](#plugins-filters-mutate-update) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`uppercase`](#plugins-filters-mutate-uppercase) | [array](/reference/configuration-file-structure.md#array) | No |
| [`capitalize`](#plugins-filters-mutate-capitalize) | [array](/reference/configuration-file-structure.md#array) | No |
| [`tag_on_failure`](#plugins-filters-mutate-tag_on_failure) | [string](/reference/configuration-file-structure.md#string) | No |

Also see [Common options](#plugins-filters-mutate-common-options) for a list of options supported by all filter plugins.

### `convert` [plugins-filters-mutate-convert]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

Convert a field’s value to a different type, like turning a string to an integer. If the field value is an array, all members will be converted. If the field is a hash, no action will be taken.

::::{admonition} Conversion insights
:class: note

The values are converted using Ruby semantics. Be aware that using `float` and `float_eu` converts the value to a double-precision 64-bit IEEE 754 floating point decimal number. To maintain precision through the conversion, you should use a `double` in the Elasticsearch mappings.

::::

Valid conversion targets, and their expected behaviour with different inputs, are:

* `integer`:

    * strings are parsed; comma-separators are supported (e.g., the string `"1,000"` produces an integer with value of one thousand); when strings have decimal parts, they are *truncated*.
    * floats and decimals are *truncated* (e.g., `3.99` becomes `3`, `-2.7` becomes `-2`)
    * boolean true and boolean false are converted to `1` and `0` respectively

* `integer_eu`:

    * same as `integer`, except string values support dot-separators and comma-decimals (e.g., `"1.000"` produces an integer with value of one thousand)

* `float`:

    * integers are converted to floats
    * strings are parsed; comma-separators and dot-decimals are supported (e.g., `"1,000.5"` produces a float with value of one thousand and one half)
    * boolean true and boolean false are converted to `1.0` and `0.0` respectively

* `float_eu`:

    * same as `float`, except string values support dot-separators and comma-decimals (e.g., `"1.000,5"` produces a float with value of one thousand and one half)

* `string`:

    * all values are stringified and encoded with UTF-8

* `boolean`:

    * integer 0 is converted to boolean `false`
    * integer 1 is converted to boolean `true`
    * float 0.0 is converted to boolean `false`
    * float 1.0 is converted to boolean `true`
    * strings `"true"`, `"t"`, `"yes"`, `"y"`, `"1"` and `"1.0"` are converted to boolean `true`
    * strings `"false"`, `"f"`, `"no"`, `"n"`, `"0"` and `"0.0"` are converted to boolean `false`
    * empty strings are converted to boolean `false`
    * all other values pass straight through without conversion and log a warning message
    * for arrays each value gets processed separately using the rules above

This plugin can convert multiple fields in the same document; see the example below.

Example:

```ruby
filter {
  mutate {
    convert => {
      "fieldname" => "integer"
      "booleanfield" => "boolean"
    }
  }
}
```

### `copy` [plugins-filters-mutate-copy]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

Copy an existing field to another field. An existing target field will be overridden.

Example:

```ruby
filter {
  mutate {
    copy => { "source_field" => "dest_field" }
  }
}
```

### `gsub` [plugins-filters-mutate-gsub]

* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

Match a regular expression against a field value and replace all matches with a replacement string. Only fields that are strings or arrays of strings are supported. For other kinds of fields no action will be taken.

This configuration takes an array consisting of 3 elements per field/substitution.

Be aware that you need to escape any backslash in the config file.

Example:

```ruby
filter {
  mutate {
    gsub => [
      # replace all forward slashes with underscore
      "fieldname", "/", "_",
      # replace backslashes, question marks, hashes, and minuses
      # with a dot "."
      "fieldname2", "[\\?#-]", "."
    ]
  }
}
```

### `join` [plugins-filters-mutate-join]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

Join an array with a separator character or string. Does nothing on non-array fields.

Example:

```ruby
filter {
  mutate {
    join => { "fieldname" => "," }
  }
}
```

### `lowercase` [plugins-filters-mutate-lowercase]

* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

Convert a string to its lowercase equivalent.

Example:

```ruby
filter {
  mutate {
    lowercase => [ "fieldname" ]
  }
}
```

### `merge` [plugins-filters-mutate-merge]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

Merge two fields of arrays or hashes. String fields will automatically be converted into an array, so:

::::{admonition}
```
`array` + `string` will work
`string` + `string` will result in a 2-entry array in `dest_field`
`array` and `hash` will not work
```
::::

Example:

```ruby
filter {
  mutate {
    merge => { "dest_field" => "added_field" }
  }
}
```

### `coerce` [plugins-filters-mutate-coerce]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

Set the default value of a field that exists but is null.

Example:

```ruby
filter {
  mutate {
    # Sets the default value of the 'field1' field to 'default_value'
    coerce => { "field1" => "default_value" }
  }
}
```

### `rename` [plugins-filters-mutate-rename]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

Rename one or more fields.

If the destination field already exists, its value is replaced.

If one of the source fields doesn’t exist, no action is performed for that field. (This is not considered an error; the `tag_on_failure` tag is not applied.)

When renaming multiple fields, the order of operations is not guaranteed.

Example:

```ruby
filter {
  mutate {
    # Renames the 'HOSTORIP' field to 'client_ip'
    rename => { "HOSTORIP" => "client_ip" }
  }
}
```

### `replace` [plugins-filters-mutate-replace]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

Replace the value of a field with a new value, or add the field if it doesn’t already exist. The new value can include `%{{foo}}` strings to help you build a new value from other parts of the event.

Example:

```ruby
filter {
  mutate {
    replace => { "message" => "%{source_host}: My new message" }
  }
}
```

### `split` [plugins-filters-mutate-split]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

Split a field to an array using a separator character or string. Only works on string fields.

Example:

```ruby
filter {
  mutate {
    split => { "fieldname" => "," }
  }
}
```

### `strip` [plugins-filters-mutate-strip]

* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

Strip whitespace from a field. Note that this only removes leading and trailing whitespace.

Example:

```ruby
filter {
  mutate {
    strip => ["field1", "field2"]
  }
}
```

### `update` [plugins-filters-mutate-update]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.

Update an existing field with a new value. If the field does not exist, then no action will be taken.

Example:

```ruby
filter {
  mutate {
    update => { "sample" => "My new message" }
  }
}
```

### `uppercase` [plugins-filters-mutate-uppercase]

* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

Convert a string to its uppercase equivalent.

Example:

```ruby
filter {
  mutate {
    uppercase => [ "fieldname" ]
  }
}
```

### `capitalize` [plugins-filters-mutate-capitalize]

* Value type is [array](/reference/configuration-file-structure.md#array)
* There is no default value for this setting.

Convert a string to its capitalized equivalent.

Example:

```ruby
filter {
  mutate {
    capitalize => [ "fieldname" ]
  }
}
```

### `tag_on_failure` [plugins-filters-mutate-tag_on_failure]

* Value type is [string](/reference/configuration-file-structure.md#string)
* The default value for this setting is `_mutate_error`

If a failure occurs during the application of this mutate filter, the rest of the operations are aborted and the provided tag is added to the event.
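
For example, you can override the default tag and then route failed events with a conditional later in the pipeline. In this sketch, the `bytes` field and the tag name are illustrative:

```ruby
filter {
  mutate {
    convert => { "bytes" => "integer" }
    tag_on_failure => "_mutate_failure"
  }
}

# Elsewhere in the pipeline, failed events can be matched with:
# if "_mutate_failure" in [tags] { ... }
```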

## Common options [plugins-filters-mutate-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-mutate-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-mutate-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-mutate-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-mutate-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-mutate-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-mutate-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-mutate-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-mutate-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  mutate {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  mutate {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-mutate-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  mutate {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  mutate {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-mutate-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-mutate-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 mutate filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  mutate {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-mutate-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular interval. Optional.

### `remove_field` [plugins-filters-mutate-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  mutate {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  mutate {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-mutate-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  mutate {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  mutate {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@@ -1,318 +0,0 @@

---
navigation_title: "prune"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-prune.html
---

# Prune filter plugin [plugins-filters-prune]

* Plugin version: v3.0.4
* Released on: 2019-09-12
* [Changelog](https://github.com/logstash-plugins/logstash-filter-prune/blob/v3.0.4/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-prune-index.md).

## Getting help [_getting_help_156]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-prune). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_155]

The prune filter is for removing fields from events based on whitelists or blacklists of field names or their values (names and values can also be regular expressions).

This can, for example, be useful if you have a [json](/reference/plugins-filters-json.md) or [kv](/reference/plugins-filters-kv.md) filter that creates a number of fields whose names you don’t necessarily know beforehand, and you only want to keep a subset of them.

Usage help: To specify an exact field name or value, use the regular expression syntax `^some_name_or_value$`. Example usage: Input data `{ "msg":"hello world", "msg_short":"hw" }`

```ruby
filter {
  prune {
    whitelist_names => [ "msg" ]
  }
}
```

Allows both `"msg"` and `"msg_short"` through.

While:

```ruby
filter {
  prune {
    whitelist_names => ["^msg$"]
  }
}
```

Allows only `"msg"` through.

Logstash stores an event’s `tags` as a field which is subject to pruning. Remember to `whitelist_names => [ "^tags$" ]` to maintain `tags` after pruning, or use `blacklist_values => [ "^tag_name$" ]` to eliminate a specific `tag`. A sketch of the whitelist approach follows the note below.

::::{note}
This filter currently only supports operations on top-level fields, i.e. whitelisting and blacklisting of subfields based on name or value does not work.
::::
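
For instance, the following sketch keeps only the `msg` field and the event’s `tags`:

```ruby
filter {
  prune {
    # keep the `msg` field and preserve any tags set earlier in the pipeline
    whitelist_names => [ "^msg$", "^tags$" ]
  }
}
```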

## Prune Filter Configuration Options [plugins-filters-prune-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-prune-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`blacklist_names`](#plugins-filters-prune-blacklist_names) | [array](/reference/configuration-file-structure.md#array) | No |
| [`blacklist_values`](#plugins-filters-prune-blacklist_values) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`interpolate`](#plugins-filters-prune-interpolate) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`whitelist_names`](#plugins-filters-prune-whitelist_names) | [array](/reference/configuration-file-structure.md#array) | No |
| [`whitelist_values`](#plugins-filters-prune-whitelist_values) | [hash](/reference/configuration-file-structure.md#hash) | No |

Also see [Common options](#plugins-filters-prune-common-options) for a list of options supported by all filter plugins.

### `blacklist_names` [plugins-filters-prune-blacklist_names]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["%{[^}]+}"]`

Exclude fields whose names match the specified regexps. By default, unresolved `%{{field}}` strings are excluded.

```ruby
filter {
  prune {
    blacklist_names => [ "method", "(referrer|status)", "${some}_field" ]
  }
}
```

### `blacklist_values` [plugins-filters-prune-blacklist_values]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

Exclude specified fields if their values match one of the supplied regular expressions. In case field values are arrays, each array item is matched against the regular expressions and matching array items will be excluded.

```ruby
filter {
  prune {
    blacklist_values => [ "uripath", "/index.php",
                          "method", "(HEAD|OPTIONS)",
                          "status", "^[^2]" ]
  }
}
```

### `interpolate` [plugins-filters-prune-interpolate]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Determines whether configured field names and values should be interpolated for dynamic values (when resolving `%{{some_field}}`). This likely adds some performance overhead. Defaults to `false`.
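
As an illustrative sketch (the `suppress_field` field is an assumption, not part of the plugin), enabling `interpolate` lets a configured pattern be resolved from the event being processed:

```ruby
filter {
  prune {
    interpolate => true
    # each event names the field it wants pruned in its `suppress_field` value
    blacklist_names => [ "%{suppress_field}" ]
  }
}
```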

### `whitelist_names` [plugins-filters-prune-whitelist_names]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

Include fields only if their names match the specified regexps. The default is an empty list, which means all fields are included.

```ruby
filter {
  prune {
    whitelist_names => [ "method", "(referrer|status)", "${some}_field" ]
  }
}
```

### `whitelist_values` [plugins-filters-prune-whitelist_values]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

Include specified fields only if their values match one of the supplied regular expressions. In case field values are arrays, each array item is matched against the regular expressions and only matching array items will be included. By default all fields that are not listed in this setting are kept unless pruned by other settings.

```ruby
filter {
  prune {
    whitelist_values => [ "uripath", "/index.php",
                          "method", "(GET|POST)",
                          "status", "^[^2]" ]
  }
}
```

## Common options [plugins-filters-prune-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-prune-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-prune-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-prune-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-prune-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-prune-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-prune-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-prune-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-prune-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  prune {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  prune {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-prune-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  prune {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  prune {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-prune-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-prune-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 prune filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  prune {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-prune-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular interval. Optional.

### `remove_field` [plugins-filters-prune-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  prune {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  prune {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-prune-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  prune {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  prune {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@@ -1,249 +0,0 @@

---
navigation_title: "range"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-range.html
---

# Range filter plugin [plugins-filters-range]

* Plugin version: v3.0.3
* Released on: 2017-11-07
* [Changelog](https://github.com/logstash-plugins/logstash-filter-range/blob/v3.0.3/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-range-index.md).

## Installation [_installation_64]

For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-range`. See [Working with plugins](/reference/working-with-plugins.md) for more details.

## Getting help [_getting_help_157]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-range). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_156]

This filter is used to check that certain fields are within expected size/length ranges. Supported types are numbers and strings. Numbers are checked to be within a numeric value range. Strings are checked to be within a string length range. More than one range can be specified for the same field name; actions will be applied incrementally. When a field value is within a specified range, an action will be taken. Supported actions are drop event, add tag, or add field with a specified value.

Example use cases are histogram-like tagging of events, finding anomalous values in fields, or dropping events that are too big.

## Range Filter Configuration Options [plugins-filters-range-options]

This plugin supports the following configuration options plus the [Common options](#plugins-filters-range-common-options) described later.

| Setting | Input type | Required |
| --- | --- | --- |
| [`negate`](#plugins-filters-range-negate) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`ranges`](#plugins-filters-range-ranges) | [array](/reference/configuration-file-structure.md#array) | No |

Also see [Common options](#plugins-filters-range-common-options) for a list of options supported by all filter plugins.

### `negate` [plugins-filters-range-negate]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Negate the range match logic; events must be outside of the specified range to match.
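
For example, combined with `ranges` (described below), this sketch tags events whose `message` length is *not* between 1 and 1024; the field name, bounds, and tag are illustrative:

```ruby
filter {
  range {
    negate => true
    ranges => [ "message", 1, 1024, "tag:message_length_out_of_bounds" ]
  }
}
```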

### `ranges` [plugins-filters-range-ranges]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

An array of field, min, max, action tuples. Example:

```ruby
filter {
  range {
    ranges => [ "message", 0, 10, "tag:short",
                "message", 11, 100, "tag:medium",
                "message", 101, 1000, "tag:long",
                "message", 1001, 1e1000, "drop",
                "duration", 0, 100, "field:latency:fast",
                "duration", 101, 200, "field:latency:normal",
                "duration", 201, 1000, "field:latency:slow",
                "duration", 1001, 1e1000, "field:latency:outlier",
                "requests", 0, 10, "tag:too_few_%{host}_requests" ]
  }
}
```

Supported actions are `drop`, adding a tag, or adding a field with a specified value. Added tag names, field names, and field values can have `%{{dynamic}}` values.

## Common options [plugins-filters-range-common-options]

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
| --- | --- | --- |
| [`add_field`](#plugins-filters-range-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
| [`add_tag`](#plugins-filters-range-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
| [`enable_metric`](#plugins-filters-range-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`id`](#plugins-filters-range-id) | [string](/reference/configuration-file-structure.md#string) | No |
| [`periodic_flush`](#plugins-filters-range-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
| [`remove_field`](#plugins-filters-range-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
| [`remove_tag`](#plugins-filters-range-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |

### `add_field` [plugins-filters-range-add_field]

* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  range {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```json
# You can also add multiple fields at once:
filter {
  range {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add the field `foo_hello`, with the value above and the `%{{host}}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag` [plugins-filters-range-add_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  range {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also add multiple tags at once:
filter {
  range {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric` [plugins-filters-range-enable_metric]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id` [plugins-filters-range-id]

* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 range filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

```json
filter {
  range {
    id => "ABC"
  }
}
```

::::{note}
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
::::

### `periodic_flush` [plugins-filters-range-periodic_flush]

* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `false`

Call the filter flush method at regular interval. Optional.

### `remove_field` [plugins-filters-range-remove_field]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  range {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple fields at once:
filter {
  range {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag` [plugins-filters-range-remove_tag]

* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.

Example:

```json
filter {
  range {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```json
# You can also remove multiple tags at once:
filter {
  range {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.

@@ -1,404 +0,0 @@

---
navigation_title: "ruby"
mapped_pages:
  - https://www.elastic.co/guide/en/logstash/current/plugins-filters-ruby.html
---

# Ruby filter plugin [plugins-filters-ruby]

* Plugin version: v3.1.8
* Released on: 2022-01-24
* [Changelog](https://github.com/logstash-plugins/logstash-filter-ruby/blob/v3.1.8/CHANGELOG.md)

For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-ruby-index.md).

## Getting help [_getting_help_158]

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-ruby). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).

## Description [_description_157]

Execute ruby code. This filter accepts inline ruby code or a ruby file. The two options are mutually exclusive and have slightly different ways of working, which are described below.

::::{note}
This plugin’s concurrency-safety depends on your code. Be sure to read up on [how to avoid concurrency issues](#plugins-filters-ruby-concurrency).
::::

### Inline ruby code [plugins-filters-ruby-using-inline-script]

To add inline ruby in your filter, place all code in the `code` option. This code will be executed for every event the filter receives. You can also place ruby code in the `init` option. It will be executed only once during the plugin’s register phase.

For example, to cancel 90% of events, you can do this:

```ruby
filter {
  ruby {
    # Cancel 90% of events
    code => "event.cancel if rand <= 0.90"
  }
}
```

If you need to create additional events, you must use the specific syntax `new_event_block.call(event)`, as in this example that duplicates the input event:

```ruby
filter {
  ruby {
    code => "new_event_block.call(event.clone)"
  }
}
```

::::{note}
Defining methods in the [`code` option](#plugins-filters-ruby-code) can significantly reduce throughput. Use the [`init` option](#plugins-filters-ruby-init) instead.
::::

### Using a Ruby script file [plugins-filters-ruby-using-script-file]

Because inline code can become complex and hard to structure inside of a text string in `code`, it is preferable to place the Ruby code in a `.rb` file, using the `path` option.

```ruby
filter {
  ruby {
    # Cancel 90% of events
    path => "/etc/logstash/drop_percentage.rb"
    script_params => { "percentage" => 0.9 }
  }
}
```

The ruby script file should define the following methods:

* `register(params)`: An optional register method that receives the key/value hash passed in the `script_params` configuration option
* `filter(event)`: A mandatory Ruby method that accepts a Logstash event and must return an array of events

Below is an example implementation of the `drop_percentage.rb` ruby script that drops a configurable percentage of events:

```ruby
# the value of `params` is the value of the hash passed to `script_params`
# in the logstash configuration
def register(params)
  @drop_percentage = params["percentage"]
end

# the filter method receives an event and must return a list of events.
# Dropping an event means not including it in the return array,
# while creating new ones only requires you to add a new instance of
# LogStash::Event to the returned array
def filter(event)
  if rand >= @drop_percentage
    return [event]
  else
    return [] # return empty array to cancel event
  end
end
```

### Testing the ruby script [_testing_the_ruby_script]

To validate the behaviour of the `filter` method you implemented, the Ruby filter plugin provides an inline test framework where you can assert expectations. The tests you define will run when the pipeline is created and will prevent it from starting if a test fails.

You can also verify whether the tests pass using the logstash `-t` flag.

For the example above, you can write the following test at the bottom of the `drop_percentage.rb` ruby script:

```ruby
def register(params)
  # ..
end

def filter(event)
  # ..
end

test "drop percentage 100%" do
  parameters do
    { "percentage" => 1 }
  end

  in_event { { "message" => "hello" } }

  expect("drops the event") do |events|
    events.size == 0
  end
end
```

We can now test that the ruby script we’re using is implemented correctly:

```shell
% bin/logstash -e "filter { ruby { path => '/etc/logstash/drop_percentage.rb' script_params => { 'drop_percentage' => 0.5 } } }" -t
[2017-10-13T13:44:29,723][INFO ][logstash.filters.ruby.script] Test run complete {:script_path=>"/etc/logstash/drop_percentage.rb", :results=>{:passed=>1, :failed=>0, :errored=>0}}
Configuration OK
[2017-10-13T13:44:29,887][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
```

## Avoiding concurrency issues [plugins-filters-ruby-concurrency]

When events are flowing through a pipeline with multiple workers, a single shared instance of this filter may end up processing many events *simultaneously*. This means that your script needs to be written to avoid mutating shared state unless it is done in a thread-safe manner.

In Ruby, the name of a variable determines its scope. The following guidance may help you avoid *accidentally* mutating shared state:

* Freely use Local Variables, whose name begins with a lower-case letter or an underscore (`_`).

    * Local Variables are available only to the individual event being processed, and are automatically cleaned up.

* Exercise caution when *modifying* Instance Variables, whose names begin with `@` followed by a lower-case letter or an underscore (`_`).

    * Instance Variables are shared between *all* worker threads in this pipeline, which may be processing multiple events simultaneously.
    * It is safe to *set* Instance Variables in a [script](#plugins-filters-ruby-using-script-file)-defined `register` function or with [`init`](#plugins-filters-ruby-init), but they should not be modified while processing events unless safe-guarded by mutual exclusion (see the sketch after this list).
    * Instance Variables are *not* persisted across pipeline restarts or plugin crashes.

* *Avoid* using variables whose scope is not limited to the plugin instance, as they can cause hard-to-debug problems that span beyond the individual plugin or pipeline:

    * Class Variables: begin with `@@`.
    * Global Variables: begin with a `$`.
    * Constants: begin with a capital letter.
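
As a minimal sketch of the mutual-exclusion advice above (the counter and metadata field are illustrative), a `Mutex` created in `init` can guard an instance variable that all workers update:

```ruby
filter {
  ruby {
    # created once at register time, shared by all worker threads
    init => "@lock = Mutex.new; @events_seen = 0"
    # every access to the shared counter happens under the lock
    code => "@lock.synchronize { @events_seen += 1; event.set('[@metadata][events_seen]', @events_seen) }"
  }
}
```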
|
||||
|
||||
|
||||
|
||||
## Ruby Filter Configuration Options [plugins-filters-ruby-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-ruby-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`code`](#plugins-filters-ruby-code) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`init`](#plugins-filters-ruby-init) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`path`](#plugins-filters-ruby-path) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`script_params`](#plugins-filters-ruby-script_params) | [hash](/reference/configuration-file-structure.md#hash),{} | No |
|
||||
| [`tag_on_exception`](#plugins-filters-ruby-tag_on_exception) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`tag_with_exception_message`](#plugins-filters-ruby-tag_with_exception_message) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-ruby-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `code` [plugins-filters-ruby-code]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
* This setting cannot be used together with `path`.
|
||||
|
||||
The code to execute for every event. You will have an `event` variable available that is the event itself. See the [Event API](/reference/event-api.md) for more information.
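
For example, a minimal sketch that upper-cases the `message` field in place (the field name is an assumption for illustration):

```ruby
filter {
  ruby {
    # Read the field through the Event API, transform it, write it back.
    code => "event.set('message', event.get('message').to_s.upcase)"
  }
}
```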
|
||||
|
||||
|
||||
### `init` [plugins-filters-ruby-init]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Any code to execute at Logstash startup time.
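
A short sketch of how `init` pairs with `code`; the `@suffix` variable and field names are illustrative assumptions:

```ruby
filter {
  ruby {
    # Runs once at startup: build state for every event to reuse.
    init => "@suffix = '.internal.example'"
    # Runs per event: reference the state created by `init`.
    code => "event.set('fqdn', event.get('host').to_s + @suffix)"
  }
}
```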
|
||||
|
||||
|
||||
### `path` [plugins-filters-ruby-path]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
* This setting cannot be used together with `code`.
|
||||
|
||||
The path of the ruby script file that implements the `filter` method.
|
||||
|
||||
|
||||
### `script_params` [plugins-filters-ruby-script_params]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
A key/value hash with parameters that are passed to the register method of your ruby script file defined in `path`.
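
For instance, with the hypothetical pipeline below, the hash arrives as the argument to the script's `register` function:

```ruby
# Pipeline configuration:
#   filter {
#     ruby {
#       path          => "/etc/logstash/drop_percentage.rb"
#       script_params => { "percentage" => 0.9 }
#     }
#   }

# In drop_percentage.rb:
def register(params)
  @drop_percentage = params["percentage"]  # => 0.9
end
```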
|
||||
|
||||
|
||||
### `tag_on_exception` [plugins-filters-ruby-tag_on_exception]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `_rubyexception`
|
||||
|
||||
Tag to add to events in case the ruby code (either inline or file-based) causes an exception.
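
As a sketch, if the inline code below raises (for example, `nil.length` fails when `message` is absent), the event is tagged with the custom value instead of the default `_rubyexception`; the field and tag names are assumptions:

```ruby
filter {
  ruby {
    code             => "event.set('msg_len', event.get('message').length)"
    tag_on_exception => "_ruby_length_failure"
  }
}
```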
|
||||
|
||||
|
||||
### `tag_with_exception_message` [plugins-filters-ruby-tag_with_exception_message]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
If `true`, adds a tag to the event that is the concatenation of the [`tag_on_exception`](#plugins-filters-ruby-tag_on_exception) value and the exception message.
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-ruby-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-ruby-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-ruby-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-ruby-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-ruby-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-ruby-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-ruby-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-ruby-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-ruby-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
ruby {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
ruby {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"`, this filter would, on success, add the field `foo_hello` with the value above, and the `%{{host}}` piece would be replaced with the corresponding value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-ruby-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
ruby {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
ruby {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-ruby-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-ruby-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 ruby filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
ruby {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-ruby-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at regular intervals. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-ruby-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
ruby {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
ruby {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"`, this filter would, on success, remove the field named `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-ruby-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
ruby {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
ruby {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,276 +0,0 @@
|
|||
---
|
||||
navigation_title: "sleep"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-sleep.html
|
||||
---
|
||||
|
||||
# Sleep filter plugin [plugins-filters-sleep]
|
||||
|
||||
|
||||
* Plugin version: v3.0.7
|
||||
* Released on: 2020-09-04
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-sleep/blob/v3.0.7/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-sleep-index.md).
|
||||
|
||||
## Getting help [_getting_help_159]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-sleep). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_158]
|
||||
|
||||
Sleep a given amount of time. This will cause Logstash to stall for the given amount of time. This is useful for rate limiting, etc.
|
||||
|
||||
|
||||
## Sleep Filter Configuration Options [plugins-filters-sleep-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-sleep-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`every`](#plugins-filters-sleep-every) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`replay`](#plugins-filters-sleep-replay) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`time`](#plugins-filters-sleep-time) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-sleep-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `every` [plugins-filters-sleep-every]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `1`
|
||||
|
||||
Sleep on every Nth event. This option is ignored in replay mode.
|
||||
|
||||
Example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
sleep {
|
||||
time => "1" # Sleep 1 second
|
||||
every => 10 # on every 10th event
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
### `replay` [plugins-filters-sleep-replay]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Enable replay mode.
|
||||
|
||||
Replay mode tries to sleep based on timestamps in each event.
|
||||
|
||||
The amount of time to sleep is computed by subtracting the previous event’s timestamp from the current event’s timestamp. This helps you replay events in the same timeline as the original.
|
||||
|
||||
If you specify a `time` setting as well, this filter will use the `time` value as a speed modifier. For example, a `time` value of 2 will replay at double speed, while a value of 0.25 will replay at 1/4th speed.
|
||||
|
||||
For example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
sleep {
|
||||
time => 2
|
||||
replay => true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The above will sleep in such a way that it will perform replay 2-times faster than the original time speed.
|
||||
|
||||
|
||||
### `time` [plugins-filters-sleep-time]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
The length of time to sleep, in seconds, for every event.
|
||||
|
||||
This can be a number (for example, 0.5) or a string (for example, `%{{foo}}`). The second form (a string with a field reference) is useful if you have an attribute of your event that indicates the amount of time to sleep.
|
||||
|
||||
Example:
|
||||
|
||||
```ruby
|
||||
filter {
|
||||
sleep {
|
||||
# Sleep 1 second for every event.
|
||||
time => "1"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-sleep-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-sleep-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-sleep-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-sleep-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-sleep-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-sleep-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-sleep-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-sleep-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-sleep-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
sleep {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
sleep {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"`, this filter would, on success, add the field `foo_hello` with the value above, and the `%{{host}}` piece would be replaced with the corresponding value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-sleep-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
sleep {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
sleep {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-sleep-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-sleep-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 sleep filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
sleep {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-sleep-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at regular intervals. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-sleep-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
sleep {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
sleep {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"`, this filter would, on success, remove the field named `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-sleep-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
sleep {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
sleep {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,261 +0,0 @@
|
|||
---
|
||||
navigation_title: "split"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-split.html
|
||||
---
|
||||
|
||||
# Split filter plugin [plugins-filters-split]
|
||||
|
||||
|
||||
* Plugin version: v3.1.8
|
||||
* Released on: 2020-01-21
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-split/blob/v3.1.8/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-split-index.md).
|
||||
|
||||
## Getting help [_getting_help_160]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-split). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_159]
|
||||
|
||||
The split filter clones an event by splitting one of its fields and placing each value resulting from the split into a clone of the original event. The field being split can either be a string or an array.
|
||||
|
||||
An example use case of this filter is taking output from the [exec input plugin](/reference/plugins-inputs-exec.md), which emits one event for the whole output of a command, and splitting that output by newline so that each line becomes an event.
|
||||
|
||||
The split filter can also be used to split array fields in events into individual events. A very common pattern in JSON and XML is to use lists to group data together.
|
||||
|
||||
For example, a JSON structure like this:
|
||||
|
||||
```js
|
||||
{ field1: ...,
|
||||
results: [
|
||||
{ result ... },
|
||||
{ result ... },
|
||||
{ result ... },
|
||||
...
|
||||
] }
|
||||
```
|
||||
|
||||
The split filter can be used on the above data to create separate events for each value of the `results` field:
|
||||
|
||||
```js
|
||||
filter {
|
||||
split {
|
||||
field => "results"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The end result of each split is a complete copy of the event with only the current split section of the given field changed.
|
||||
|
||||
|
||||
## Split Filter Configuration Options [plugins-filters-split-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-split-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`field`](#plugins-filters-split-field) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`target`](#plugins-filters-split-target) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`terminator`](#plugins-filters-split-terminator) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-split-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `field` [plugins-filters-split-field]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"message"`
|
||||
|
||||
The field whose value is split by the terminator. Can be a multiline message or the ID of an array. Nested arrays are referenced like: `[object_id][array_id]`
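
A hypothetical sketch of a nested array reference, combined with [`target`](#plugins-filters-split-target):

```ruby
filter {
  split {
    # One clone per element of [response][items]; each element lands in `item`.
    field  => "[response][items]"
    target => "item"
  }
}
```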
|
||||
|
||||
|
||||
### `target` [plugins-filters-split-target]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
The field within the new event into which the value is split. If not set, the target field defaults to the split field name.
|
||||
|
||||
|
||||
### `terminator` [plugins-filters-split-terminator]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value is `"\n"`
|
||||
|
||||
The string to split on. This is usually a line terminator, but can be any string. If you are splitting a JSON array into multiple events, you can ignore this field.
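
For example, a sketch that produces one event per comma-separated entry in `message` (the field and data shape are assumptions):

```ruby
filter {
  split {
    field      => "message"
    terminator => ","   # instead of the default newline
  }
}
```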
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-split-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-split-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-split-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-split-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-split-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-split-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-split-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-split-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-split-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
split {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
split {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"`, this filter would, on success, add the field `foo_hello` with the value above, and the `%{{host}}` piece would be replaced with the corresponding value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-split-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
split {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
split {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-split-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-split-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 split filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
split {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-split-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at regular intervals. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-split-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
split {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
split {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"`, this filter would, on success, remove the field named `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-split-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
split {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
split {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,266 +0,0 @@
|
|||
---
|
||||
navigation_title: "syslog_pri"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-syslog_pri.html
|
||||
---
|
||||
|
||||
# Syslog_pri filter plugin [plugins-filters-syslog_pri]
|
||||
|
||||
|
||||
* Plugin version: v3.2.1
|
||||
* Released on: 2024-01-17
|
||||
* [Changelog](https://github.com/logstash-plugins/logstash-filter-syslog_pri/blob/v3.2.1/CHANGELOG.md)
|
||||
|
||||
For other versions, see the [Versioned plugin docs](logstash-docs://reference/filter-syslog_pri-index.md).
|
||||
|
||||
## Getting help [_getting_help_161]
|
||||
|
||||
For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [Github](https://github.com/logstash-plugins/logstash-filter-syslog_pri). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#logstash_plugins).
|
||||
|
||||
|
||||
## Description [_description_160]
|
||||
|
||||
Filter plugin for Logstash to parse the `PRI` field from the front of a Syslog (RFC3164) message. The PRI value encodes the facility and severity as `facility * 8 + severity`, so a PRI of 13 corresponds to facility 1 (user-level) and severity 5 (notice). If no priority is set, it defaults to 13 (per the RFC).
|
||||
|
||||
This filter is based on the original `syslog.rb` code shipped with Logstash.
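
As a sketch of typical usage (the grok pattern and field names are assumptions, not part of this plugin): first extract the `<PRI>` prefix into the expected field, then let this filter decode it:

```ruby
filter {
  grok {
    # Capture the numeric priority and the rest of the line.
    match => { "message" => "<%{NONNEGINT:syslog_pri}>%{GREEDYDATA:syslog_message}" }
  }
  # With ECS compatibility disabled, the filter reads `syslog_pri` by default
  # and emits facility/severity codes plus their labels.
  syslog_pri { }
}
```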
|
||||
|
||||
|
||||
## Syslog_pri Filter Configuration Options [plugins-filters-syslog_pri-options]
|
||||
|
||||
This plugin supports the following configuration options plus the [Common options](#plugins-filters-syslog_pri-common-options) described later.
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`ecs_compatibility`](#plugins-filters-syslog_pri-ecs_compatibility) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`facility_labels`](#plugins-filters-syslog_pri-facility_labels) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`severity_labels`](#plugins-filters-syslog_pri-severity_labels) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`syslog_pri_field_name`](#plugins-filters-syslog_pri-syslog_pri_field_name) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`use_labels`](#plugins-filters-syslog_pri-use_labels) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
|
||||
Also see [Common options](#plugins-filters-syslog_pri-common-options) for a list of options supported by all filter plugins.
|
||||
|
||||
|
||||
|
||||
### `ecs_compatibility` [plugins-filters-syslog_pri-ecs_compatibility]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Supported values are:
|
||||
|
||||
* `disabled`: does not use ECS-compatible field names (for example, `syslog_severity_code` for syslog severity)
|
||||
* `v1`, `v8`: uses fields that are compatible with Elastic Common Schema (for example, `[log][syslog][severity][code]`)
|
||||
|
||||
* Default value depends on which version of Logstash is running:
|
||||
|
||||
* When Logstash provides a `pipeline.ecs_compatibility` setting, its value is used as the default
|
||||
* Otherwise, the default value is `disabled`.
|
||||
|
||||
|
||||
Controls this plugin’s compatibility with the [Elastic Common Schema (ECS)](ecs://reference/index.md). The value of this setting affects the *default* value of [`syslog_pri_field_name`](#plugins-filters-syslog_pri-syslog_pri_field_name).
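
For example, a sketch that pins the behavior regardless of the pipeline-level default:

```ruby
filter {
  syslog_pri {
    # Emit ECS field names such as [log][syslog][severity][code].
    ecs_compatibility => "v8"
  }
}
```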
|
||||
|
||||
|
||||
### `facility_labels` [plugins-filters-syslog_pri-facility_labels]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `["kernel", "user-level", "mail", "daemon", "security/authorization", "syslogd", "line printer", "network news", "uucp", "clock", "security/authorization", "ftp", "ntp", "log audit", "log alert", "clock", "local0", "local1", "local2", "local3", "local4", "local5", "local6", "local7"]`
|
||||
|
||||
Labels for facility levels. This comes from RFC3164. If an unrecognized facility code is provided and [`use_labels`](#plugins-filters-syslog_pri-use_labels) is `true` then the event is tagged with `_syslogpriparsefailure`.
|
||||
|
||||
|
||||
### `severity_labels` [plugins-filters-syslog_pri-severity_labels]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `["emergency", "alert", "critical", "error", "warning", "notice", "informational", "debug"]`
|
||||
|
||||
Labels for severity levels. This comes from RFC3164.
|
||||
|
||||
|
||||
### `syslog_pri_field_name` [plugins-filters-syslog_pri-syslog_pri_field_name]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* Default value depends on whether [`ecs_compatibility`](#plugins-filters-syslog_pri-ecs_compatibility) is enabled:
|
||||
|
||||
* ECS Compatibility disabled: `"syslog_pri"`
|
||||
* ECS Compatibility enabled: `"[log][syslog][priority]"`
|
||||
|
||||
|
||||
Name of the field that contains the extracted PRI part of the syslog message.
|
||||
|
||||
|
||||
### `use_labels` [plugins-filters-syslog_pri-use_labels]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Add human-readable names after parsing the severity and facility from the PRI.
|
||||
|
||||
|
||||
|
||||
## Common options [plugins-filters-syslog_pri-common-options]
|
||||
|
||||
These configuration options are supported by all filter plugins:
|
||||
|
||||
| Setting | Input type | Required |
|
||||
| --- | --- | --- |
|
||||
| [`add_field`](#plugins-filters-syslog_pri-add_field) | [hash](/reference/configuration-file-structure.md#hash) | No |
|
||||
| [`add_tag`](#plugins-filters-syslog_pri-add_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`enable_metric`](#plugins-filters-syslog_pri-enable_metric) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`id`](#plugins-filters-syslog_pri-id) | [string](/reference/configuration-file-structure.md#string) | No |
|
||||
| [`periodic_flush`](#plugins-filters-syslog_pri-periodic_flush) | [boolean](/reference/configuration-file-structure.md#boolean) | No |
|
||||
| [`remove_field`](#plugins-filters-syslog_pri-remove_field) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
| [`remove_tag`](#plugins-filters-syslog_pri-remove_tag) | [array](/reference/configuration-file-structure.md#array) | No |
|
||||
|
||||
### `add_field` [plugins-filters-syslog_pri-add_field]
|
||||
|
||||
* Value type is [hash](/reference/configuration-file-structure.md#hash)
|
||||
* Default value is `{}`
|
||||
|
||||
If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
syslog_pri {
|
||||
add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple fields at once:
|
||||
filter {
|
||||
syslog_pri {
|
||||
add_field => {
|
||||
"foo_%{somefield}" => "Hello world, from %{host}"
|
||||
"new_field" => "new_static_value"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"`, this filter would, on success, add the field `foo_hello` with the value above, and the `%{{host}}` piece would be replaced with the corresponding value from the event. The second example would also add a hardcoded field.
|
||||
|
||||
|
||||
### `add_tag` [plugins-filters-syslog_pri-add_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
syslog_pri {
|
||||
add_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also add multiple tags at once:
|
||||
filter {
|
||||
syslog_pri {
|
||||
add_tag => [ "foo_%{somefield}", "taggedy_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
|
||||
|
||||
|
||||
### `enable_metric` [plugins-filters-syslog_pri-enable_metric]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `true`
|
||||
|
||||
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
|
||||
|
||||
|
||||
### `id` [plugins-filters-syslog_pri-id]
|
||||
|
||||
* Value type is [string](/reference/configuration-file-structure.md#string)
|
||||
* There is no default value for this setting.
|
||||
|
||||
Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 syslog_pri filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
|
||||
|
||||
```json
|
||||
filter {
|
||||
syslog_pri {
|
||||
id => "ABC"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
::::{note}
|
||||
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
|
||||
::::
|
||||
|
||||
|
||||
|
||||
### `periodic_flush` [plugins-filters-syslog_pri-periodic_flush]
|
||||
|
||||
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
|
||||
* Default value is `false`
|
||||
|
||||
Call the filter flush method at regular intervals. Optional.
|
||||
|
||||
|
||||
### `remove_field` [plugins-filters-syslog_pri-remove_field]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{{field}}` syntax. Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
syslog_pri {
|
||||
remove_field => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple fields at once:
|
||||
filter {
|
||||
syslog_pri {
|
||||
remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"`, this filter would, on success, remove the field named `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
|
||||
|
||||
|
||||
### `remove_tag` [plugins-filters-syslog_pri-remove_tag]
|
||||
|
||||
* Value type is [array](/reference/configuration-file-structure.md#array)
|
||||
* Default value is `[]`
|
||||
|
||||
If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{{field}}` syntax.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
filter {
|
||||
syslog_pri {
|
||||
remove_tag => [ "foo_%{somefield}" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
# You can also remove multiple tags at once:
|
||||
filter {
|
||||
syslog_pri {
|
||||
remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.
|
||||
|
||||
|
||||
|
|
@ -1,32 +0,0 @@
|
|||
---
|
||||
navigation_title: "threats_classifier"
|
||||
mapped_pages:
|
||||
- https://www.elastic.co/guide/en/logstash/current/plugins-filters-threats_classifier.html
|
||||
---
|
||||
|
||||
# Threats_classifier filter plugin [plugins-filters-threats_classifier]
|
||||
|
||||
|
||||
* This plugin was created and is maintained by a partner.
|
||||
* [Change log](https://github.com/empow/logstash-filter-empow-classifier/blob/master/CHANGELOG.md)
|
||||
|
||||
## Installation [_installation_65]
|
||||
|
||||
For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-filter-threats_classifier`. See [Working with plugins](/reference/working-with-plugins.md) for more details.
|
||||
|
||||
|
||||
## Description [_description_161]
|
||||
|
||||
This plugin uses the cyber-kill-chain and MITRE representation language to enrich security logs with information about the attacker’s intent—what the attacker is trying to achieve, who they are targeting, and how they plan to carry out the attack.
|
||||
|
||||
|
||||
## Documentation [_documentation_3]
|
||||
|
||||
Documentation for the [filter-threats_classifier plugin](https://github.com/empow/logstash-filter-empow-classifier/blob/master/README.md) is maintained by the creators.
|
||||
|
||||
|
||||
## Getting Help [_getting_help_162]
|
||||
|
||||
This is a third-party plugin. For bugs or feature requests, open an issue in the [plugins-filters-threats_classifier Github repo](https://github.com/empow/logstash-filter-empow-classifier).
|
||||
|
||||
|