Doc: Fix external links (#17288)

Colleen McGinnis 2025-03-06 12:38:31 -06:00 committed by GitHub
parent feb2b92ba2
commit cb6886814c
31 changed files with 63 additions and 63 deletions

View file

@@ -305,7 +305,7 @@ Deprecations are noted in the `logstash-deprecation.log` file in the `log` direc
Gemfiles allow Ruby's Bundler to maintain the dependencies for your plugin. Currently, all we'll need is the Logstash gem, for testing, but if you require other gems, you should add them in here.
::::{tip}
-See [Bundler's Gemfile page](http://bundler.io/gemfile.md) for more details.
+See [Bundler's Gemfile page](http://bundler.io/gemfile.html) for more details.
::::
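For readers new to plugin Gemfiles, a minimal sketch of the kind of file this passage describes might look like the following; the commented gem is an illustrative assumption, not a requirement.

```ruby
# Gemfile (illustrative sketch)
source "https://rubygems.org"

# Pull in the dependencies declared in the plugin's gemspec,
# including the Logstash plugin API used for testing.
gemspec

# Additional runtime gems would be listed here, for example:
# gem "stud", "~> 0.0.23"
```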

View file

@@ -306,7 +306,7 @@ Deprecations are noted in the `logstash-deprecation.log` file in the `log` direc
Gemfiles allow Ruby's Bundler to maintain the dependencies for your plugin. Currently, all we'll need is the Logstash gem, for testing, but if you require other gems, you should add them in here.
::::{tip}
-See [Bundler's Gemfile page](http://bundler.io/gemfile.md) for more details.
+See [Bundler's Gemfile page](http://bundler.io/gemfile.html) for more details.
::::

View file

@@ -346,7 +346,7 @@ Deprecations are noted in the `logstash-deprecation.log` file in the `log` direc
Gemfiles allow Ruby's Bundler to maintain the dependencies for your plugin. Currently, all we'll need is the Logstash gem, for testing, but if you require other gems, you should add them in here.
::::{tip}
-See [Bundler's Gemfile page](http://bundler.io/gemfile.md) for more details.
+See [Bundler's Gemfile page](http://bundler.io/gemfile.html) for more details.
::::

View file

@@ -263,7 +263,7 @@ Deprecations are noted in the `logstash-deprecation.log` file in the `log` direc
Gemfiles allow Ruby's Bundler to maintain the dependencies for your plugin. Currently, all we'll need is the Logstash gem, for testing, but if you require other gems, you should add them in here.
::::{tip}
-See [Bundler's Gemfile page](http://bundler.io/gemfile.md) for more details.
+See [Bundler's Gemfile page](http://bundler.io/gemfile.html) for more details.
::::

View file

@@ -59,7 +59,7 @@ output {
Similarly, you can convert the UTC timestamp in the `@timestamp` field into a string.
-Instead of specifying a field name inside the curly braces, use the `%{{FORMAT}}` syntax where `FORMAT` is a [java time format](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/format/DateTimeFormatter.md#patterns).
+Instead of specifying a field name inside the curly braces, use the `%{{FORMAT}}` syntax where `FORMAT` is a [java time format](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/format/DateTimeFormatter.html#patterns).
For example, if you want to use the file output to write logs based on the event's UTC date and hour and the `type` field:
@@ -72,7 +72,7 @@ output {
```
::::{note}
-The sprintf format continues to support [deprecated joda time format](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.md) strings as well using the `%{+FORMAT}` syntax. These formats are not directly interchangeable, and we advise you to begin using the more modern Java Time format.
+The sprintf format continues to support [deprecated joda time format](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html) strings as well using the `%{+FORMAT}` syntax. These formats are not directly interchangeable, and we advise you to begin using the more modern Java Time format.
::::
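As a hedged illustration of the sprintf date syntax this hunk documents, a file output that splits files by the event's `type` field plus its UTC date and hour could look like this sketch (the path and field name are assumptions):

```ruby
output {
  file {
    # %{type} references an event field; %{{yyyy.MM.dd.HH}} is a Java time format
    path => "/var/log/%{type}.%{{yyyy.MM.dd.HH}}.log"
  }
}
```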

View file

@@ -23,7 +23,7 @@ This section includes the following topics:
* Java 17 (default). Check out [Using JDK 17](#jdk17-upgrade) for settings info.
* Java 21
-Use the [official Oracle distribution](http://www.oracle.com/technetwork/java/javase/downloads/index.md) or an open-source distribution, such as [OpenJDK](http://openjdk.java.net/). See the [Elastic Support Matrix](https://www.elastic.co/support/matrix#matrix_jvm) for the official word on supported versions across releases.
+Use the [official Oracle distribution](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or an open-source distribution, such as [OpenJDK](http://openjdk.java.net/). See the [Elastic Support Matrix](https://www.elastic.co/support/matrix#matrix_jvm) for the official word on supported versions across releases.
::::{admonition} Bundled JDK
:class: note

View file

@@ -18,7 +18,7 @@ You can configure logging using the `log4j2.properties` file or the Logstash API
## Log4j2 configuration [log4j2]
-Logstash ships with a `log4j2.properties` file with out-of-the-box settings, including logging to console. You can modify this file to change the rotation policy, type, and other [log4j2 configuration](https://logging.apache.org/log4j/2.x/manual/configuration.md#Loggers).
+Logstash ships with a `log4j2.properties` file with out-of-the-box settings, including logging to console. You can modify this file to change the rotation policy, type, and other [log4j2 configuration](https://logging.apache.org/log4j/2.x/manual/configuration.html#Loggers).
You must restart Logstash to apply any changes that you make to this file. Changes to `log4j2.properties` persist after Logstash is restarted.

View file

@@ -373,7 +373,7 @@ Available variables are:
`event`: current Logstash event
-`map`: aggregated map associated to `task_id`, containing key/value pairs. Data structure is a ruby [Hash](http://ruby-doc.org/core-1.9.1/Hash.md)
+`map`: aggregated map associated to `task_id`, containing key/value pairs. Data structure is a ruby [Hash](http://ruby-doc.org/core-1.9.1/Hash.html)
`map_meta`: meta informations associated to aggregate map. It allows to set a custom `timeout` or `inactivity_timeout`. It allows also to get `creation_timestamp`, `lastevent_timestamp` and `task_id`.
@@ -406,7 +406,7 @@ To create additional events during the code execution, to be emitted immediately
}
```
-The parameter of the function `new_event_block.call` must be of type `LogStash::Event`. To create such an object, the constructor of the same class can be used: `LogStash::Event.new()`. `LogStash::Event.new()` can receive a parameter of type ruby [Hash](http://ruby-doc.org/core-1.9.1/Hash.md) to initialize the new event fields.
+The parameter of the function `new_event_block.call` must be of type `LogStash::Event`. To create such an object, the constructor of the same class can be used: `LogStash::Event.new()`. `LogStash::Event.new()` can receive a parameter of type ruby [Hash](http://ruby-doc.org/core-1.9.1/Hash.html) to initialize the new event fields.
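To ground the `map`, `event`, and `new_event_block.call` variables described in these hunks, here is a minimal, hypothetical aggregate filter sketch; the `transaction_id` correlation field and the emitted field names are assumptions.

```ruby
filter {
  aggregate {
    task_id => "%{transaction_id}"   # assumed correlation field
    code => "
      map['count'] ||= 0
      map['count'] += 1
      # Emit an extra event right away; the argument must be a LogStash::Event,
      # and its constructor accepts a ruby Hash of initial fields.
      new_event_block.call(LogStash::Event.new({ 'transaction_id' => event.get('transaction_id'), 'running_count' => map['count'] }))
    "
  }
}
```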
### `end_of_task` [plugins-filters-aggregate-end_of_task]

View file

@@ -193,7 +193,7 @@ Z
: Timezone offset structured as HH:mm (colon in between hour and minute offsets). Example: `-07:00`.
ZZZ
-: Timezone identity. Example: `America/Los_Angeles`. Note: Valid IDs are listed on the [Joda.org available time zones page](http://joda-time.sourceforge.net/timezones.md).
+: Timezone identity. Example: `America/Los_Angeles`. Note: Valid IDs are listed on the [Joda.org available time zones page](http://joda-time.sourceforge.net/timezones.html).
z
@@ -227,7 +227,7 @@ E
For non-formatting syntax, you'll need to put single-quote characters around the value. For example, if you were parsing ISO8601 time, "2015-01-01T01:12:23" that little "T" isn't a valid time format, and you want to say "literally, a T", your format would be this: "yyyy-MM-dd'T'HH:mm:ss"
-Other less common date units, such as era (G), century (C), am/pm (a), and # more, can be learned about on the [joda-time documentation](http://www.joda.org/joda-time/key_format.md).
+Other less common date units, such as era (G), century (C), am/pm (a), and # more, can be learned about on the [joda-time documentation](http://www.joda.org/joda-time/key_format.html).
### `tag_on_failure` [plugins-filters-date-tag_on_failure]
@@ -251,7 +251,7 @@ Store the matching timestamp into the given target field. If not provided, defa
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
-Specify a time zone canonical ID to be used for date parsing. The valid IDs are listed on the [Joda.org available time zones page](http://joda-time.sourceforge.net/timezones.md). This is useful in case the time zone cannot be extracted from the value, and is not the platform default. If this is not specified the platform default will be used. Canonical ID is good as it takes care of daylight saving time for you For example, `America/Los_Angeles` or `Europe/Paris` are valid IDs. This field can be dynamic and include parts of the event using the `%{{field}}` syntax
+Specify a time zone canonical ID to be used for date parsing. The valid IDs are listed on the [Joda.org available time zones page](http://joda-time.sourceforge.net/timezones.html). This is useful in case the time zone cannot be extracted from the value, and is not the platform default. If this is not specified the platform default will be used. Canonical ID is good as it takes care of daylight saving time for you For example, `America/Los_Angeles` or `Europe/Paris` are valid IDs. This field can be dynamic and include parts of the event using the `%{{field}}` syntax
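A hedged sketch of the `timezone` option described here, assuming a source field named `log_time` and the pattern shown:

```ruby
filter {
  date {
    match    => ["log_time", "yyyy-MM-dd HH:mm:ss"]  # assumed source field and pattern
    timezone => "America/Los_Angeles"                # canonical ID, handles daylight saving time
    target   => "@timestamp"
  }
}
```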

View file

@@ -447,7 +447,7 @@ name
: The name of the table to be created in the database.
columns
-: An array of column specifications. Each column specification is an array of exactly two elements, for example `["ip", "varchar(15)"]`. The first element is the column name string. The second element is a string that is an [Apache Derby SQL type](https://db.apache.org/derby/docs/10.14/ref/crefsqlj31068.md). The string content is checked when the local lookup tables are built, not when the settings are validated. Therefore, any misspelled SQL type strings result in errors.
+: An array of column specifications. Each column specification is an array of exactly two elements, for example `["ip", "varchar(15)"]`. The first element is the column name string. The second element is a string that is an [Apache Derby SQL type](https://db.apache.org/derby/docs/10.14/ref/crefsqlj31068.html). The string content is checked when the local lookup tables are built, not when the settings are validated. Therefore, any misspelled SQL type strings result in errors.
index_columns
: An array of strings. Each string must be defined in the `columns` setting. The index name will be generated internally. Unique or sorted indexes are not supported.
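To make the column specification concrete, a hypothetical `local_db_objects` entry might look like the sketch below; the loaders, lookups, and JDBC connection settings that a real jdbc_static configuration needs are omitted, and the table and column names are assumptions.

```ruby
filter {
  jdbc_static {
    local_db_objects => [
      {
        name => "servers"
        index_columns => ["ip"]
        columns => [
          ["ip", "varchar(15)"],     # column name plus Apache Derby SQL type
          ["descr", "varchar(255)"]
        ]
      }
    ]
  }
}
```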

View file

@@ -214,7 +214,7 @@ The default, `900`, means check every 15 minutes. Setting this value too low (ge
* Value type is [array](/reference/configuration-file-structure.md#array)
* Default value is `["CPUUtilization", "DiskReadOps", "DiskWriteOps", "NetworkIn", "NetworkOut"]`
-Specify the metrics to fetch for the namespace. The defaults are AWS/EC2 specific. See [http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.md) for the available metrics for other namespaces.
+Specify the metrics to fetch for the namespace. The defaults are AWS/EC2 specific. See [http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html) for the available metrics for other namespaces.
### `namespace` [plugins-inputs-cloudwatch-namespace]
@@ -224,7 +224,7 @@ Specify the metrics to fetch for the namespace. The defaults are AWS/EC2 specifi
If undefined, LogStash will complain, even if codec is unused. The service namespace of the metrics to fetch.
-The default is for the EC2 service. See [http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.md) for valid values.
+The default is for the EC2 service. See [http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html) for valid values.
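As a rough illustration of the `metrics` and `namespace` options discussed in these hunks, a minimal cloudwatch input might be sketched like this; the region and metric subset are assumptions.

```ruby
input {
  cloudwatch {
    namespace => "AWS/EC2"                        # default service namespace
    metrics   => ["CPUUtilization", "NetworkIn"]  # subset of the default metric list
    region    => "us-east-1"                      # assumed region
    interval  => 900                              # check every 15 minutes, matching the documented default
  }
}
```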
### `period` [plugins-inputs-cloudwatch-period]
@@ -258,7 +258,7 @@ The AWS Region
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
-The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.md) for more information.
+The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) for more information.
### `role_session_name` [plugins-inputs-cloudwatch-role_session_name]

View file

@@ -20,7 +20,7 @@ For questions about the plugin, open a topic in the [Discuss](http://discuss.ela
## Description [_description_12]
-This CouchDB input allows you to automatically stream events from the CouchDB [_changes](http://guide.couchdb.org/draft/notifications.md) URI. Moreover, any "future" changes will automatically be streamed as well making it easy to synchronize your CouchDB data with any target destination
+This CouchDB input allows you to automatically stream events from the CouchDB [_changes](http://guide.couchdb.org/draft/notifications.html) URI. Moreover, any "future" changes will automatically be streamed as well making it easy to synchronize your CouchDB data with any target destination
### Upsert and delete [_upsert_and_delete]

View file

@@ -22,7 +22,7 @@ For questions about the plugin, open a topic in the [Discuss](http://discuss.ela
Read events from a Jms Broker. Supports both Jms Queues and Topics.
-For more information about Jms, see [https://javaee.github.io/tutorial/jms-concepts.html](https://javaee.github.io/tutorial/jms-concepts.md). For more information about the Ruby Gem used, see [http://github.com/reidmorrison/jruby-jms](http://github.com/reidmorrison/jruby-jms).
+For more information about Jms, see [https://javaee.github.io/tutorial/jms-concepts.html](https://javaee.github.io/tutorial/jms-concepts.html). For more information about the Ruby Gem used, see [http://github.com/reidmorrison/jruby-jms](http://github.com/reidmorrison/jruby-jms).
JMS configurations can be done either entirely in the Logstash configuration file, or in a mixture of the Logstash configuration file, and a specified yaml file. Simple configurations that do not need to make calls to implementation specific methods on the connection factory can be specified entirely in the Logstash configuration, whereas more complex configurations, should also use the combination of yaml file and Logstash configuration.

View file

@@ -48,9 +48,9 @@ Logstash instances by default form a single logical group to subscribe to Kafka
Ideally you should have as many threads as the number of partitions for a perfect balance; more threads than partitions means that some threads will be idle
-For more information see [https://kafka.apache.org/38/documentation.html#theconsumer](https://kafka.apache.org/38/documentation.md#theconsumer)
+For more information see [https://kafka.apache.org/38/documentation.html#theconsumer](https://kafka.apache.org/38/documentation.html#theconsumer)
-Kafka consumer configuration: [https://kafka.apache.org/38/documentation.html#consumerconfigs](https://kafka.apache.org/38/documentation.md#consumerconfigs)
+Kafka consumer configuration: [https://kafka.apache.org/38/documentation.html#consumerconfigs](https://kafka.apache.org/38/documentation.html#consumerconfigs)
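A hedged sketch of the consumer-group behavior described above, assuming a single topic with four partitions; the broker address and topic name are illustrative.

```ruby
input {
  kafka {
    bootstrap_servers => "kafka1:9092"   # assumed broker list
    topics            => ["logs"]        # assumed topic
    group_id          => "logstash"      # instances sharing this id form one logical consumer group
    consumer_threads  => 4               # ideally equal to the partition count
    decorate_events   => "basic"         # adds the [@metadata][kafka] fields described below
  }
}
```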
## Metadata fields [_metadata_fields]
@@ -62,7 +62,7 @@ The following metadata from Kafka broker are added under the `[@metadata]` field
* `[@metadata][kafka][partition]`: Partition info for this message.
* `[@metadata][kafka][offset]`: Original record offset for this message.
* `[@metadata][kafka][key]`: Record key, if any.
-* `[@metadata][kafka][timestamp]`: Timestamp in the Record. Depending on your broker configuration, this can be either when the record was created (default) or when it was received by the broker. See more about property log.message.timestamp.type at [https://kafka.apache.org/38/documentation.html#brokerconfigs](https://kafka.apache.org/38/documentation.md#brokerconfigs)
+* `[@metadata][kafka][timestamp]`: Timestamp in the Record. Depending on your broker configuration, this can be either when the record was created (default) or when it was received by the broker. See more about property log.message.timestamp.type at [https://kafka.apache.org/38/documentation.html#brokerconfigs](https://kafka.apache.org/38/documentation.html#brokerconfigs)
Metadata is only added to the event if the `decorate_events` option is set to `basic` or `extended` (it defaults to `none`).
@@ -384,7 +384,7 @@ Please note that specifying `jaas_path` and `kerberos_config` in the config file
* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.
-Optional path to kerberos config file. This is krb5.conf style as detailed in [https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.md)
+Optional path to kerberos config file. This is krb5.conf style as detailed in [https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html)
### `key_deserializer_class` [plugins-inputs-kafka-key_deserializer_class]
@@ -439,7 +439,7 @@ The name of the partition assignment strategy that the client uses to distribute
* `sticky`
* `cooperative_sticky`
-These map to Kafka's corresponding [`ConsumerPartitionAssignor`](https://kafka.apache.org/38/javadoc/org/apache/kafka/clients/consumer/ConsumerPartitionAssignor.md) implementations.
+These map to Kafka's corresponding [`ConsumerPartitionAssignor`](https://kafka.apache.org/38/javadoc/org/apache/kafka/clients/consumer/ConsumerPartitionAssignor.html) implementations.
### `poll_timeout_ms` [plugins-inputs-kafka-poll_timeout_ms]
@@ -581,7 +581,7 @@ The Kerberos principal name that Kafka broker runs as. This can be defined eithe
* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"GSSAPI"`
-[SASL mechanism](http://kafka.apache.org/documentation.md#security_sasl) used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
+[SASL mechanism](http://kafka.apache.org/documentation.html#security_sasl) used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
### `schema_registry_key` [plugins-inputs-kafka-schema_registry_key]

View file

@@ -25,7 +25,7 @@ For questions about the plugin, open a topic in the [Discuss](http://discuss.ela
## Description [_description_36]
-You can use this plugin to receive events through [AWS Kinesis](http://docs.aws.amazon.com/kinesis/latest/dev/introduction.md). This plugin uses the [Java Kinesis Client Library](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-java.md). The documentation at [https://github.com/awslabs/amazon-kinesis-client](https://github.com/awslabs/amazon-kinesis-client) will be useful.
+You can use this plugin to receive events through [AWS Kinesis](http://docs.aws.amazon.com/kinesis/latest/dev/introduction.html). This plugin uses the [Java Kinesis Client Library](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-java.html). The documentation at [https://github.com/awslabs/amazon-kinesis-client](https://github.com/awslabs/amazon-kinesis-client) will be useful.
AWS credentials can be specified either through environment variables, or an IAM instance role. The library uses a DynamoDB table for worker coordination, so you'll need to grant access to that as well as to the Kinesis stream. The DynamoDB table has the same name as the `application_name` configuration option, which defaults to "logstash".
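For orientation, a minimal kinesis input along the lines described here might look like the following sketch; the stream name and region are assumptions.

```ruby
input {
  kinesis {
    kinesis_stream_name => "my-log-stream"   # assumed stream name
    application_name    => "logstash"        # also names the DynamoDB coordination table
    region              => "us-east-1"       # assumed region
  }
}
```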
@@ -51,7 +51,7 @@ If you want to read a CloudWatch Logs subscription stream, you'll also need to
## Authentication [plugins-inputs-kinesis-authentication]
-This plugin uses the default AWS SDK auth chain, [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.md), to determine which credentials the client will use, unless `profile` is set, in which case [ProfileCredentialsProvider](http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/profile/ProfileCredentialsProvider.md) is used.
+This plugin uses the default AWS SDK auth chain, [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html), to determine which credentials the client will use, unless `profile` is set, in which case [ProfileCredentialsProvider](http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/profile/ProfileCredentialsProvider.html) is used.
The default chain reads the credentials in this order:
@@ -65,7 +65,7 @@ The credentials need access to the following services:
* AWS DynamoDB. The client library stores information for worker coordination in DynamoDB (offsets and active worker per partition)
* AWS CloudWatch. If the metrics are enabled the credentials need CloudWatch update permissions granted.
-See the [AWS documentation](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.md) for more information on the default chain.
+See the [AWS documentation](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html) for more information on the default chain.
## Kinesis Input Configuration Options [plugins-inputs-kinesis-options]

View file

@@ -25,7 +25,7 @@ Pull events from a [RabbitMQ](http://www.rabbitmq.com/) queue.
The default settings will create an entirely transient queue and listen for all messages by default. If you need durability or any other advanced settings, please set the appropriate options
-This plugin uses the [March Hare](http://rubymarchhare.info/) library for interacting with the RabbitMQ server. Most configuration options map directly to standard RabbitMQ and AMQP concepts. The [AMQP 0-9-1 reference guide](https://www.rabbitmq.com/amqp-0-9-1-reference.md) and other parts of the RabbitMQ documentation are useful for deeper understanding.
+This plugin uses the [March Hare](http://rubymarchhare.info/) library for interacting with the RabbitMQ server. Most configuration options map directly to standard RabbitMQ and AMQP concepts. The [AMQP 0-9-1 reference guide](https://www.rabbitmq.com/amqp-0-9-1-reference.html) and other parts of the RabbitMQ documentation are useful for deeper understanding.
The properties of messages received will be stored in the `[@metadata][rabbitmq_properties]` field if the `@metadata_enabled` setting is enabled. Note that storing metadata may degrade performance. The following properties may be available (in most cases dependent on whether they were set by the sender):
@@ -119,9 +119,9 @@ Optional queue arguments as an array.
Relevant RabbitMQ doc guides:
-* [Optional queue arguments](https://www.rabbitmq.com/queues.md#optional-arguments)
+* [Optional queue arguments](https://www.rabbitmq.com/queues.html#optional-arguments)
-* [Policies](https://www.rabbitmq.com/parameters.md#policies)
+* [Policies](https://www.rabbitmq.com/parameters.html#policies)
-* [Quorum Queues](https://www.rabbitmq.com/quorum-queues.md)
+* [Quorum Queues](https://www.rabbitmq.com/quorum-queues.html)
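As a loose illustration of passing optional queue arguments to this input, the following sketch assumes a quorum queue; the host, queue name, and argument key follow the RabbitMQ guides linked above and are illustrative only.

```ruby
input {
  rabbitmq {
    host      => "rabbitmq.example.com"   # assumed broker host
    queue     => "logstash"               # assumed queue name
    durable   => true
    # Optional queue arguments, here declaring a quorum queue (illustrative):
    arguments => { "x-queue-type" => "quorum" }
  }
}
```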
### `auto_delete` [plugins-inputs-rabbitmq-auto_delete]
@@ -137,7 +137,7 @@ Should the queue be deleted on the broker when the last consumer disconnects? Se
* Value type is [boolean](/reference/configuration-file-structure.md#boolean)
* Default value is `true`
-Set this to [automatically recover](https://www.rabbitmq.com/connections.md#automatic-recovery) from a broken connection. You almost certainly don't want to override this!
+Set this to [automatically recover](https://www.rabbitmq.com/connections.html#automatic-recovery) from a broken connection. You almost certainly don't want to override this!
### `connect_retry_interval` [plugins-inputs-rabbitmq-connect_retry_interval]
@@ -193,7 +193,7 @@ Is the queue exclusive? Exclusive queues can only be used by the connection that
* Value type is [number](/reference/configuration-file-structure.md#number)
* There is no default value for this setting.
-[Heartbeat timeout](https://www.rabbitmq.com/heartbeats.md) in seconds. If unspecified then heartbeat timeout of 60 seconds will be used.
+[Heartbeat timeout](https://www.rabbitmq.com/heartbeats.html) in seconds. If unspecified then heartbeat timeout of 60 seconds will be used.
### `host` [plugins-inputs-rabbitmq-host]

View file

@@ -27,7 +27,7 @@ For questions about the plugin, open a topic in the [Discuss](http://discuss.ela
Read RELP events over a TCP socket.
-For more information about RELP, see [http://www.rsyslog.com/doc/imrelp.html](http://www.rsyslog.com/doc/imrelp.md)
+For more information about RELP, see [http://www.rsyslog.com/doc/imrelp.html](http://www.rsyslog.com/doc/imrelp.html)
This protocol implements application-level acknowledgements to help protect against message loss.

View file

@@ -100,7 +100,7 @@ This plugin uses the AWS SDK and supports several ways to get credentials, which
* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`
-Key-value pairs of settings and corresponding values used to parametrize the connection to s3. See full list in [the AWS SDK documentation](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.md). Example:
+Key-value pairs of settings and corresponding values used to parametrize the connection to s3. See full list in [the AWS SDK documentation](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html). Example:
```ruby
input {
@@ -262,7 +262,7 @@ The AWS Region
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
-The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.md) for more information.
+The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) for more information.
### `role_session_name` [plugins-inputs-s3-role_session_name]

View file

@@ -120,7 +120,7 @@ This plugin uses the AWS SDK and supports several ways to get credentials, which
* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`
-Key-value pairs of settings and corresponding values used to parametrize the connection to SQS. See full list in [the AWS SDK documentation](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/SQS/Client.md). Example:
+Key-value pairs of settings and corresponding values used to parametrize the connection to SQS. See full list in [the AWS SDK documentation](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/SQS/Client.html). Example:
```ruby
input {
@@ -204,7 +204,7 @@ Name of the SQS Queue name to pull messages from. Note that this is just the nam
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
-ID of the AWS account owning the queue if you want to use a [cross-account queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-examples-of-sqs-policies.md#grant-two-permissions-to-one-account) with embedded policy. Note that AWS SDK only support numerical account ID and not account aliases.
+ID of the AWS account owning the queue if you want to use a [cross-account queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-examples-of-sqs-policies.html#grant-two-permissions-to-one-account) with embedded policy. Note that AWS SDK only support numerical account ID and not account aliases.
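A hedged sketch of the cross-account option described here; the queue name, region, and account ID are placeholders.

```ruby
input {
  sqs {
    queue                      => "logstash-events"   # assumed queue name
    region                     => "us-east-1"         # assumed region
    queue_owner_aws_account_id => "123456789012"      # numeric account ID only, not an alias
  }
}
```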
### `region` [plugins-inputs-sqs-region]
@@ -220,7 +220,7 @@ The AWS Region
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
-The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.md) for more information.
+The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) for more information.
### `role_session_name` [plugins-inputs-sqs-role_session_name]

View file

@@ -212,7 +212,7 @@ Validate client certificate or certificate chain against these authorities. You
* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value includes *all* cipher suites enabled by the JDK and depends on JDK configuration
-Supported cipher suites vary depending on Java version used, and entries look like `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`. For more information, see Oracle's [JDK SunJSSE provider documentation](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.md#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2) and the table of supported [Java cipher suite names](https://docs.oracle.com/en/java/javase/11/docs/specs/security/standard-names.md#jsse-cipher-suite-names).
+Supported cipher suites vary depending on Java version used, and entries look like `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`. For more information, see Oracle's [JDK SunJSSE provider documentation](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2) and the table of supported [Java cipher suite names](https://docs.oracle.com/en/java/javase/11/docs/specs/security/standard-names.html#jsse-cipher-suite-names).
::::{note}
To check the supported cipher suites locally run the following script: `$LS_HOME/bin/ruby -e 'p javax.net.ssl.SSLServerSocketFactory.getDefault.getSupportedCipherSuites'`.

View file

@@ -135,7 +135,7 @@ The recommended ways to provide the additional path configuration are:
* an environment variable, or
* a config file to provide the additional path configuration.
-See the "MODULE LOCATIONS" section of the [smi_config documentation](https://www.ibr.cs.tu-bs.de/projects/libsmi/smi_config.md#MODULE%20LOCATIONS) for more information.
+See the "MODULE LOCATIONS" section of the [smi_config documentation](https://www.ibr.cs.tu-bs.de/projects/libsmi/smi_config.html#MODULE%20LOCATIONS) for more information.
### Option 1: Use an environment variable [plugins-integrations-snmp-env-var]

View file

@@ -47,7 +47,7 @@ At a minimum events must have a "metric name" to be sent to CloudWatch. This can
Other fields which can be added to events to modify the behavior of this plugin are, `CW_namespace`, `CW_unit`, `CW_value`, and `CW_dimensions`. All of these field names are configurable in this output. You can also set per-output defaults for any of them. See below for details.
-Read more about [AWS CloudWatch](http://aws.amazon.com/cloudwatch/), and the specific of API endpoint this output uses, [PutMetricData](http://docs.amazonwebservices.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.md)
+Read more about [AWS CloudWatch](http://aws.amazon.com/cloudwatch/), and the specific of API endpoint this output uses, [PutMetricData](http://docs.amazonwebservices.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.html)
## Cloudwatch Output Configuration Options [plugins-outputs-cloudwatch-options]

View file

@@ -59,7 +59,7 @@ If the configured file is deleted, but an event is handled by the plugin, the pl
* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`
-Options for CSV output. This is passed directly to the Ruby stdlib to_csv function. Full documentation is available on the [Ruby CSV documentation page](http://ruby-doc.org/stdlib-2.0.0/libdoc/csv/rdoc/index.md). A typical use case would be to use alternative column or row separators eg: `csv_options => {"col_sep" => "\t" "row_sep" => "\r\n"}` gives tab separated data with windows line endings
+Options for CSV output. This is passed directly to the Ruby stdlib to_csv function. Full documentation is available on the [Ruby CSV documentation page](http://ruby-doc.org/stdlib-2.0.0/libdoc/csv/rdoc/index.html). A typical use case would be to use alternative column or row separators eg: `csv_options => {"col_sep" => "\t" "row_sep" => "\r\n"}` gives tab separated data with windows line endings
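To make the `csv_options` example above concrete, a minimal csv output might be sketched as follows; the path and field list are assumptions.

```ruby
output {
  csv {
    path        => "/var/log/events.tsv"                     # assumed output path
    fields      => ["timestamp", "host", "message"]          # assumed event fields to write
    csv_options => { "col_sep" => "\t" "row_sep" => "\r\n" } # tab-separated, Windows line endings
  }
}
```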
### `dir_mode` [plugins-outputs-csv-dir_mode]

View file

@@ -725,7 +725,7 @@ Updating the rollover alias will require the index template to be rewritten.
* ECS Compatibility enabled: `"ecs-logstash-%{+yyyy.MM.dd}"`
-The indexing target to write events to. Can point to an [index](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-mgmt.html), [alias](docs-content://manage-data/data-store/aliases.md), or [data stream](docs-content://manage-data/data-store/data-streams.md). This can be dynamic using the `%{{foo}}` syntax. The default value will partition your indices by day so you can more easily delete old data or only search specific date ranges. Indexes may not contain uppercase characters. For weekly indexes ISO 8601 format is recommended, eg. logstash-%{+xxxx.ww}. Logstash uses [Joda formats](http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.md) and the `@timestamp` field of each event is being used as source for the date.
+The indexing target to write events to. Can point to an [index](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-mgmt.html), [alias](docs-content://manage-data/data-store/aliases.md), or [data stream](docs-content://manage-data/data-store/data-streams.md). This can be dynamic using the `%{{foo}}` syntax. The default value will partition your indices by day so you can more easily delete old data or only search specific date ranges. Indexes may not contain uppercase characters. For weekly indexes ISO 8601 format is recommended, eg. logstash-%{+xxxx.ww}. Logstash uses [Joda formats](http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html) and the `@timestamp` field of each event is being used as source for the date.
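As a hedged illustration of the weekly-index recommendation above, an elasticsearch output might be sketched like this; the host is an assumption.

```ruby
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]   # assumed cluster address
    # Weekly indices using the ISO 8601 week-based Joda pattern mentioned above:
    index => "logstash-%{+xxxx.ww}"
  }
}
```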
### `manage_template` [plugins-outputs-elasticsearch-manage_template]
@@ -1167,7 +1167,7 @@ Username to authenticate to a secure Elasticsearch cluster
How long to wait before checking for a stale connection to determine if a keepalive request is needed. Consider setting this value lower than the default, possibly to 0, if you get connection errors regularly.
-This client is based on Apache Commons. Here's how the [Apache Commons documentation](https://hc.apache.org/httpcomponents-client-4.5.x/current/httpclient/apidocs/org/apache/http/impl/conn/PoolingHttpClientConnectionManager.md#setValidateAfterInactivity(int)) describes this option: "Defines period of inactivity in milliseconds after which persistent connections must be re-validated prior to being leased to the consumer. Non-positive value passed to this method disables connection validation. This check helps detect connections that have become stale (half-closed) while kept inactive in the pool."
+This client is based on Apache Commons. Here's how the [Apache Commons documentation](https://hc.apache.org/httpcomponents-client-4.5.x/current/httpclient/apidocs/org/apache/http/impl/conn/PoolingHttpClientConnectionManager.html#setValidateAfterInactivity(int)) describes this option: "Defines period of inactivity in milliseconds after which persistent connections must be re-validated prior to being leased to the consumer. Non-positive value passed to this method disables connection validation. This check helps detect connections that have become stale (half-closed) while kept inactive in the pool."
### `version` [plugins-outputs-elasticsearch-version]

View file

@@ -49,9 +49,9 @@ If you want the full content of your events to be sent as json, you should set t
}
```
-For more information see [https://kafka.apache.org/38/documentation.html#theproducer](https://kafka.apache.org/38/documentation.md#theproducer)
+For more information see [https://kafka.apache.org/38/documentation.html#theproducer](https://kafka.apache.org/38/documentation.html#theproducer)
-Kafka producer configuration: [https://kafka.apache.org/38/documentation.html#producerconfigs](https://kafka.apache.org/38/documentation.md#producerconfigs)
+Kafka producer configuration: [https://kafka.apache.org/38/documentation.html#producerconfigs](https://kafka.apache.org/38/documentation.html#producerconfigs)
::::{note}
This plugin does not support using a proxy when communicating to the Kafka broker.
@@ -222,7 +222,7 @@ Please note that specifying `jaas_path` and `kerberos_config` in the config file
* Value type is [path](/reference/configuration-file-structure.md#path)
* There is no default value for this setting.
-Optional path to kerberos config file. This is krb5.conf style as detailed in [https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.md)
+Optional path to kerberos config file. This is krb5.conf style as detailed in [https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html)
### `key_serializer` [plugins-outputs-kafka-key_serializer]
@@ -339,7 +339,7 @@ If you choose to set `retries`, a value greater than zero will cause the client
A value less than zero is a configuration error.
-Starting with version 10.5.0, this plugin will only retry exceptions that are a subclass of [RetriableException](https://kafka.apache.org/38/javadoc/org/apache/kafka/common/errors/RetriableException.md) and [InterruptException](https://kafka.apache.org/38/javadoc/org/apache/kafka/common/errors/InterruptException.md). If producing a message throws any other exception, an error is logged and the message is dropped without retrying. This prevents the Logstash pipeline from hanging indefinitely.
+Starting with version 10.5.0, this plugin will only retry exceptions that are a subclass of [RetriableException](https://kafka.apache.org/38/javadoc/org/apache/kafka/common/errors/RetriableException.html) and [InterruptException](https://kafka.apache.org/38/javadoc/org/apache/kafka/common/errors/InterruptException.html). If producing a message throws any other exception, an error is logged and the message is dropped without retrying. This prevents the Logstash pipeline from hanging indefinitely.
In versions prior to 10.5.0, any exception is retried indefinitely unless the `retries` option is configured.
@@ -449,7 +449,7 @@ The Kerberos principal name that Kafka broker runs as. This can be defined eithe
* Value type is [string](/reference/configuration-file-structure.md#string)
* Default value is `"GSSAPI"`
-[SASL mechanism](http://kafka.apache.org/documentation.md#security_sasl) used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
+[SASL mechanism](http://kafka.apache.org/documentation.html#security_sasl) used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
### `security_protocol` [plugins-outputs-kafka-security_protocol]

View file

@@ -132,7 +132,7 @@ Logstash's default output behaviour is to never lose events As such, we use tc
* Value type is [hash](/reference/configuration-file-structure.md#hash)
* There is no default value for this setting.
-A Hash to set Riemann event fields ([http://riemann.io/concepts.html](http://riemann.io/concepts.md)).
+A Hash to set Riemann event fields ([http://riemann.io/concepts.html](http://riemann.io/concepts.html)).
The following event fields are supported: `description`, `state`, `metric`, `ttl`, `service`

View file

@@ -142,7 +142,7 @@ This plugin uses the AWS SDK and supports several ways to get credentials, which
* Value type is [hash](/reference/configuration-file-structure.md#hash)
* Default value is `{}`
-Key-value pairs of settings and corresponding values used to parametrize the connection to S3. See full list in [the AWS SDK documentation](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.md). Example:
+Key-value pairs of settings and corresponding values used to parametrize the connection to S3. See full list in [the AWS SDK documentation](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html). Example:
```ruby
output {
@@ -264,7 +264,7 @@ Delay (in seconds) to wait between consecutive retries on upload failures.
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
-The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.md) for more information.
+The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) for more information.
### `role_session_name` [plugins-outputs-s3-role_session_name]
@@ -340,7 +340,7 @@ Set the file size in bytes. When the number of bytes exceeds the `size_file` val
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
-The key to use when specified along with server_side_encryption ⇒ aws:kms. If server_side_encryption ⇒ aws:kms is set but this is not default KMS key is used. [http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.md)
+The key to use when specified along with server_side_encryption ⇒ aws:kms. If server_side_encryption ⇒ aws:kms is set but this is not default KMS key is used. [http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html)
### `storage_class` [plugins-outputs-s3-storage_class]
@@ -348,7 +348,7 @@ The key to use when specified along with server_side_encryption ⇒ aws:kms. If
* Value can be any of: `STANDARD`, `REDUCED_REDUNDANCY`, `STANDARD_IA`, `ONEZONE_IA`, `INTELLIGENT_TIERING`, `GLACIER`, `DEEP_ARCHIVE`, `OUTPOSTS`, `GLACIER_IR`, `SNOW`, `EXPRESS_ONEZONE`
* Default value is `"STANDARD"`
-Specifies what S3 storage class to use when uploading the file. More information about the different storage classes can be found: [http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html](http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.md) Defaults to STANDARD.
+Specifies what S3 storage class to use when uploading the file. More information about the different storage classes can be found: [http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html](http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) Defaults to STANDARD.
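Pulling the storage-class and SSE-KMS options from these hunks together, a hypothetical s3 output could look like the sketch below; the bucket, region, and key ARN are placeholders, and the option values should be checked against the plugin's reference for your version.

```ruby
output {
  s3 {
    bucket                           => "my-logstash-archive"   # assumed bucket name
    region                           => "us-east-1"             # assumed region
    storage_class                    => "STANDARD_IA"
    server_side_encryption           => true
    server_side_encryption_algorithm => "aws:kms"
    ssekms_key_id                    => "arn:aws:kms:us-east-1:123456789012:key/example"  # illustrative key ARN
  }
}
```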
### `temporary_directory` [plugins-outputs-s3-temporary_directory]

View file

@@ -139,7 +139,7 @@ The endpoint to connect to. By default it is constructed using the value of `reg
* Value type is [bytes](/reference/configuration-file-structure.md#bytes)
* Default value is `"256KiB"`
-The maximum number of bytes for any message sent to SQS. Messages exceeding this size will be dropped. See [http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/limits-messages.html](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/limits-messages.md).
+The maximum number of bytes for any message sent to SQS. Messages exceeding this size will be dropped. See [http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/limits-messages.html](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/limits-messages.html).
### `proxy_uri` [plugins-outputs-sqs-proxy_uri]
@@ -180,7 +180,7 @@ The AWS Region
* Value type is [string](/reference/configuration-file-structure.md#string)
* There is no default value for this setting.
-The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.md) for more information.
+The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) for more information.
### `role_session_name` [plugins-outputs-sqs-role_session_name]

View file

@@ -20,7 +20,7 @@ For questions about the plugin, open a topic in the [Discuss](http://discuss.ela
## Description [_description_119]
-This plugin sends Logstash events into files in HDFS via the [webhdfs](https://hadoop.apache.org/docs/r1.0.4/webhdfs.md) REST API.
+This plugin sends Logstash events into files in HDFS via the [webhdfs](https://hadoop.apache.org/docs/r1.0.4/webhdfs.html) REST API.
## Dependencies [_dependencies]

View file

@@ -41,8 +41,8 @@ Some fields in {{ls}} events are reserved, or are required to adhere to a certai
| | |
| --- | --- |
-| [`@metadata`](/reference/event-dependent-configuration.md#metadata) | A key/value map.<br>Ruby-based Plugin API: value is an [org.jruby.RubyHash](https://javadoc.io/static/org.jruby/jruby-core/9.2.5.0/org/jruby/RubyHash.md).<br>Java-based Plugin API: value is an [org.logstash.ConvertedMap](https://github.com/elastic/logstash/blob/main/logstash-core/src/main/java/org/logstash/ConvertedMap.java).<br>In serialized form (such as JSON): a key/value map where the keys must be strings and the values are not constrained to a particular type. |
+| [`@metadata`](/reference/event-dependent-configuration.md#metadata) | A key/value map.<br>Ruby-based Plugin API: value is an [org.jruby.RubyHash](https://javadoc.io/static/org.jruby/jruby-core/9.2.5.0/org/jruby/RubyHash.html).<br>Java-based Plugin API: value is an [org.logstash.ConvertedMap](https://github.com/elastic/logstash/blob/main/logstash-core/src/main/java/org/logstash/ConvertedMap.java).<br>In serialized form (such as JSON): a key/value map where the keys must be strings and the values are not constrained to a particular type. |
-| `@timestamp` | An object holding representation of a specific moment in time.<br>Ruby-based Plugin API: value is an [org.jruby.RubyTime](https://javadoc.io/static/org.jruby/jruby-core/9.2.5.0/org/jruby/RubyTime.md).<br>Java-based Plugin API: value is a [java.time.Instant](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/Instant.md).<br>In serialized form (such as JSON) or when setting with Event#set: an ISO8601-compliant String value is acceptable. |
+| `@timestamp` | An object holding representation of a specific moment in time.<br>Ruby-based Plugin API: value is an [org.jruby.RubyTime](https://javadoc.io/static/org.jruby/jruby-core/9.2.5.0/org/jruby/RubyTime.html).<br>Java-based Plugin API: value is a [java.time.Instant](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/Instant.html).<br>In serialized form (such as JSON) or when setting with Event#set: an ISO8601-compliant String value is acceptable. |
| `@version` | A string, holding an integer value. |
| `tags` | An array of distinct strings |
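As a small, hedged illustration of how `@metadata` is typically populated in a pipeline (the field name and value are assumptions):

```ruby
filter {
  mutate {
    # @metadata is a key/value map that is not included in serialized output,
    # while @timestamp must remain an ISO8601-compatible moment in time.
    add_field => { "[@metadata][source]" => "example" }
  }
}
```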

View file

@@ -71,10 +71,10 @@ When tuning Logstash you may have to adjust the heap size. You can use the [Visu
In the first example we see that the CPU isn't being used very efficiently. In fact, the JVM is often times having to stop the VM for “full GCs”. Full garbage collections are a common symptom of excessive memory pressure. This is visible in the spiky pattern on the CPU chart. In the more efficiently configured example, the GC graph pattern is more smooth, and the CPU is used in a more uniform manner. You can also see that there is ample headroom between the allocated heap size, and the maximum allowed, giving the JVM GC a lot of room to work with.
-Examining the in-depth GC statistics with a tool similar to the excellent [VisualGC](https://visualvm.github.io/plugins.md) plugin shows that the over-allocated VM spends very little time in the efficient Eden GC, compared to the time spent in the more resource-intensive Old Gen “Full” GCs.
+Examining the in-depth GC statistics with a tool similar to the excellent [VisualGC](https://visualvm.github.io/plugins.html) plugin shows that the over-allocated VM spends very little time in the efficient Eden GC, compared to the time spent in the more resource-intensive Old Gen “Full” GCs.
::::{note}
-As long as the GC pattern is acceptable, heap sizes that occasionally increase to the maximum are acceptable. Such heap size spikes happen in response to a burst of large events passing through the pipeline. In general practice, maintain a gap between the used amount of heap memory and the maximum. This document is not a comprehensive guide to JVM GC tuning. Read the official [Oracle guide](http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.md) for more information on the topic. We also recommend reading [Debugging Java Performance](https://www.semicomplete.com/blog/geekery/debugging-java-performance/).
+As long as the GC pattern is acceptable, heap sizes that occasionally increase to the maximum are acceptable. Such heap size spikes happen in response to a burst of large events passing through the pipeline. In general practice, maintain a gap between the used amount of heap memory and the maximum. This document is not a comprehensive guide to JVM GC tuning. Read the official [Oracle guide](http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.html) for more information on the topic. We also recommend reading [Debugging Java Performance](https://www.semicomplete.com/blog/geekery/debugging-java-performance/).
::::