[[logging]]
=== Logging

Logstash emits internal logs during its operation, which are placed in `LS_HOME/logs` (or `/var/log/logstash` for DEB/RPM). The default logging level is `INFO`. Logstash's logging framework is based on the http://logging.apache.org/log4j/2.x/[Log4j 2 framework], and much of its functionality is exposed directly to users.

You can configure logging for a particular subsystem, module, or plugin. When you need to debug problems, particularly problems with plugins, consider increasing the logging level to `DEBUG` to get more verbose messages. For example, if you are debugging issues with the Elasticsearch output, you can increase log levels just for that component. This approach reduces noise from excessive logging and helps you focus on the problem area.

You can configure logging using the `log4j2.properties` file or the Logstash API:

* *`log4j2.properties` file.* Changes made through the `log4j2.properties` file require you to restart Logstash for the changes to take effect. Changes *persist* through subsequent restarts.
* *Logging API.* Changes made through the Logging API are effective immediately without a restart. The changes *do not persist* after Logstash is restarted.

[[log4j2]]
==== Log4j2 configuration

Logstash ships with a `log4j2.properties` file with out-of-the-box settings. You can modify this file to change the rotation policy, type, and other https://logging.apache.org/log4j/2.x/manual/configuration.html#Loggers[log4j2 configuration]. You must restart Logstash to apply any changes that you make to this file. Changes to `log4j2.properties` persist after Logstash is restarted.

Here's an example using `outputs.elasticsearch`:

[source,properties]
--------------------------------------------------
logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
logger.elasticsearchoutput.level = debug
--------------------------------------------------

===== Retrieve list of logging configurations

To retrieve a list of logging subsystems available at runtime, send a `GET` request to `_node/logging`:

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9600/_node/logging?pretty'
--------------------------------------------------

Example response:

["source","js"]
--------------------------------------------------
{
...
  "loggers" : {
    "logstash.agent" : "INFO",
    "logstash.api.service" : "INFO",
    "logstash.basepipeline" : "INFO",
    "logstash.codecs.plain" : "INFO",
    "logstash.codecs.rubydebug" : "INFO",
    "logstash.filters.grok" : "INFO",
    "logstash.inputs.beats" : "INFO",
    "logstash.instrument.periodicpoller.jvm" : "INFO",
    "logstash.instrument.periodicpoller.os" : "INFO",
    "logstash.instrument.periodicpoller.persistentqueue" : "INFO",
    "logstash.outputs.stdout" : "INFO",
    "logstash.pipeline" : "INFO",
    "logstash.plugins.registry" : "INFO",
    "logstash.runner" : "INFO",
    "logstash.shutdownwatcher" : "INFO",
    "org.logstash.Event" : "INFO",
    "slowlog.logstash.codecs.plain" : "TRACE",
    "slowlog.logstash.codecs.rubydebug" : "TRACE",
    "slowlog.logstash.filters.grok" : "TRACE",
    "slowlog.logstash.inputs.beats" : "TRACE",
    "slowlog.logstash.outputs.stdout" : "TRACE"
  }
}
--------------------------------------------------

==== Logging APIs

For temporary logging changes, modifying the `log4j2.properties` file and restarting Logstash leads to unnecessary downtime. Instead, you can dynamically update logging levels through the logging API. These settings take effect immediately and do not require a restart.

NOTE: By default, the logging API attempts to bind to `tcp:9600`. If this port is already in use by another Logstash instance, launch Logstash with the `--http.port` flag to bind to a different port. See <> for more information.
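For example, a second instance might be started with its API bound to an alternate port. This is a minimal sketch; the port number and pipeline path are illustrative placeholders:

[source,shell]
--------------------------------------------------
# Bind this instance's HTTP API (including the logging API) to port 9601
# instead of the default 9600. The pipeline path is a placeholder.
bin/logstash -f /path/to/pipeline.conf --http.port 9601
--------------------------------------------------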
===== Update logging levels

Prepend the name of the subsystem, module, or plugin with `logger.`. Here is an example using `outputs.elasticsearch`:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
{
    "logger.logstash.outputs.elasticsearch" : "DEBUG"
}
'
--------------------------------------------------

While this setting is in effect, Logstash emits DEBUG-level logs for __all__ the Elasticsearch outputs specified in your configuration. Note that this setting is transient and will not survive a restart.

NOTE: If you want logging changes to persist after a restart, add them to `log4j2.properties` instead.

===== Reset dynamic logging levels

To reset any logging levels that were changed dynamically through the logging API, send a `PUT` request to `_node/logging/reset`. All logging levels revert to the values specified in the `log4j2.properties` file.

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9600/_node/logging/reset?pretty'
--------------------------------------------------

==== Log file location

You can specify the log file location using the `--path.logs` setting.

==== Slowlog

Slow-log for Logstash adds the ability to log when a specific event takes an abnormal amount of time to make its way through the pipeline. Just like the normal application log, you can find slow-logs in your `--path.logs` directory.

Slowlog is configured in the `logstash.yml` settings file with the following options:

[source,yaml]
------------------------------
slowlog.threshold.warn (default: -1)
slowlog.threshold.info (default: -1)
slowlog.threshold.debug (default: -1)
slowlog.threshold.trace (default: -1)
------------------------------

By default, these values are set to `-1nanos` to represent an infinite threshold, meaning no slowlog is generated. The `slowlog.threshold` fields are configured using a time-value format, which enables a wide range of trigger intervals. Positive values can be specified using the following time units: `nanos` (nanoseconds), `micros` (microseconds), `ms` (milliseconds), `s` (second), `m` (minute), `h` (hour), and `d` (day).

Here is an example:

[source,yaml]
------------------------------
slowlog.threshold.warn: 2s
slowlog.threshold.info: 1s
slowlog.threshold.debug: 500ms
slowlog.threshold.trace: 100ms
------------------------------

With the above configuration, events that take longer than two seconds to be processed within a filter are logged. The logs include the full event and the filter configuration that are responsible for the slowness.