diff --git a/docs/static/include/pluginbody.asciidoc b/docs/static/include/pluginbody.asciidoc
index a51c4f74a..bb44db58c 100644
--- a/docs/static/include/pluginbody.asciidoc
+++ b/docs/static/include/pluginbody.asciidoc
@@ -883,7 +883,7 @@ time.
 
 **Version messaging from Logstash**
 
-If you start Logstash with the `--verbose` flag, you will see messages like
+If you start Logstash with the `--log.level verbose` flag, you will see messages like
 these to indicate the relative maturity indicated by the plugin version number:
 
 ** **0.1.x**
diff --git a/docs/static/life-of-an-event.asciidoc b/docs/static/life-of-an-event.asciidoc
index 1a0d36438..a4383d955 100644
--- a/docs/static/life-of-an-event.asciidoc
+++ b/docs/static/life-of-an-event.asciidoc
@@ -83,7 +83,7 @@ For more information about the available codecs, see
 Logstash keeps all events in main memory during processing. Logstash responds to a SIGTERM by attempting to halt inputs and waiting for pending events to
 finish processing before shutting down. When the pipeline cannot be flushed due to a stuck output or filter, Logstash waits indefinitely. For example, when a
 pipeline sends output to a database that is unreachable by the Logstash instance, the instance waits indefinitely after receiving a SIGTERM.
-To enable Logstash to detect these situations and terminate with a stalled pipeline, use the `--allow-unsafe-shutdown` flag.
+To enable Logstash to detect these situations and terminate with a stalled pipeline, use the `--pipeline.unsafe_shutdown` flag.
 
 WARNING: Unsafe shutdowns, force-kills of the Logstash process, or crashes of the Logstash process for any other reason result in data loss. Shut down
 Logstash safely whenever possible.
diff --git a/docs/static/plugin-manager.asciidoc b/docs/static/plugin-manager.asciidoc
index 33aee5089..cd49145cd 100644
--- a/docs/static/plugin-manager.asciidoc
+++ b/docs/static/plugin-manager.asciidoc
@@ -57,14 +57,14 @@ bin/logstash-plugin install /path/to/logstash-output-kafka-1.0.0.gem
 
 [[installing-local-plugins-path]]
 [float]
-==== Advanced: Using `--pluginpath`
+==== Advanced: Using `--path.plugins`
 
-Using the `--pluginpath` flag, you can load a plugin source code located on your file system. Typically this is used by
+Using the Logstash `--path.plugins` flag, you can load plugin source code located on your file system. Typically this is used by
 developers who are iterating on a custom plugin and want to test it before creating a ruby gem.
 
 [source,shell]
 ----------------------------------
-bin/logstash --pluginpath /opt/shared/lib/logstash/input/my-custom-plugin-code.rb
+bin/logstash --path.plugins /opt/shared/lib/logstash/input/my-custom-plugin-code.rb
 ----------------------------------
 
 [[updating-plugins]]
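As context for the `--path.plugins` change above, here is a minimal sketch of how a developer might iterate on local plugin source. The directory, plugin name, and pipeline definition are illustrative, and the layout assumes the conventional `logstash/<plugin-type>/` source tree rather than a single file path.

[source,shell]
----------------------------------
# Illustrative local source tree (names are hypothetical):
#   /opt/shared/lib/logstash/inputs/my_custom_input.rb
#
# Point --path.plugins at the root of that tree and exercise the plugin
# with a throwaway pipeline that prints events to stdout:
bin/logstash --path.plugins /opt/shared/lib \
  -e 'input { my_custom_input { } } output { stdout { codec => rubydebug } }'
----------------------------------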
diff --git a/docs/static/reloading-config.asciidoc b/docs/static/reloading-config.asciidoc
index 22f3118aa..22cc383d7 100644
--- a/docs/static/reloading-config.asciidoc
+++ b/docs/static/reloading-config.asciidoc
@@ -4,19 +4,19 @@
 Starting with Logstash 2.3, you can set Logstash to detect and reload configuration
 changes automatically.
 
-To enable automatic config reloading, start Logstash with the `--auto-reload` (or `-r`)
+To enable automatic config reloading, start Logstash with the `--config.reload.automatic` (or `-r`)
 command-line option specified. For example:
 
 [source,shell]
 ----------------------------------
-bin/logstash –f apache.config --auto-reload
+bin/logstash -f apache.config --config.reload.automatic
 ----------------------------------
 
-NOTE: The `--auto-reload` option is not available when you specify the `-e` flag to pass
+NOTE: The `--config.reload.automatic` option is not available when you specify the `-e` flag to pass
 in configuration settings from the command-line.
 
 By default, Logstash checks for configuration changes every 3 seconds. To change this interval,
-use the `--reload-interval <seconds>` option, where `seconds` specifies how often Logstash
+use the `--config.reload.interval <seconds>` option, where `seconds` specifies how often Logstash
 checks the config files for changes.
 
 If Logstash is already running without auto-reload enabled, you can force Logstash to
diff --git a/docs/static/stalled-shutdown.asciidoc b/docs/static/stalled-shutdown.asciidoc
index e1b75ebb7..14fde1ee3 100644
--- a/docs/static/stalled-shutdown.asciidoc
+++ b/docs/static/stalled-shutdown.asciidoc
@@ -20,19 +20,19 @@ Logstash has a stall detection mechanism that analyzes the behavior of the pipel
 This mechanism produces periodic information about the count of inflight events in internal queues and a list of busy
 worker threads.
 
-To enable Logstash to forcibly terminate in the case of a stalled shutdown, use the `--allow-unsafe-shutdown` flag when
+To enable Logstash to forcibly terminate in the case of a stalled shutdown, use the `--pipeline.unsafe_shutdown` flag when
 you start Logstash.
 
 [[shutdown-stall-example]]
 ==== Stall Detection Example
 
 In this example, slow filter execution prevents the pipeline from clean shutdown. By starting Logstash with the
-`--allow-unsafe-shutdown` flag, quitting with *Ctrl+C* results in an eventual shutdown that loses 20 events.
+`--pipeline.unsafe_shutdown` flag, quitting with *Ctrl+C* results in an eventual shutdown that loses 20 events.
 
 ========
 [source,shell]
 % bin/logstash -e 'input { generator { } } filter { ruby { code => "sleep 10000" } } \
-   output { stdout { codec => dots } }' -w 1 --allow-unsafe-shutdown
+   output { stdout { codec => dots } }' -w 1 --pipeline.unsafe_shutdown
 Default settings used: Filter workers: 1
 Logstash startup completed
 ^CSIGINT received. Shutting down the pipeline. {:level=>:warn}
@@ -60,4 +60,4 @@ The shutdown process appears to be stalled due to busy or blocked plugins. Check
 Forcefully quitting logstash.. {:level=>:fatal}
 ========
 
-When `--allow-unsafe-shutdown` isn't enabled, Logstash continues to run and produce these reports periodically.
\ No newline at end of file
+When `--pipeline.unsafe_shutdown` isn't enabled, Logstash continues to run and produce these reports periodically.
\ No newline at end of file
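To connect the shutdown-related settings above, the sketch below shows one way to start Logstash so a stalled pipeline can still be terminated, then request a shutdown from another shell. The config path is illustrative, and the `pgrep` pattern is only an assumption about how the process was started.

[source,shell]
----------------------------------
# Allow a forced exit if the pipeline stalls during shutdown; as the WARNING
# above notes, any in-flight events may be lost when this happens.
bin/logstash -f /etc/logstash/conf.d/pipeline.conf --pipeline.unsafe_shutdown

# From another terminal, ask Logstash to shut down. With the flag set,
# Logstash reports the stall and eventually terminates even if an output
# or filter is stuck; without it, it keeps running and reporting.
kill -TERM $(pgrep -f 'logstash.*pipeline.conf')
----------------------------------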
diff --git a/docs/static/upgrading.asciidoc b/docs/static/upgrading.asciidoc
index d30e2797c..f466673f3 100644
--- a/docs/static/upgrading.asciidoc
+++ b/docs/static/upgrading.asciidoc
@@ -17,7 +17,7 @@ This procedure uses <> to upgrade Logstas
 2. Using the directions in the _Package Repositories_ section, update your repository links to point to the 2.0
 repositories instead of the previous version.
 3. Run the `apt-get upgrade logstash` or `yum update logstash` command as appropriate for your operating system.
-4. Test your configuration file with the `logstash --configtest -f ` command. Configuration options for
+4. Test your configuration file with the `logstash --config.test_and_exit -f ` command. Configuration options for
 some Logstash plugins have changed in the 2.0 release.
 5. Restart your Logstash pipeline after updating your configuration file.
 
@@ -28,7 +28,7 @@ This procedure downloads the relevant Logstash binaries directly from Elastic.
 1. Shut down your Logstash pipeline, including any inputs that send events to Logstash.
 2. Download the https://www.elastic.co/downloads/logstash[Logstash installation file] that matches your host environment.
 3. Unpack the installation file into your Logstash directory.
-4. Test your configuration file with the `logstash --configtest -f ` command. Configuration options for
+4. Test your configuration file with the `logstash --config.test_and_exit -f ` command. Configuration options for
 some Logstash plugins have changed in the 2.0 release.
 5. Restart your Logstash pipeline after updating your configuration file.
 
@@ -91,8 +91,8 @@ The default batch size of the pipeline is 125 events per worker. This will by de
 used for the elasticsearch output. The Elasticsearch output's `flush_size` now acts only as a maximum bulk size (still
 defaulting to 500). For example, if your pipeline batch size is 3000 events, Elasticsearch Output will send 500 events
 at a time, in 6 separate bulk requests. In other words, for Elasticsearch output,
-bulk request size is chunked based on `flush_size` and `--pipeline-batch-size`. If `flush_size` is set greater
-than `--pipeline-batch-size`, it is ignored and `--pipeline-batch-size` will be used.
+bulk request size is chunked based on `flush_size` and `--pipeline.batch.size`. If `flush_size` is set greater
+than `--pipeline.batch.size`, it is ignored and `--pipeline.batch.size` will be used.
 
 The default number of output workers in Logstash 2.2 is now equal to the number of pipeline workers (`-w`) unless
 overridden in the Logstash config file. This can be problematic for some users as the
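As a worked example of the renamed flags in this last file, the sketch below first validates a configuration and then starts Logstash with an explicit batch size; the file path and worker count are illustrative.

[source,shell]
----------------------------------
# Check the configuration and exit without starting the pipeline:
bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/apache.conf

# Start with 4 workers and a 3000-event batch size. With the default
# flush_size of 500, the elasticsearch output splits each 3000-event
# batch into 6 bulk requests of 500 events.
bin/logstash -f /etc/logstash/conf.d/apache.conf -w 4 --pipeline.batch.size 3000
----------------------------------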