[[logstash-modules]]
== Working with Logstash Modules

Logstash modules provide a quick, end-to-end solution for ingesting data and
visualizing it with purpose-built dashboards.

These modules are available:

* <>
* <>
* <>
* <>

Each module comes pre-packaged with Logstash configurations, Kibana dashboards,
and other meta files that make it easier for you to set up the Elastic Stack
for specific use cases or data sources.

You can think of modules as providing three essential functions that make it
easier for you to get started. When you run a module, it will:

. Create the Elasticsearch index.
. Set up the Kibana dashboards, including the index pattern, searches, and
visualizations required to visualize your data in Kibana.
. Run the Logstash pipeline with the configurations required to read and parse
the data.

image::static/images/logstash-module-overview.png[Logstash modules overview]

[float]
[[running-logstash-modules]]
=== Running modules

To run a module and set up dashboards, you specify the following options:

[source,shell]
----
bin/logstash --modules MODULE_NAME --setup [-M "CONFIG_SETTING=VALUE"]
----

//TODO: For 6.0, show how to run multiple modules

Where:

* `--modules` runs the Logstash module specified by `MODULE_NAME`.

* `-M "CONFIG_SETTING=VALUE"` is optional and overrides the specified
configuration setting. You can specify multiple overrides. Each override must
start with `-M`. See <> for more info.

* `--setup` creates an index pattern in Elasticsearch and imports Kibana
dashboards and visualizations. Running `--setup` is a one-time setup step. Omit
this option for subsequent runs of the module to avoid overwriting existing
Kibana dashboards.

For example, the following command runs the Netflow module with the default
settings, and sets up the netflow index pattern and dashboards:

[source,shell]
----
bin/logstash --modules netflow --setup
----

The following command runs the Netflow module and overrides the Elasticsearch
`host` setting.
Here it's assumed that you've already run the setup step.

[source,shell]
----
bin/logstash --modules netflow -M "netflow.var.elasticsearch.host=es.mycloud.com"
----

[float]
[[configuring-logstash-modules]]
=== Configuring modules

To configure a module, you can either <> in the `logstash.yml` <>, or use
command-line overrides to <>.

[float]
[[setting-logstash-module-config]]
==== Specify module settings in `logstash.yml`

To specify module settings in the `logstash.yml` <> file, you add a module
definition to the modules array. Each module definition begins with a dash (-)
and is followed by `name: module_name`, then a series of name/value pairs that
specify module settings. For example:

[source,shell]
----
modules:
- name: netflow
  var.elasticsearch.hosts: "es.mycloud.com"
  var.elasticsearch.username: "foo"
  var.elasticsearch.password: "password"
  var.kibana.host: "kb.mycloud.com"
  var.kibana.username: "foo"
  var.kibana.password: "password"
  var.input.tcp.port: 5606
----

For a list of available module settings, see the documentation for the module.

[float]
[[overriding-logstash-module-settings]]
==== Specify module settings at the command line

You can override module settings by specifying one or more configuration
overrides when you start Logstash. To specify an override, you use the `-M`
command line option:

[source,shell]
----
-M MODULE_NAME.var.PLUGINTYPE1.PLUGINNAME1.KEY1=VALUE
----

Notice that the fully-qualified setting name includes the module name.

You can specify multiple overrides. Each override must start with `-M`.

The following command runs the Netflow module and overrides both the
Elasticsearch `host` setting and the `udp.port` setting:

[source,shell]
----
bin/logstash --modules netflow -M "netflow.var.input.udp.port=3555" -M "netflow.var.elasticsearch.hosts=my-es-cloud"
----

Any settings defined in the command line are ephemeral and will not persist
across subsequent runs of Logstash.
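The `-M` overrides map one-to-one onto entries in the `logstash.yml` modules
array. As a sketch, the two overrides from the Netflow example above would look
like this if persisted in `logstash.yml` (hostnames reused from that example):

[source,yaml]
----
modules:
- name: netflow
  var.input.udp.port: 3555
  var.elasticsearch.hosts: "my-es-cloud"
----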
If you want to persist a configuration, you need to set it in the
`logstash.yml` <> file.

Settings that you specify at the command line are merged with any settings
specified in the `logstash.yml` file. If an option is set in both places, the
value specified at the command line takes precedence.

[[connecting-to-cloud]]
=== Using Elastic Cloud

Logstash comes with two settings that simplify using modules with
https://cloud.elastic.co/[Elastic Cloud]. The Elasticsearch and Kibana
hostnames in Elastic Cloud may be hard to set in the Logstash config or on the
command line, so a Cloud ID can be used instead.

NOTE: Cloud ID applies only when a Logstash module is enabled; otherwise,
specifying Cloud ID has no effect. Cloud ID applies to data that gets sent via
the module, to runtime metrics sent via {xpack} monitoring, and to the endpoint
used by {xpack} central management features of Logstash, unless explicit
overrides to {xpack} settings are specified in `logstash.yml`.

==== Cloud ID

The Cloud ID, which can be found in the Elastic Cloud web console, is used by
Logstash to build the Elasticsearch and Kibana hosts settings. It is a base64
encoded text value of about 120 characters made up of upper and lower case
letters and numbers. If you have several Cloud IDs, you can add a label, which
is ignored internally, to help you tell them apart. To add a label you should
prefix your Cloud ID with a label and a `:` separator in this format "