Doc: Remove local k8s files (#17547)

This commit is contained in:
Karen Metts 2025-04-17 19:20:41 -04:00 committed by GitHub
parent f91f5a692d
commit b519cf4213
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
23 changed files with 0 additions and 2076 deletions

View file

@ -1,9 +0,0 @@
[[ls-k8s-administering]]
== Administering {ls} and Kubernetes
++++
<titleabbrev>Administering</titleabbrev>
++++
WARNING: This documentation is still in development and may be changed or removed in a future release.
These pages describe the steps to take after you've gotten your system <<ls-k8s-setting-up,up and running>>. These include routine tasks to manage and maintain your {ls} and Kubernetes resources, as well as recommended "hardening" steps, such as setting up security and external health monitoring, that prepare your environment for production.

View file

@ -1,6 +0,0 @@
[[ls-k8s-logging]]
=== {ls} logging
WARNING: This documentation is still in development and may be changed or removed in a future release.
Logging...

View file

@ -1,19 +0,0 @@
[[ls-k8s-monitor-elastic-cloud]]
==== Ship metrics to Elastic Cloud
TIP: Be sure that you have the Elastic CustomResourceDefinitions (CRDs) installed so that you can follow the example. Check out <<qs-set-up>> for set up info.
You can configure {metricbeat} to send monitoring data to a hosted {ess} on https://cloud.elastic.co/[Elastic Cloud]. To send to Elastic Cloud, remove the `elasticsearchRef` from the `spec` and set the `cloud.id` and `cloud.auth` for your https://cloud.elastic.co/[Elastic Cloud] monitoring cluster in the `spec.config` section of the {metricbeat} configuration.
[source,yaml]
--
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat
spec:
  config:
    cloud.id: CLOUD_ID
    cloud.auth: CLOUD_AUTH
...
--

View file

@ -1,21 +0,0 @@
[[ls-k8s-monitor-external]]
==== Ship metrics to external {es} cluster
TIP: Be sure that you have the Elastic CustomResourceDefinitions (CRDs) installed so that you can follow the example. Check out <<qs-set-up>> for set up info.
Metrics can be sent to an {es} cluster that is not managed by ECK. To configure {metricbeat}, remove the `elasticsearchRef` from the specification and include an output configuration in the `spec.config`.
[source,yaml]
--
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat
spec:
  config:
    output.elasticsearch:
      hosts: ["https://es:9200"]
      username: "elastic"
      password: "changeme"
...
--

View file

@ -1,126 +0,0 @@
[[ls-k8s-stack-monitoring]]
=== Stack monitoring
WARNING: This documentation is still in development and may be changed or removed in a future release.
An important step to making your environment production ready is to configure stack monitoring. Monitoring metrics can be sent to an external resource, such as {ess} or {eck}, so that in the event that any components of your environment become unresponsive, your monitoring data is available.
To enable {logstash-ref}/monitoring-with-metricbeat.html[Stack monitoring] for {ls}, you need {metricbeat} to collect {ls} metrics, {es} to store the metrics and {kib} to view the result.
[[monitor-with-ECK]]
==== Stack monitoring with Elastic Cloud on {k8s} (ECK)
TIP: Be sure that you have ECK installed so that you can follow the example. Check out <<qs-set-up>> for set up info.
For these examples, we will be modifying the Beats stack monitoring link:https://github.com/elastic/cloud-on-k8s/blob/main/config/recipes/beats/stack_monitoring.yaml[recipe] from the ECK examples.
This example deploys a production {es} cluster, a monitoring {es} cluster, {filebeat}, {metricbeat}, a production {kib}, and a monitoring {kib}. It monitors {es} and {kib} and sends the metrics to the monitoring cluster.
We use {metricbeat-ref}/configuration-autodiscover.html[autodiscover] to configure monitoring for multiple {ls} instances.
* <<ls-k8s-monitor-config-metricbeat>>
* <<ls-k8s-monitor-config-ls>>
* <<ls-k8s-monitor-kibana>>
[float]
[[ls-k8s-monitor-config-metricbeat]]
===== Configure Metricbeat
To monitor {ls}, add the `Logstash` module to the recipe.
[source,yaml]
--
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat
spec:
  type: metricbeat
  version: 8.4.2
  elasticsearchRef:
    name: elasticsearch-monitoring <1>
  config:
    metricbeat:
      autodiscover:
        providers:
          - type: kubernetes
            scope: cluster
            hints.enabled: true
            templates:
              - condition:
                  contains:
                    kubernetes.labels.app: ls <2>
                config:
                  - module: logstash <3>
                    metricsets:
                      - node
                      - node_stats
                    period: 10s
                    hosts: "http://${data.host}:9600"
                    xpack.enabled: true
...
--
<1> {metricbeat} sends metrics to the `elasticsearch-monitoring` cluster.
<2> {metricbeat} scans for pods with the label `app: ls` to collect {ls} metrics.
<3> The {metricbeat} logstash module calls the metrics endpoint of each {ls} instance on port `9600` every `10` seconds.
[float]
[[ls-k8s-monitor-config-ls]]
===== Configure {ls}
Add label `app: ls` to `Deployment` for autodiscover.
[source,yaml]
--
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  labels:
    app: ls
...
--
After you have configured {metricbeat} and {ls}, the configurations are ready to deploy. Go to <<ls-k8s-monitor-kibana>> for info on how to confirm that everything is working.
[float]
[[kibana-metrics]]
====== Show {kib} metrics in the same {es} cluster (optional)
By default, {ls} metrics are shown under a standalone cluster. To associate the data with your production {es} and {kib} cluster, set `monitoring.cluster_uuid` in `logstash.yml` to the `cluster_uuid` of the production {es} cluster.
[source,yaml]
--
apiVersion: v1
data:
  logstash.yml: |
    api.http.host: "0.0.0.0"
    monitoring.cluster_uuid: PRODUCTION_ES_CLUSTER_UUID
kind: ConfigMap
metadata:
  name: logstash-config
--
To get the `cluster_uuid`, go to the {kib} *Stack Monitoring* page. The URL in the browser shows the uuid in the form `cluster_uuid:PRODUCTION_ES_CLUSTER_UUID`.
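Alternatively, you can look up the value from the command line by querying the root endpoint of the production {es} cluster, which returns its `cluster_uuid`. This is a minimal sketch that assumes the quick start service name `demo-es-http` and the `elastic` user password in `$ES_PW`:

[source,sh]
--
kubectl port-forward service/demo-es-http 9200 &
curl -sk -u "elastic:$ES_PW" "https://localhost:9200/" | grep cluster_uuid
--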
[float]
[[ls-k8s-monitor-kibana]]
===== View monitoring data in {kib}
When everything is set, the {kib} *Stack Monitoring* page shows the {ls} data.
To access {kib} at `https://localhost:5601`, set up port forwarding:
[source,sh]
--
kubectl port-forward service/kibana-monitoring-kb-http 5601
--
Get the login password:
[source,sh]
--
kubectl get secret elasticsearch-monitoring-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
--
image::./images/sm-kibana.png[Stack Monitoring screenshot]

View file

@ -1,6 +0,0 @@
[[ls-k8s-upgrade]]
=== Upgrade {ls}
WARNING: This documentation is still in development and may be changed or removed in a future release.
We have a number of recommendations about how to upgrade {ls} in a production environment, so as to minimize and mitigate the impact of any potential downtime.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 43 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 276 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 541 KiB

View file

@ -1,70 +0,0 @@
[[logstash-and-kubernetes]]
= Logstash and Kubernetes Reference
include::{docs-root}/shared/versions/stack/{source_branch}.asciidoc[]
include::{docs-root}/shared/attributes.asciidoc[]
[[introduction]]
== Introduction
WARNING: This documentation is still in development and may be changed or removed in a future release.
This guide helps you to run and work with {ls} in a Kubernetes environment.
Are you trying out {ls} for the first time? We recommend beginning with our guide {logstash-ref}/getting-started-with-logstash.html[Getting Started with Logstash].
If you're already familiar with Logstash, then it's time to try it out in Kubernetes. The <<ls-k8s-quick-start,Getting started with Logstash and Kubernetes>> demo guides you through the steps of configuring Logstash inside a running Kubernetes cluster.
// Logstash and Kubernetes Quick start
include::quick-start/ls-k8s-quick-start.asciidoc[]
// List of sample configuration files and what they're used for
include::quick-start/sample-configuration-files.asciidoc[]
// Logstash and Kubernetes Quick start
include::quick-start/ls-k8s-configuration-files.asciidoc[]
// Setting up
include::setting-up/ls-k8s-setting-up.asciidoc[]
// Persistent storage requirements
include::setting-up/ls-k8s-persistent-storage.asciidoc[]
// Designing your installation based on plugin usage
include::setting-up/ls-k8s-design-for-plugins.asciidoc[]
// Sizing Logstash instances
include::setting-up/ls-k8s-sizing.asciidoc[]
// Secure your environment
include::setting-up/ls-k8s-secure.asciidoc[]
// Administering
include::administering/ls-k8s-administering.asciidoc[]
// Stack Monitoring
include::administering/ls-k8s-stack-monitoring.asciidoc[]
// Stack Monitoring external
include::administering/ls-k8s-stack-monitoring-external.asciidoc[]
// Stack Monitoring Elastic Cloud
include::administering/ls-k8s-stack-monitoring-cloud.asciidoc[]
// Upgrade Logstash
include::administering/ls-k8s-upgrade.asciidoc[]
// Logstash logging
include::administering/ls-k8s-logging.asciidoc[]
// Recipes
include::ls-k8s-recipes.asciidoc[]
// Troubleshooting
include::troubleshooting/ls-k8s-troubleshooting.asciidoc[]
// Common problems
include::troubleshooting/ls-k8s-common-problems.asciidoc[]
// Troubleshooting methods
include::troubleshooting/ls-k8s-troubleshooting-methods.asciidoc[]

View file

@ -1,17 +0,0 @@
[[ls-k8s-recipes]]
== Recipes
WARNING: This documentation is still in development and may be changed or removed in a future release.
We've compiled a number of recipes to support common use cases for running Logstash in Kubernetes.
Refer to the following sections in the Logstash GitHub repo for sample files that you can use as templates. Details for each recipe can be found in the associated README files.
link:https://www.google.com[Recipe name]::
Brief description.
link:https://www.google.com[Another recipe name]::
Brief description.
link:https://www.google.com[Yet another recipe name]::
Brief description.

View file

@ -1,307 +0,0 @@
[[ls-k8s-configuration-files]]
=== Logstash configuration files in Kubernetes
WARNING: This documentation is still in development. This feature may be changed or removed in a future release.
This guide walks you through configuring {ls} and setting up {ls} pipelines in {k8s}.
* <<qs-pipeline-configuration>>
* <<qs-logstash-yaml>>
* <<qs-jvm-options>>
* <<qs-logging>>
{ls} uses two types of configuration files:
* _pipeline configuration files_, which define the Logstash processing pipeline
* _settings files_ which specify options that control {ls} startup and execution.
{logstash-ref}/config-setting-files.html[{ls} configuration files] topic contains information on these files.
This guide explains how these map to a {k8s} configuration.
[discrete]
[[qs-pipeline-configuration]]
=== Pipeline configuration
This section explains how to configure single and multiple pipeline {ls} configurations.
Note that this section does not cover using {logstash-ref}/logstash-centralized-pipeline-management.html[Centralized Pipeline Management].
Each of these configurations requires creating one or more `ConfigMap` definitions to define the pipeline, creating a volume to be made available to the Logstash container, and then mounting the definition in these volumes.
[discrete]
[[qs-single-pipeline-config]]
==== Single pipeline
The {ls} {logstash-ref}/docker.html[existing docker image] contains a default `pipelines.yml`, which expects a single pipeline, with the definition of that pipeline present in `/usr/share/logstash/pipeline` as either a single file or a collection of files, typically defined as a `ConfigMap` or a series of `ConfigMaps`. Note that a single Kubernetes `ConfigMap` has a size limit of 1MB.
This example contains a simple pipeline definition, with the inputs and outputs split into separate configuration files:
[source,yaml]
--
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline <1>
  labels:
    app: logstash-demo
data:
  logstash-input.conf: | <2>
    input {
      beats {
        port => "5044"
      }
    }
  logstash-output.conf: | <3>
    output {
      elasticsearch {
        hosts => ["https://demo-es-http:9200"]
      }
    }
--
<1> Name of `ConfigMap` to be referenced in `Deployment`.
<2> Creates a `ConfigMap` representing the inputs for a pipeline.
<3> Creates a `ConfigMap` representing the outputs for a pipeline.
Next, define your `Volume` in your `Deployment` template:
[source,yaml]
--
volumes:
  - name: logstash-pipeline
    configMap:
      name: logstash-pipeline
--
and mount the volume in your container:
[source,yaml]
--
volumeMounts:
  - name: logstash-pipeline
    mountPath: /usr/share/logstash/pipeline
--
[float]
[[qs-multiple-pipeline-config]]
==== Multiple pipelines
{ls} uses the `pipelines.yml` file to define {logstash-ref}/multiple-pipelines.html[multiple pipelines].
{ls} in {k8s} requires a `ConfigMap` to represent the content that would otherwise be in `pipelines.yml`.
You can create pipeline configurations inline, or in separate `configMap` files or folders.
*Example: Pipelines.yml `ConfigMap` with an inline pipeline definition*
[source,yaml]
--
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline-yaml <1>
  labels:
    app: logstash-demo
data:
  pipelines.yml: | <2>
    - pipeline.id: test <3>
      pipeline.workers: 1
      pipeline.batch.size: 1
      config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
    - pipeline.id: pipeline2 <4>
      pipeline.workers: 8
      path.config: "/usr/share/logstash/pipeline2"
--
<1> Name of `ConfigMap` to be referenced in `Deployment`.
<2> Defines a `pipelines.yml` `ConfigMap`.
<3> Defines a pipeline inside the `pipelines.yml`.
<4> Defines a pipeline, and a location where the pipeline definitions are stored. See below for these pipeline definitions.
*Example: Pipelines defined in separate files*
[source,yaml]
--
apiVersion: v1
kind: ConfigMap
metadata:
  name: pipeline2
  labels:
    app: logstash-demo
data:
  logstash-input.conf: |
    input {
      beats {
        port => "5044"
      }
    }
  logstash-output.conf: |
    output {
      elasticsearch {
        hosts => ["https://demo-es-http:9200"]
        index => "kube-apiserver-%{+YYYY.MM.dd}"
        cacert => "/usr/share/logstash/config/es_ca.crt"
        user => 'elastic'
        password => '${ELASTICSEARCH_PASSWORD}'
      }
    }
--
[float]
[[expose-pipelines]]
===== Make pipelines available to Logstash
Create the volume(s) in your `Deployment`/`StatefulSet`
[source,yaml]
--
volumes:
  - name: logstash-pipelines-yaml
    configMap:
      name: logstash-pipelines-yaml
  - name: pipeline2
    configMap:
      name: pipeline2
--
and mount the volume(s) in your container spec
[source,yaml]
--
volumeMounts:
  - name: pipeline2
    mountPath: /usr/share/logstash/pipeline2
  - name: logstash-pipelines-yaml
    mountPath: /usr/share/logstash/config/pipelines.yml
    subPath: pipelines.yml
--
[float]
[[qs-settings]]
==== Settings configuration
[float]
[[qs-logstash-yaml]]
===== The logstash.yml file
Unless you specify a configuration file, default values for the {logstash-ref}/logstash-settings-file.html[logstash.yml file] are used.
To override the default values, create a `ConfigMap` with the settings that you want to override:
[source,yaml]
--
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  labels:
    app: logstash-demo
data:
  logstash.yml: |
    api.http.host: "0.0.0.0"
    log.level: info
    pipeline.workers: 2
--
In your `Deployment`/`StatefulSet`, create the `Volume`:
[source,yaml]
--
volumes:
  - name: logstash-config
    configMap:
      name: logstash-config
--
Create the `volumeMount` in the `container`:
[source,yaml]
--
volumeMounts:
  - name: logstash-config
    mountPath: /usr/share/logstash/config/logstash.yml
    subPath: logstash.yml
--
[float]
[[qs-jvm-options]]
==== JVM options
JVM settings are best set using environment variables to override the default settings in `jvm.options`.
This approach ensures that the expected settings from `jvm.options` are set, and only those options that explicitly need to be overridden are.
The JVM settings should be added in the `LS_JAVA_OPTS` environment variable in the container definition of your `Deployment`/`StatefulSet`:
[source,yaml]
--
spec:
  containers:
    - name: logstash
      env:
        - name: LS_JAVA_OPTS
          value: "-Xmx2g -Xms2g"
--
[float]
[[qs-logging]]
==== Logging configuration
By default, we use the `log4j2.properties` from the {ls} docker image, which logs to `stdout` only. To change the log level (for example, to enable debug logging), use the `log.level` option in <<qs-logstash-yaml, logstash.yml>>.
NOTE: You can apply temporary logging changes using the {logstash-ref}/logging.html#_logging_apis[Logging APIs].
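For example, you can temporarily enable debug logging for the {es} output through the logging API. This sketch assumes port forwarding to port `9600` of the {ls} pod (the pod name is a placeholder); the change does not persist across restarts:

[source,sh]
--
kubectl port-forward <logstash-pod> 9600 &
curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
{
  "logger.logstash.outputs.elasticsearch" : "DEBUG"
}'
--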
If you require broader changes that persist across container restarts, you need to create a *full* and correct `log4j2.properties` file, and ensure that it is visible to the {ls} container.
This example uses a `configMap` and the base `log4j2.properties` file from the Docker container, adding debug logging for elasticsearch output plugins:
[source,yaml]
--
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-log4j
  labels:
    app: logstash-demo
data:
  log4j2.properties: |
    status = error
    name = LogstashPropertiesConfig
    appender.console.type = Console
    appender.console.name = plain_console
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
    appender.json_console.type = Console
    appender.json_console.name = json_console
    appender.json_console.layout.type = JSONLayout
    appender.json_console.layout.compact = true
    appender.json_console.layout.eventEol = true
    rootLogger.level = ${sys:ls.log.level}
    rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
    logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
    logger.elasticsearchoutput.level = debug
--
In your `Deployment`/`StatefulSet`, create the `Volume`:
[source,yaml]
--
volumes:
  - name: logstash-log4j
    configMap:
      name: logstash-log4j
--
Create the `volumeMount` in the `container`:
[source,yaml]
--
volumeMounts:
  - name: logstash-log4j
    mountPath: /usr/share/logstash/config/log4j2.properties
    subPath: log4j2.properties
--

View file

@ -1,384 +0,0 @@
[[ls-k8s-quick-start]]
== Quick start
WARNING: This documentation is still in development and may be changed or removed in a future release.
This guide walks you through setting up {ls} to deliver {k8s} logs to {es}.
Tasks include setting up a {k8s} cluster that contains {es} and {kib} to store and visualize the logs.
The logs are monitored by {filebeat}, processed through a {ls} pipeline, and then delivered to the {es} pod in the {k8s} cluster.
We also walk you through configuring local stack monitoring using a {metricbeat} pod to monitor {ls}.
This section includes the following topics:
* <<qs-prerequisites>>
* <<qs-set-up>>
* <<qs-generate-certificate>>
* <<qs-create-elastic-stack>>
* <<qs-view-monitoring-data>>
* <<qs-tidy-up>>
* <<qs-external-elasticsearch>>
* <<qs-learn-more>>
[float]
[[qs-prerequisites]]
=== Prerequisites
You'll need:
* *A running {k8s} cluster.* For local/single node testing we recommend using https://minikube.sigs.k8s.io[Minikube], which allows you to easily run a single node {k8s} cluster on your system.
Check the minikube https://minikube.sigs.k8s.io/docs/start/[Get Started!] section for install and set up instructions.
* *A link:https://github.com/elastic/logstash/blob/main/docsk8s/sample-files/logstash-k8s-qs.zip[small zip file] of config files.* Download and expand this archive into an empty directory on your local system. The files are described in <<sample-configuration-files,Sample configuration files>>.
[float]
[[qs-set-up]]
=== Prepare your environment
[discrete]
[[qs-crd]]
==== Install Elastic CRDs
To simplify installing other elements of the {stack}, we will install Elastic custom resource definition (CRD) files and the `elastic-operator` custom controller, used to manage the Elastic resources in your cluster:
[source,sh]
--
kubectl create -f https://download.elastic.co/downloads/eck/2.4.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.4.0/operator.yaml
--
NOTE: The Elastic CRDs and ECK operator can also be set up using Elastic Helm charts, available at link:https://helm.elastic.co[https://helm.elastic.co].
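For reference, a Helm-based install looks roughly like this (the chart name `eck-operator` and the `elastic-system` namespace are assumptions based on the standard Elastic Helm repository; adjust to your environment):

[source,sh]
--
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
--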
Check the Kubernetes pods status to confirm that the `elastic-operator` pod is running:
[source,sh]
--
kubectl get pods
--
[source,sh]
--
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 4 (12m ago) 13d
--
[float]
[[qs-generate-certificate]]
==== Generate certificate files and create Kubernetes Secret definition
To help you enable secure communication between the {stack} components in your {k8s} cluster, we have provided a sample script to generate the CA certificate files. Details about these files are in <<sample-configuration-files,Sample configuration files>>.
[source,sh]
--
./cert/generate_cert.sh
--
.**Expand to view output**
[%collapsible]
====
[source,sh]
--
Generating RSA private key, 2048 bit long modulus
.......................+++
...........................................................................+++
e is 65537 (0x10001)
Generating RSA private key, 2048 bit long modulus
..............................................+++
.............................................+++
e is 65537 (0x10001)
Signature ok
subject=/C=EU/ST=NA/O=Elastic/CN=ServerHostName
Getting CA Private Key
Generating RSA private key, 2048 bit long modulus
............+++
.......................................................................................................................................+++
e is 65537 (0x10001)
Signature ok
subject=/C=EU/ST=NA/O=Elastic/CN=ClientName
Getting CA Private Key
--
Your `logstash-k8s-qs/cert` folder should now contain a set of certificate files, including `client` certificates for {filebeat} and {metricbeat}, and `server` certificates for {ls}.
The parent `logstash-k8s-qs` directory also has a new `001-secret.yaml` resources file that stores a hash of the client and server certificates.
image::./images/gs-cert-files.png[generated CA certificate files]
====
[float]
[[qs-create-kubernetes-cluster]]
=== Create the {k8s} cluster
As part of this configuration, we will set up {stack} components and {ls}.
[float]
[[qs-create-elastic-stack]]
==== Create the {stack} components
Now that your environment and certificates are set up, it's time to add the {stack}. We will create:
* {es} - you know, for search
* {kib} - for data visualization
* {filebeat} - to monitor container logs
* {metricbeat} - to monitor {ls} and send stack monitoring data to the monitoring cluster.
* Secret definitions containing the keys and certificates we generated earlier.
Run this command to deploy the example using the sample resources provided:
[source,sh]
--
kubectl apply -f "000-elasticsearch.yaml,001-secret.yaml,005-filebeat.yaml,006-metricbeat.yaml,007-kibana.yaml"
--
The {stack} resources are created:
[source,sh]
--
elasticsearch.elasticsearch.k8s.elastic.co/demo created
secret/logstash-beats-tls created
beat.beat.k8s.elastic.co/demo created
beat.beat.k8s.elastic.co/demo configured
kibana.kibana.k8s.elastic.co/demo created
--
[source,sh]
--
kubectl get pods
--
The pods are starting up. You may need to wait a minute or two for all of them to be ready.
[source,sh]
--
NAME READY STATUS RESTARTS AGE
demo-beat-filebeat-7f4d97f69f-qkkbl 1/1 Running 0 42s
demo-beat-metricbeat-59f4b68cc7-9zrrn 1/1 Running 0 39s
demo-es-default-0 1/1 Running 0 41s
demo-kb-d7f585494-vbf6s 1/1 Running 0 39s
elastic-operator-0 1/1 Running 4 (164m ago) 13d
--
[float]
[[qs-set-up-logstash]]
==== Set up {ls}
We have our {stack} set up. Let's set up {ls}.
We typically use <<qs-configmap, ConfigMaps>> to set up {ls} configurations and pipeline definitions in {k8s}.
Check out <<ls-k8s-configuration-files, Logstash Configuration files in Kubernetes>> for more details.
Then we'll create the <<qs-deployment, deployment definition>> for {ls}, covering memory, CPU resources, container ports, timeout settings, and similar, and the <<qs-service, Service definition>>, which opens ports on the {ls} pods to the internal {metricbeat} (for stack monitoring) and {filebeat}.
Let's create a `Deployment`.
Some {ls} configurations--such as those using certain classes of plugins or a persistent queue--should be configured using a `StatefulSet`.
[source,sh]
--
kubectl apply -f "001-configmap.yaml,002-deployment.yaml,003-service.yaml"
--
We should now see the Logstash pod up and running:
[source,sh]
--
kubectl get pods
--
The pods are starting up. You may need to wait a minute or two for all of them to be ready.
[source,sh]
--
NAME READY STATUS RESTARTS AGE
demo-beat-filebeat-7f4d97f69f-qkkbl 1/1 Running 0 42s
demo-beat-metricbeat-59f4b68cc7-9zrrn 1/1 Running 0 39s
demo-es-default-0 1/1 Running 0 41s
demo-kb-d7f585494-vbf6s 1/1 Running 0 39s
elastic-operator-0 1/1 Running 4 (164m ago) 13d
logstash-7974b9ccb9-jd5xl 1/1 Running 0 42s
--
[float]
[[qs-view-data]]
=== View your data
First, enable port forwarding for the {kib} service on port `5601`. Open a second shell window and run:
[source,sh]
--
kubectl port-forward service/demo-kb-http 5601
--
Then, open up a web browser at address `https://localhost:5601`. Depending on your browser you may need to accept the site certificate.
Log in to {kib} using the `elastic` username and password. To obtain the password, run:
[source,sh]
--
kubectl get secret demo-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
--
We are sending two types of data to {es}: {k8s} logs and stack monitoring data.
[float]
[[qs-view-k8s-logs]]
==== View your {k8s} logs
The {filebeat} instance attached to this cluster sends log entries from the `kube-api-server` logs to an index specified in the {ls} configuration.
To verify that this data is indeed being sent to {es}, open the {kib} main menu and select **Management > Dev Tools**, and perform this query:
[source,http request]
--
GET kube-apiserver-*/_count
--
The count rises as events are discovered from the apiserver logs.
[source,json]
--
{
  "count": 89,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  }
}
--
[float]
[[qs-view-monitoring-data]]
==== View the stack monitoring data
Open the {kib} main menu and select **Management**, then **Stack Monitoring**.
Select the {ls} **Overview**, and under the **Nodes** tab select the link for the {ls} node.
image::./images/gs-logstash-node-metrics.png[{ls} metrics data in {kib}]
That's it! The Logstash pod metrics data is flowing through {ls} into {es} and {kib}. You can monitor the JVM Heap, CPU Utilization, and System Load data as it updates in real time.
[float]
[[qs-tidy-up]]
=== Tidy up
After finishing with this demo, you can run the following command to remove all of the created resources:
[source,sh]
--
kubectl delete service,pods,deployment,configmap,secret,beat,elasticsearch,kibana -l app=logstash-demo
--
[float]
[[qs-next-steps]]
=== Next steps
[float]
[[qs-external-elasticsearch]]
==== Send logs to an external {es} instance
You aren't limited to sending data to an {es} cluster that is located in the same {k8s} cluster as {ls}.
You can send data to Elastic Cloud, for example.
[float]
[[qs-send-to-elastic-cloud]]
===== Sending to Elastic Cloud
We need only the {ls}-based components to connect to Elastic Cloud.
You won't need to include the {es} or {kib} components from the earlier examples.
Let's amend the `Deployment`/`StatefulSet` to set `CLOUD_ID` and `API_KEY` environment variables with the appropriate value for your cloud instance.
One way to do this is to create a link:https://kubernetes.io/docs/concepts/configuration/secret/[secret] to store `CLOUD_ID` and `API_KEY`:
[source,yaml]
--
apiVersion: v1
kind: Secret
metadata:
  name: ess-secret
type: Opaque
data:
  cloud_id: PENMT1VEX0lEPg== <1>
  api_key: PEFQSV9LRVk+
--
<1> base64 representation of the `cloud_id` and `api_key` for your Elastic Cloud instance, created using:
+
[source,sh]
--
echo -n '<CLOUD_ID>' | base64
echo -n '<API_KEY>' | base64
--
Mount the secrets in the `Deployment`/`StatefulSet`:
[source,yaml]
--
env:
  - name: CLOUD_ID
    valueFrom:
      secretKeyRef:
        name: ess-secret
        key: cloud_id
  - name: API_KEY
    valueFrom:
      secretKeyRef:
        name: ess-secret
        key: api_key
--
Let's amend the pipeline definition `ConfigMap` to change the destination of the {es} output to the cloud instance.
[source,yaml]
--
output {
  elasticsearch {
    cloud_id => "${CLOUD_ID}"
    api_key => "${API_KEY}"
    ssl => true
  }
}
--
[float]
[[qs-scale-logstash]]
==== Scale Logstash with Horizontal Pod Autoscaler
For a simple Logstash setup without <<ls-k8s-persistent-storage, persistent storage>> or <<ls-k8s-design-for-plugins, plugins that require the storing of local state>>, we can introduce a simple <<qs-autoscaler, horizontal pod autoscaler>>.
Apply the autoscaler:
[source,bash]
--
kubectl apply -f "004-hpa.yaml"
--
NOTE: If you are using more than one {ls} pod, use the https://www.elastic.co/guide/en/beats/metricbeat/current/configuration-autodiscover.html#_kubernetes[beats autodiscover] features to monitor them. Otherwise, only one {ls} pod is monitored.
See the <<monitor-with-ECK,stack monitoring with ECK>> docs for details on how to use autodiscover with {metricbeat} and {ls}.
[float]
[[qs-learn-more]]
==== Learn more
Now that you're familiar with how to get a {ls} monitoring setup running in your Kubernetes environment, here are a few suggested next steps:
* <<ls-k8s-design-for-plugins>>
* <<ls-k8s-sizing>>
* <<ls-k8s-secure>>
* <<ls-k8s-stack-monitoring>>
As well, we have a variety of <<ls-k8s-recipes,recipes>> that you can use as templates to configure an environment to match your specific use case.

View file

@ -1,352 +0,0 @@
[[sample-configuration-files]]
=== Sample configuration files
WARNING: This documentation is still in development and may be changed or removed in a future release.
These configuration files are used in the <<ls-k8s-quick-start,{ls} and Kubernetes quick start>>. You can use them as templates when you configure Logstash together with the rest of the Elastic Stack in a Kubernetes environment.
You can download the files together as a link:https://github.com/elastic/logstash/blob/main/docsk8s/sample-files/logstash-k8s-qs.zip[zip archive].
[[qs-setup-files]]
==== Setup files
These files are used to create certificates and keys required for secure communication between {beats} and {ls}.
They are included for illustration purposes only.
For production environments, supply your own keys and certificates as appropriate.
`cert/generate_cert.sh`::
Generates the `ca.crt`, `client.key`, `client.crt`, `server.key`, and `server.pkcs8.key` used to establish a secure connection between Filebeat and Logstash. The certificates and keys are all contained in the `001-secret.yaml` file that is generated when you run `generate_cert.sh`.
`cert/openssl.conf`::
The OpenSSL configuration used to generate the server certificate for TLS communication between resources.
Running the certificate generation script with this configuration also creates the secrets file `001-secret.yaml`.
We will install the secrets file as we set up the {stack}.
[[qs-logstash-configuration-files]]
==== Logstash configuration files
[[qs-configmap]]
`001-configmap.yaml`::
This file contains the Logstash settings and pipeline configuration:
+
[source,yaml]
--
---
# ConfigMap for logstash pipeline definition
data:
  logstash.conf: | <1>
    input {
      beats {
        port => "5044"
        ssl_enabled => true
        ssl_certificate_authorities => ["/usr/share/logstash/config/ca.crt"]
        ssl_certificate => "/usr/share/logstash/config/server.crt"
        ssl_key => "/usr/share/logstash/config/server.pkcs8.key"
        ssl_client_authentication => "required"
      }
    }
    output {
      elasticsearch {
        hosts => ["https://demo-es-http:9200"]
        index => "kube-apiserver-%{+YYYY.MM.dd}"
        cacert => "/usr/share/logstash/config/es_ca.crt"
        user => 'elastic'
        password => '${ELASTICSEARCH_PASSWORD}'
      }
    }
---
# ConfigMap for logstash.yml definition
data:
  logstash.yml: | <2>
    api.http.host: "0.0.0.0"
--
<1> Definition of {ls} configuration file.
We will refer to this definition later in the deployment file, where we will define volumes.
<2> Definition of {logstash-ref}/logstash-settings-file.html[logstash.yml] file
Define each key/value pair to override defaults. We will refer to this definition later in the deployment file.
[[qs-secrets]]
`001-secret.yaml`::
This secrets file includes certificates and key files required for secure communication between {ls} and the rest of the {stack}. This example was generated by the supplied script, but for your own configuration it should contain the base64 encoded representations of your own certificates and keys.
+
You can generate this file for your own certs and keys by using the `kubectl create secret generic` command:
+
[source,sh]
--
kubectl create secret generic logstash-beats-tls --from-file=ca.crt --from-file=client.crt --from-file=client.key --from-file=server.crt --from-file=server.pkcs8.key --dry-run=client -o yaml | kubectl label -f- --dry-run=client -o yaml --local app=logstash-demo > ../001-secret.yaml
--
+
The command generates a secrets file that resembles this one.
+
[source,yaml]
--
apiVersion: v1
data:
  ca.crt: <BASE64 representation of ca cert, used in beats client and logstash beats input>
  client.crt: <BASE64 representation of beats client cert>
  client.key: <BASE64 representation of beats client key>
  server.crt: <BASE64 representation of server certificate, used in beats input>
  server.pkcs8.key: <BASE64 representation of PKCS8 server key, used in beats input>
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    app: logstash-demo
  name: logstash-beats-tls
--
[[qs-deployment]]
`002-deployment.yaml`::
Contains the configuration definition for {ls}.
+
[source,yaml]
--
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash-demo
  template:
    metadata:
      labels:
        app: logstash-demo
    spec:
      containers:
        - name: logstash
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
          image: {docker-image} <1>
          env:
            - name: LS_JAVA_OPTS <2>
              value: "-Xmx1g -Xms1g"
            - name: ELASTICSEARCH_PASSWORD <11>
              valueFrom:
                secretKeyRef:
                  name: demo-es-elastic-user
                  key: elastic
          resources:
            limits: <3>
              cpu: 2000m
              memory: 2Gi
            requests:
              cpu: 1000m
              memory: 2Gi
          ports: <4>
            - containerPort: 9600
              name: stats
            - containerPort: 5044
              name: beats
          livenessProbe: <5>
            httpGet:
              path: /
              port: 9600
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe: <6>
            httpGet:
              path: /
              port: 9600
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts: <7>
            - name: logstash-pipeline
              mountPath: /usr/share/logstash/pipeline
            - name: logstash-config <8>
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
            - name: es-certs <9>
              mountPath: /usr/share/logstash/config/es_ca.crt
              subPath: ca.crt
            - name: logstash-beats-tls
              mountPath: /usr/share/logstash/config/ca.crt
              subPath: ca.crt
            - name: logstash-beats-tls
              mountPath: /usr/share/logstash/config/server.pkcs8.key
              subPath: server.pkcs8.key
            - name: logstash-beats-tls
              mountPath: /usr/share/logstash/config/server.crt
              subPath: server.crt
      volumes:
        - name: logstash-pipeline <7>
          configMap:
            name: logstash-pipeline
        - name: logstash-config <8>
          configMap:
            name: logstash-config
        - name: es-certs <9>
          secret:
            secretName: demo-es-http-certs-public
        - name: logstash-beats-tls <10>
          secret:
            secretName: logstash-beats-tls
        - name: es-user <11>
          secret:
            secretName: demo-es-elastic-user
--
<1> {ls} {logstash-ref}/docker.html[docker image]
<2> Set non-default JVM settings, such as memory allocation, here in the `LS_JAVA_OPTS` env variable to avoid the need to add a whole `jvm.options` file in a `ConfigMap`
<3> Resource/memory limits for the pod. Refer to Kubernetes documentation to set resources appropriately for each pod. Ensure that each pod has sufficient memory to handle the
heap specified in <2>, allowing enough memory to deal with direct memory. Check out {logstash-ref}/jvm-settings.html#heap-size[Logstash JVM settings] for details.
<4> Expose the necessary ports on the container. Here we are exposing port `5044` for the beats input, and `9600` for the metricbeat instance to query the logstash metrics API for stack monitoring purposes.
<5> Liveness probe to determine whether Logstash is running. Here we point to the Logstash Metrics API, an HTTP based API that will be ready shortly after logstash starts. Note that the endpoint shows no indication that Logstash is active, only that the API is available.
<6> Readiness probe to determine whether Logstash is running. Here we point to the {ls} Metrics API, an HTTP based API that will be ready shortly after {ls} starts. Note that the endpoint shows no indication that {ls} is active, only that the API is available.
<7> The pipeline configuration that we created in <<qs-configmap,the ConfigMap declaration>> needs a `volume` and a `volumeMount`. The `volume` refers to the created <<qs-configmap,config map>>, and the `volumeMount` refers to the created `volume` and mounts it in a location that {ls} will read. Unless a separate `pipelines.yml` file is created by a further `ConfigMap` definition, the expected location of pipeline configurations is `/usr/share/logstash/pipeline`, and the `mountPath` should be set accordingly.
<8> Name of the <<qs-configmap,Logstash configuration>> we created earlier. This file should contain key/value pairs intended to override the default values in {logstash-ref}/logstash-settings-file.html[logstash.yml], using the `flat key syntax` described in that document. To set this up, you need a `volume` and a `volumeMount`. The `volume` refers to the created <<qs-configmap,config map>>, and the `volumeMount` refers to the created `volume` and mounts it in a location that {ls} will read. The `mountPath` should be set to `/usr/share/logstash/config/logstash.yml`.
<9> `Volume` and `VolumeMount` definitions for certificates to use with Elasticsearch. This contains the CA certificate to output data to {es}. Refer to link:https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-tls-certificates.html[TLS certificates] in the {eck} Guide for details.
<10> `Volume` and `VolumeMount` definitions for certificates to use with Beats.
<11> The {es} password is taken from `demo-es-elastic-user` and passed to the Logstash pipeline as an `ELASTICSEARCH_PASSWORD` environment variable. Refer to link:https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-request-elasticsearch-endpoint.html[Access the {es} endpoint] in the {eck} Guide for details.
[[qs-service]]
`003-service.yaml`::
+
This file contains the Service definition, opening up ports on the logstash pods to the internal metricbeat (for stack monitoring) and filebeat in this instance.
[source,yaml]
--
spec:
  type: ClusterIP
  ports:
    - port: 9600 <1>
      name: "stats"
      protocol: TCP
      targetPort: 9600 <1>
    - port: 5044 <2>
      name: "beats"
      protocol: TCP
      targetPort: 5044 <2>
  selector:
    app: logstash-demo
--
<1> Opens port `9600` for {metricbeat} to connect to the {ls} metrics API.
<2> Opens port `5044` for {filebeat} to connect to the {beats} input defined in the <<qs-configmap,ConfigMap>>.
[[qs-additional-logstash-configuration]]
[[qs-autoscaler]]
`004-hpa.yaml`::
+
This file sets up a horizontal pod autoscaler to scale {ls} instances up and down, depending on the load on the {ls} instance(s). See link:https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[kubernetes autoscaler docs] for more details.
[source,yaml]
--
apiVersion: autoscaling/v2 <1>
kind: HorizontalPodAutoscaler
metadata:
  name: logstash
  labels:
    app: logstash-demo
spec:
  minReplicas: 1 <2>
  maxReplicas: 2
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60 <3>
    scaleDown:
      stabilizationWindowSeconds: 180
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: logstash <4>
  metrics:
    - type: Resource <5>
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
--
<1> Requires {k8s} `1.23` and higher.
<2> Specifies the minimum and maximum number of {ls} instances desired for the cluster.
<3> Specifies stabilization windows to avoid scaling the replicas up and down too rapidly.
<4> The `Deployment` created <<qs-deployment, earlier>>.
<5> Scales on CPU and memory utilization, targeting an average utilization of 80%.
[[qs-stack-monitoring-files]]
`006-metricbeat.yaml`::
Enables the {metricbeat} {ls} module and sets it to collect metrics data from `logstash:9600`:
+
[source,yaml]
--
- module: logstash <1>
  metricsets:
    - node
    - node_stats
  period: 10s
  hosts:
    - logstash:9600
  xpack.enabled: true
--
<1> Definition for logstash module, defined under `spec.config.metricbeat.modules`
[[qs-filebeat-configuration]]
`005-filebeat.yaml`::
This file includes the configuration required for a beat to communicate with {ls}.
It includes the {ls} output definition, and makes the generated certs and key files from <<qs-secrets, the secrets file>> available to the beat to enable secure communication with {ls}.
+
[source,yaml]
--
volumes: <1>
  - name: logstash-beats-tls
    secret:
      secretName: logstash-beats-tls
--
<1> Volume definition for certs/keys defined under `deployment.podTemplate.spec`.
+
[source,yaml]
--
volumeMounts: <1>
  - name: logstash-beats-tls
    mountPath: /usr/share/filebeat/ca.crt
    subPath: ca.crt
  - name: logstash-beats-tls
    mountPath: /usr/share/filebeat/client.key
    subPath: client.key
  - name: logstash-beats-tls
    mountPath: /usr/share/filebeat/client.crt
    subPath: client.crt
--
<1> Volume mount definition for certs/keys defined under `deployment.podTemplate.spec.containers`.
+
[source,yaml]
--
output.logstash: <1>
  hosts:
    - "logstash:5044"
  ssl.certificate_authorities: ["/usr/share/filebeat/ca.crt"]
  ssl.certificate: "/usr/share/filebeat/client.crt"
  ssl.key: "/usr/share/filebeat/client.key"
--
<1> Logstash output definition defined under `spec.config`.
[[qs-stack-configuration-files]]
`000-elasticsearch.yaml`::
Configures a single {es} instance to receive output data from {ls}.
`007-kibana.yaml`::
Configures a single {kib} instance to visualize the logs and metrics data.

View file

@ -1,46 +0,0 @@
[[ls-k8s-design-for-plugins]]
=== Design your installation based on plugin usage
WARNING: This documentation is still in development and may be changed or removed in a future release.
Our recommendations for your {ls} Kubernetes installation vary depending on the types of plugins that you plan to use, and their respective requirements.
[[designing-pull-based]]
==== Pull-based plugins
Design recommendations for pull-based plugins depend on whether or not the plugins support autoscaling.
**Autoscaling**
These plugins can autoscale by tracking work done externally to {ls}. Examples include Kafka, Azure Event Hubs in certain configurations, and others.
Recipe link.
**Non-autoscaling**
Description.
Recipe link.
[[designing-push-based]]
==== Push-based plugins
Design recommendations for push-based plugins depend on whether or not the plugins support autoscaling.
**Autoscaling**
These plugins support autoscaling. Examples include Beats, HTTP, and others.
Recipe link.
**Non-autoscaling**
These plugins do not support autoscaling, either because they have a dependency on `sincedb`, or because ...
Recipe link.
**Other resources required**
Certain plugins require additional resources to be available in order for them to run. Examples include the JDBC and JMS plugins, which require JAR files to be available on the `classpath`.
Recipe link.

View file

@ -1,245 +0,0 @@
[[ls-k8s-persistent-storage]]
=== Stateful {ls} for persistent storage
WARNING: This documentation is still in development and may be changed or removed in a future release.
You need {ls} to persist data to disk for certain use cases.
{ls} offers some persistent storage options to help:
* <<persistent-storage-pq,Persistent queue (PQ)>> to absorb bursts of events
* <<persistent-storage-dlq,Dead letter queue (DLQ)>> to accept corrupted events that cannot be processed
* <<persistent-storage-plugins,Persistent storage options in some {ls} plugins>>
For all of these cases, we need to ensure that we can preserve state.
Remember that the {k8s} scheduler can shut down pods at any time and reschedule them on another node. To preserve state, we define our {ls} deployment using a `StatefulSet` rather than a `Deployment`.
[[persistent-storage-statefulset]]
==== Set up StatefulSet
[source,yaml]
--
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash
  labels:
    app: logstash-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash-demo
  serviceName: logstash
  template:
    metadata:
      labels:
        app: logstash-demo
    spec:
      containers:
        - name: logstash
          image: "docker.elastic.co/logstash/logstash:{version}"
          env:
            - name: LS_JAVA_OPTS
              value: "-Xmx1g -Xms1g"
          resources:
            limits:
              cpu: 2000m
              memory: 2Gi
            requests:
              cpu: 1000m
              memory: 2Gi
          ports:
            - containerPort: 9600
              name: stats
          livenessProbe:
            httpGet:
              path: /
              port: 9600
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /
              port: 9600
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: logstash-data <2>
              mountPath: /usr/share/logstash/data
            - name: logstash-pipeline
              mountPath: /usr/share/logstash/pipeline
            - name: logstash-config
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
            - name: logstash-config
              mountPath: /usr/share/logstash/config/pipelines.yml
              subPath: pipelines.yml
      volumes:
        - name: logstash-pipeline
          configMap:
            name: logstash-pipeline
        - name: logstash-config
          configMap:
            name: logstash-config
  volumeClaimTemplates: <1>
    - metadata:
        name: logstash-data
        labels:
          app: logstash-demo
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 2Gi
--
Everything is similar to a `Deployment`, except for the use of `volumeClaimTemplates`.
<1> Requests 2Gi of persistent storage from `PersistentVolumes`.
<2> Mounts the storage at `/usr/share/logstash/data`. This is the default path that {ls} and its plugins use for any persistence needs.
NOTE: The feature of persistent link:https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/[volume expansion] depends on the storage class. Check with your cloud provider.
[[persistent-storage-pq]]
==== Persistent queue (PQ)
You can configure persistent queues globally across all pipelines in `logstash.yml`, with settings for individual pipelines in `pipelines.yml`. Note that individual settings in `pipelines.yml` override those in `logstash.yml`. The queue data store is located at `/usr/share/logstash/data/queue` by default.
To enable {logstash-ref}/persistent-queues.html[PQ] for every pipeline, specify options in `logstash.yml`.
[source,yaml]
--
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
data:
  logstash.yml: |
    api.http.host: "0.0.0.0"
    queue.type: persisted
    queue.max_bytes: 1024mb
...
--
To specify options per pipeline, set them in `pipelines.yml`.
[source,yaml]
--
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
data:
  logstash.yml: |
    api.http.host: "0.0.0.0"
  pipelines.yml: |
    - pipeline.id: fast_ingestion
      path.config: "/usr/share/logstash/pipeline/fast.conf"
      queue.type: persisted
      queue.max_bytes: 1024mb
    - pipeline.id: slow_ingestion
      path.config: "/usr/share/logstash/pipeline/slow.conf"
      queue.type: persisted
      queue.max_bytes: 2048mb
--
[[persistent-storage-dlq]]
==== Dead letter queue (DLQ)
To enable {logstash-ref}/dead-letter-queues.html[dead letter queue], specify options in `logstash.yml`. The default path of DLQ is `/usr/share/logstash/data/dead_letter_queue`.
[source,yaml]
--
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
data:
  logstash.yml: |
    api.http.host: "0.0.0.0"
    dead_letter_queue.enable: true <1>
  pipelines.yml: |
    - pipeline.id: main <2>
      path.config: "/usr/share/logstash/pipeline/main.conf"
    - pipeline.id: dlq <3>
      path.config: "/usr/share/logstash/pipeline/dlq.conf"
--
<1> Enable DLQ for all pipelines that use {logstash-ref}/plugins-outputs-elasticsearch.html[elasticsearch output plugin]
<2> The `main` pipeline sends failed events to the DLQ. Check out the pipeline definition in the next section.
<3> The `dlq` pipeline should consume events from the DLQ, fix errors, and re-send events to {es}. Check out the pipeline definition in the next section.
[source,yaml]
--
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
data:
  main.conf: | <1>
    input {
      exec {
        command => "uptime"
        interval => 5
      }
    }
    output {
      elasticsearch {
        hosts => ["https://hostname.cloud.es.io:9200"]
        index => "uptime-%{+YYYY.MM.dd}"
        user => 'elastic'
        password => 'changeme'
      }
    }
  dlq.conf: | <2>
    input {
      dead_letter_queue {
        path => "/usr/share/logstash/data/dead_letter_queue"
        commit_offsets => true
        pipeline_id => "main"
      }
    }
    filter {
      # Do your fix here
    }
    output {
      elasticsearch {
        hosts => ["https://hostname.cloud.es.io:9200"]
        index => "dlq-%{+YYYY.MM.dd}"
        user => 'elastic'
        password => 'changeme'
      }
    }
--
<1> An example pipeline that tries to send events to a closed index in {es}. To test this functionality manually, use the {ref}/indices-close.html[_close] API to close the index.
<2> This pipeline uses the {logstash-ref}/plugins-inputs-dead_letter_queue.html[dead_letter_queue input plugin] to consume DLQ events. This example sends to a different index, but you can add filter plugins to fix other types of errors that cause insertion failures, such as mapping errors.
[[persistent-storage-plugins]]
==== Plugins that require local storage to track work done
Many Logstash plugins are stateful and need persistent storage to track the current state of the work that they are doing.
Stateful Logstash plugins typically have some kind of `path` setting that needs to be configured, such as `sincedb_path` or `last_run_metadata_path`.
Here is a list of popular plugins that require persistent storage, and therefore a `StatefulSet` with `volumeClaimTemplates` (check out <<persistent-storage-statefulset>>). A sketch of pointing one of these paths at the persistent volume follows the table.
[cols="<,<",options="header",]
|=======================================================================
|Plugin |Settings
|logstash-codec-netflow| {logstash-ref}/plugins-codecs-netflow.html#plugins-codecs-netflow-cache_save_path[cache_save_path]
|logstash-inputs-couchdb_changes| {logstash-ref}/plugins-inputs-couchdb_changes.html#plugins-inputs-couchdb_changes-sequence_path[sequence_path]
|logstash-input-dead_letter_queue| {logstash-ref}/plugins-inputs-dead_letter_queue.html#plugins-inputs-dead_letter_queue-sincedb_path[sincedb_path]
|logstash-input-file| {logstash-ref}/plugins-inputs-file.html#plugins-inputs-file-file_completed_log_path[file_completed_log_path], {logstash-ref}/plugins-inputs-file.html#plugins-inputs-file-sincedb_path[sincedb_path]
|logstash-input-google_cloud_storage| {logstash-ref}/plugins-inputs-google_cloud_storage.html#plugins-inputs-google_cloud_storage-processed_db_path[processed_db_path]
|logstash-input-imap| {logstash-ref}/plugins-inputs-imap.html#plugins-inputs-imap-sincedb_path[sincedb_path]
|logstash-input-jdbc| {logstash-ref}/plugins-inputs-jdbc.html#plugins-inputs-jdbc-last_run_metadata_path[last_run_metadata_path]
|logstash-input-s3| {logstash-ref}/plugins-inputs-s3.html#plugins-inputs-s3-sincedb_path[sincedb_path]
|logstash-filters-aggregate| {logstash-ref}/plugins-filters-aggregate.html#plugins-filters-aggregate-aggregate_maps_path[aggregate_maps_path]
|=======================================================================
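As a sketch of how these settings tie back to the <<persistent-storage-statefulset,StatefulSet>> above, a file input can point its `sincedb_path` at the persistent `logstash-data` volume so that read positions survive pod rescheduling. The log path and sincedb file name here are illustrative:

[source,ruby]
--
input {
  file {
    path => "/usr/share/logstash/files/*.log"                 # illustrative location of the files to read
    sincedb_path => "/usr/share/logstash/data/file-input.sincedb" # stored on the persistent volume
  }
}
--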

View file

@ -1,245 +0,0 @@
[[ls-k8s-secure]]
=== Secure your environment
WARNING: This documentation is still in development and may be changed or removed in a future release.
In order to prepare your environment to be production ready, you'll need to set up secure communication between each of your Elastic resources.
[[security-communication]]
==== Secure communication
[[security-tls]]
===== Setting up TLS
Transport layer security (TLS) helps ensure safe communication between the {stack} components running in {k8s}.
Let's take {filebeat} and {ls} TLS mutual verification as an link:{filebeat-ref}/configuring-ssl-logstash.html[example]. {ls} serves as the server side, while {filebeat} is the client.
Create a `Secret` containing server and client SSL keys:
[source,sh]
--
kubectl create secret generic logstash-beats-tls --from-file=ca.crt --from-file=client.crt --from-file=client.key --from-file=server.crt --from-file=server.pkcs8.key
--
On {ls}, configure the server certificates to the pipeline:
[source,ruby]
--
input {
  beats {
    port => "5044"
    ssl_enabled => true
    ssl_certificate_authorities => ["/usr/share/logstash/config/ca.crt"]
    ssl_certificate => "/usr/share/logstash/config/server.crt"
    ssl_key => "/usr/share/logstash/config/server.pkcs8.key"
    ssl_client_authentication => "required"
  }
}
--
Mount the keys we just created to {ls} `Deployment`:
[source,yaml]
--
volumeMounts:
  - name: logstash-beats-tls
    mountPath: /usr/share/logstash/config/ca.crt
    subPath: ca.crt
  - name: logstash-beats-tls
    mountPath: /usr/share/logstash/config/server.pkcs8.key
    subPath: server.pkcs8.key
  - name: logstash-beats-tls
    mountPath: /usr/share/logstash/config/server.crt
    subPath: server.crt
volumes:
  - name: logstash-beats-tls
    secret:
      secretName: logstash-beats-tls
--
On {filebeat}, configure the client certificates:
[source,yaml]
--
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: demo
spec:
  type: filebeat
  config:
    output.logstash:
      ssl.certificate_authorities: ["/usr/share/filebeat/ca.crt"]
      ssl.certificate: "/usr/share/filebeat/client.crt"
      ssl.key: "/usr/share/filebeat/client.key"
    (...)
  deployment:
    podTemplate:
      spec:
        containers:
          - name: filebeat
            volumeMounts:
              - name: logstash-beats-tls
                mountPath: /usr/share/filebeat/ca.crt
                subPath: ca.crt
              - name: logstash-beats-tls
                mountPath: /usr/share/filebeat/client.key
                subPath: client.key
              - name: logstash-beats-tls
                mountPath: /usr/share/filebeat/client.crt
                subPath: client.crt
        volumes:
          - name: logstash-beats-tls
            secret:
              secretName: logstash-beats-tls
--
[[security-eck-secrets]]
===== Securing connection to {es} on ECK
[[security-eck-secrets-pw]]
====== Authentication
ECK creates a user for every Elastic resource. To access these resources, such as {es}, {ls} needs a username and password.
The default username of {es} is `elastic`. You can also run this command to check the username:
[source,sh]
--
> kubectl describe secret demo-es-elastic-user
Name:         demo-es-elastic-user
Namespace:    default
Labels:       common.k8s.elastic.co/type=elasticsearch
              eck.k8s.elastic.co/credentials=true
              eck.k8s.elastic.co/owner-kind=Elasticsearch
              eck.k8s.elastic.co/owner-name=demo
              eck.k8s.elastic.co/owner-namespace=default
              elasticsearch.k8s.elastic.co/cluster-name=demo
Annotations:  <none>
Type:         Opaque
Data
====
elastic:  24 bytes <1>
--
<1> `elastic` is the username for the resource.
To get the password, set `SecretKeyRef` and pass it as a container environment variable in `Deployment`:
[source,yaml]
--
spec:
  containers:
    - name: logstash
      env:
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: demo-es-elastic-user
              key: elastic
--
[[security-eck-secrets-self-signed]]
====== Using self-signed certificate
If your certificate is issued by a well-known CA, you can skip this section. Otherwise, you need to mount the CA certificate from the `Secret` created by ECK.
[source,yaml]
--
volumeMounts:
  - name: es-certs
    mountPath: /usr/share/logstash/config/es_ca.crt
    subPath: ca.crt
volumes:
  - name: es-certs
    secret:
      secretName: demo-es-http-certs-public
--
[[security-k8s-secret]]
==== Using secrets
NOTE: This is for illustration purposes. In production, managing {k8s} secrets should be done using recognized link:https://kubernetes.io/docs/concepts/security/secrets-good-practices/[good practices] to ensure the protection of sensitive information.
To store sensitive information, such as a password, we can use a {k8s} `Secret`, and reference it as a container environment variable.
Encode confidential data with Base64:
[source,sh]
--
echo -n "changeme" | base64
--
NOTE: Base64 is an encoding scheme, not encryption.
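Anyone with access to the encoded value can reverse it, for example:

[source,sh]
--
echo -n "Y2hhbmdlbWU=" | base64 --decode
--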
Create a `Secret` to hold the result of the encoding:
[source,yaml]
--
apiVersion: v1
kind: Secret
metadata:
  name: logstash-secret
type: Opaque
data:
  ES_PW: Y2hhbmdlbWU=
--
Reference the confidential data in `Deployment`:
[source,yaml]
--
spec:
  containers:
    - name: logstash
      env:
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: logstash-secret
              key: ES_PW
--
[[security-logstash-keystore]]
==== Using the {ls} keystore
{ls} can use keys from the {logstash-ref}/keystore.html[keystore] in place of confidential data when configuring sensitive settings.
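If you don't have a keystore yet, you can create one locally with the `logstash-keystore` tool before packaging it into a `Secret`. A minimal sketch, using `ES_PW` as an assumed key name:

[source,sh]
--
bin/logstash-keystore create
bin/logstash-keystore add ES_PW
--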
To create a `Secret` from an existing keystore file, `logstash.keystore`:
[source,sh]
--
kubectl create secret generic logstash-keystore --from-file=logstash.keystore --dry-run=client -o yaml
--
Mount the `Secret` to the {ls} config directory in `Deployment`:
[source,yaml]
--
apiVersion: apps/v1
kind: Deployment
(...)
spec:
  containers:
    - name: logstash
      env:
        - name: LOGSTASH_KEYSTORE_PASS <1>
          valueFrom:
            secretKeyRef:
              name: logstash-secret
              key: LOGSTASH_KEYSTORE_PASS
      (...)
      volumeMounts:
        - name: logstash-keystore
          mountPath: /usr/share/logstash/config/logstash.keystore
          subPath: logstash.keystore
  volumes:
    - name: logstash-keystore
      secret:
        secretName: logstash-keystore
--
<1> `LOGSTASH_KEYSTORE_PASS` is required when the keystore is protected by a {logstash-ref}/keystore.html#keystore-password[password].
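
The `secretKeyRef` above reads `LOGSTASH_KEYSTORE_PASS` from the `logstash-secret` created earlier. A minimal sketch of that `Secret` with the additional key, where the password value is a placeholder for illustration:

[source,yaml]
--
apiVersion: v1
kind: Secret
metadata:
  name: logstash-secret
type: Opaque
stringData:
  ES_PW: changeme
  LOGSTASH_KEYSTORE_PASS: keystorepassword
--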


@ -1,11 +0,0 @@
[[ls-k8s-setting-up]]
== Setting up {ls} and Kubernetes
++++
<titleabbrev>Setting up</titleabbrev>
++++
WARNING: This documentation is still in development and may be changed or removed in a future release.
The following topics describe important design considerations for your {ls} setup, as well as the steps to get your {ls} and Kubernetes environment up and running.
Note that before putting your environment into production, you should also follow our guidelines for <<ls-k8s-administering,administering your system>>.


@ -1,16 +0,0 @@
[[ls-k8s-sizing]]
=== Sizing {ls} instances
WARNING: This documentation is still in development and may be changed or removed in a future release.
We have a few recommended heuristics to help you determine the optimal memory and queue sizes for your {ls} instances.
[[sizing-jvm-memory-pods]]
==== Memory settings on JVMs and pods
Description...
[[sizing-pd-dlq]]
==== Sizing your {ls} PQ and DLQ
Description...


@ -1,60 +0,0 @@
[[ls-k8s-common-problems]]
=== Common problems
Following are some suggested resolutions to problems that you may encounter when running {ls} in a Kubernetes environment.
* <<problem-keep-restart>>
* <<problem-oom>>
[float]
[[problem-keep-restart]]
=== Logstash keeps restarting
When you check the status of the running Kubernetes pods, {ls} shows continual restarts.
[source,bash]
--
NAMESPACE   NAME                       READY   STATUS    RESTARTS      AGE
default     logstash-f7768c66d-grzbj   0/1     Running   3 (55s ago)   6m32s
--
This can be caused by a few issues:
[float]
[[problem-nometric]]
==== Metrics API not accessible to `readinessProbe`
If the `readinessProbe` cannot reach the health check endpoint, the {ls} process is stopped and restarted continuously. To fix this, set the following in the `logstash.yml` entry of the `ConfigMap`:
[source,yaml]
--
api.http.host: 0.0.0.0
--
[float]
[[problem-delay]]
==== {ls} startup process takes longer than `initialDelaySeconds`
Review the timing settings of the `readinessProbe` and `livenessProbe` to ensure that {ls} has enough time to start up and expose the health check endpoint before the probes begin checking it.
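
For example, a probe configuration along these lines in the container spec gives {ls} more time to start before the first check and allows several failures before a restart. The values are assumptions and should be tuned for your environment:

[source,yaml]
--
readinessProbe:
  httpGet:
    path: /
    port: 9600          # default Logstash API port
  initialDelaySeconds: 60
  periodSeconds: 10
  failureThreshold: 6
livenessProbe:
  httpGet:
    path: /
    port: 9600
  initialDelaySeconds: 120
  periodSeconds: 10
--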
[float]
[[problem-insufficient]]
==== Insufficient CPU or memory to start {ls}
Review CPU and memory usage with `kubectl top pods` (this requires the metrics server to be available in your Kubernetes cluster).
* Set the values of `cpu` and `memory` in your `Deployment` or `StatefulSet` appropriately.
* Ensure that the JVM memory settings are appropriate. The default `Xmx` value is `1g`, and we recommend setting the heap size to no more than 50-75% of the total memory available to the container, as shown in the sketch after this list.
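
A minimal sketch of the corresponding container resources and JVM settings, with illustrative values that follow the 50-75% heap guideline:

[source,yaml]
--
spec:
  containers:
    - name: logstash
      env:
        - name: LS_JAVA_OPTS
          value: "-Xms1g -Xmx1g"   # heap set to 50% of the 2Gi container memory
      resources:
        requests:
          cpu: "1"
          memory: 2Gi
        limits:
          cpu: "2"
          memory: 2Gi
--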
[float]
[[problem-oom]]
=== {ls} stops with OOM errors
The status of {ls} shows `Ready`, but the pod repeatedly stops running.
This situation can be caused by insufficient memory. If {ls} uses more memory than the declared resource limit, Kubernetes shuts down the pod immediately, and the {ls} log does not show any shutdown-related messages.
Run `kubectl get event --watch` or `kubectl describe pod` to check whether the event status shows `OOMKilled`.
The resolution is similar to the remedy for the insufficient CPU or memory problem: review the JVM and memory settings as described in <<problem-insufficient>>.


@ -1,124 +0,0 @@
[[ls-k8s-troubleshooting-methods]]
=== Troubleshooting tips and suggestions
Here are some approaches that you can use to diagnose the state of your {ls} and Kubernetes system, both in the event of any problems, and as part of a day-to-day approach to ensuring that everything is running as expected.
* <<ls-k8s-checking-resources>>
* <<ls-k8s-viewing-logs>>
* <<ls-k8s-connecting-to-a-container>>
* <<ls-k8s-diagnostics>>
* <<ls-k8s-pq-util>>
* <<ls-k8s-pq-drain>>
[float]
[[ls-k8s-checking-resources]]
=== Checking resources
You can use the standard Kubernetes `get` and `describe` commands to quickly gather details about any resources in your {ls} and Kubernetes environment.
[source,bash]
--
kubectl get pod logstash-7477d46bb7-4lcnv

NAME                        READY   STATUS    RESTARTS   AGE
logstash-7477d46bb7-4lcnv   0/1     Pending   0          2m43s
--
If a Pod fails to reach the `Running` status after a few seconds, run this command to get more insights:
[source,bash]
--
kubectl describe pod logstash-7477d46bb7-4lcnv

(...)
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  34s (x2 over 115s)  default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
--
You can check the CPU and memory resources by running this command:
[source,bash]
--
kubectl top pod logstash-7477d46bb7-4lcnv

NAME                        CPU(cores)   MEMORY(bytes)
logstash-7477d46bb7-4lcnv   37m          882Mi
--
[float]
[[ls-k8s-viewing-logs]]
=== Viewing logs
{ls} Docker containers do not create log files by default. They log to standard output.
To view the log, run:
[source,bash]
--
kubectl logs -f logstash-7477d46bb7-4lcnv
--
To enable debug logging, set `log.level: debug` in `logstash.yml` in the `ConfigMap`.
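
For example, a minimal sketch of such a `ConfigMap`, where the name is an assumption and should match the ConfigMap referenced by your `Deployment`:

[source,yaml]
--
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
data:
  logstash.yml: |
    api.http.host: 0.0.0.0
    log.level: debug
--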
[float]
[[ls-k8s-connecting-to-a-container]]
=== Connecting to a container
At times, you may need to connect directly from your command shell into {ls} and other Kubernetes resources.
[source,bash]
--
kubectl exec -it logstash-7477d46bb7-4lcnv -- bash
--
[float]
[[ls-k8s-diagnostics]]
=== Running diagnostics
Thread dumps and heap dumps can be helpful when you are debugging hard problems. Connect to the container, and then run the following commands to gather the diagnostics.
==== Thread dump
[source,bash]
--
jdk/bin/jstack -l 1 > /tmp/jstack_output.txt
--
==== Heap dump
[source,bash]
--
jdk/bin/jcmd 1 GC.heap_dump /tmp/heap_dump.hprof
--
==== Extract file from the container
[source,bash]
--
kubectl cp logstash-7477d46bb7-4lcnv:/tmp/heap_dump.hprof ./heap.hprof
--
[[ls-k8s-pq-util]]
=== Running PQ utilities
In the event of persistent queue corruption, the `pqcheck` and `pqrepair` tools are available for troubleshooting.
Run {logstash-ref}/persistent-queues.html#pqcheck[pqcheck] to identify corrupted files:
[source,bash]
--
kubectl exec logstash-0 -it -- /usr/share/logstash/bin/pqcheck /usr/share/logstash/data/queue/pipeline_id
--
Run {logstash-ref}/persistent-queues.html#pqrepair[pqrepair] to repair the queue:
[source,bash]
--
kubectl exec logstash-0 -it -- /usr/share/logstash/bin/pqrepair /usr/share/logstash/data/queue/pipeline_id
--
[[ls-k8s-pq-drain]]
=== Draining the PQ
{ls} provides the `queue.drain: true` setting, which pauses a graceful shutdown until all events in the persistent queue have been processed.
Special consideration is needed when using `queue.drain: true` on {k8s}. By default, a {k8s} pod has a grace period of 30 seconds to shut down before it is terminated forcefully with a `SIGKILL`, which may cause {ls} to exit before the queue is fully drained.
To avoid {ls} shutting down before the queue is completely drained, we recommend setting `terminationGracePeriodSeconds` to an artificially long period, such as one year, to give {ls} sufficient time to drain the queue when this functionality is required.
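
A minimal sketch of that setting in the pod template of the `Deployment` or `StatefulSet`:

[source,yaml]
--
spec:
  template:
    spec:
      # roughly one year in seconds, so the queue can drain before a SIGKILL
      terminationGracePeriodSeconds: 31536000
--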


@ -1,12 +0,0 @@
[[ls-k8s-troubleshooting]]
== Troubleshooting {ls} and Kubernetes
++++
<titleabbrev>Troubleshooting</titleabbrev>
++++
WARNING: This documentation is still in development and may be changed or removed in a future release.
As you set up and run Logstash in Kubernetes, you may occasionally run into problems. The pages below describe how to resolve some of the more typical problems that can come up, as well as steps that you can use to diagnose issues and assess how your system is running in general.