Mirror of https://github.com/elastic/logstash.git, synced 2025-04-25 07:07:54 -04:00
Doc: Add topic and expand info for in-memory queue (#13246)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
This commit is contained in:
parent 50834d0f2c · commit 44ea102a7e
3 changed files with 67 additions and 1 deletion
@@ -180,6 +180,9 @@ include::static/filebeat-modules.asciidoc[]
 :edit_url: https://github.com/elastic/logstash/edit/{branch}/docs/static/resiliency.asciidoc
 include::static/resiliency.asciidoc[]
 
+:edit_url: https://github.com/elastic/logstash/edit/{branch}/docs/static/mem-queue.asciidoc
+include::static/mem-queue.asciidoc[]
+
 :edit_url: https://github.com/elastic/logstash/edit/{branch}/docs/static/persistent-queues.asciidoc
 include::static/persistent-queues.asciidoc[]
docs/static/mem-queue.asciidoc (vendored, new file, 61 additions)
@@ -0,0 +1,61 @@
[[memory-queue]]
=== Memory queue

By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events.
If Logstash experiences a temporary machine failure, the contents of the memory queue will be lost.
Temporary machine failures are scenarios where Logstash or its host machine are terminated abnormally, but are capable of being restarted.

[[mem-queue-benefits]]
==== Benefits of memory queues

The memory queue might be a good choice if you value throughput over data resiliency.

* Easier configuration
* Easier management and administration
* Faster throughput

[[mem-queue-limitations]]
==== Limitations of memory queues

* Can lose data in abnormal termination
* Don't handle sudden bursts of data well, where extra capacity is needed for {ls} to catch up

TIP: Consider using <<persistent-queues,persistent queues>> to avoid these limitations.

[[sizing-mem-queue]]
==== Memory queue size

Memory queue size is not configured directly.
It is defined by a number of events, the size of which can vary greatly depending on the event payload.

The maximum number of events that can be held in each memory queue is equal to
the value of `pipeline.batch.size` multiplied by the value of
`pipeline.workers`.
This value is called the "inflight count."

NOTE: Each pipeline has its own queue.

See <<tuning-logstash>> for more info on the effects of adjusting `pipeline.batch.size` and `pipeline.workers`.

[[mq-settings]]
===== Settings that affect queue size

These values can be configured in `logstash.yml` and `pipelines.yml`.

pipeline.batch.size::
Number of events to retrieve from inputs before sending to filters+workers.
The default is 125.

pipeline.workers::
Number of workers that will, in parallel, execute the filters+outputs stage of the pipeline.
This value defaults to the number of the host's CPU cores.

[[backpressure-mem-queue]]
==== Back pressure

When the queue is full, Logstash puts back pressure on the inputs to stall data
flowing into Logstash.
This mechanism helps Logstash control the rate of data flow at the input stage
without overwhelming outputs like Elasticsearch.

Each input handles back pressure independently.
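The sizing rule described above (inflight count = `pipeline.batch.size` × `pipeline.workers`) can be illustrated with a short `logstash.yml` fragment. The values below are assumptions chosen for illustration, not recommendations:

```yaml
# logstash.yml -- illustrative values only
pipeline.batch.size: 125   # events per batch pulled from inputs (default: 125)
pipeline.workers: 8        # assumed 8-core host; defaults to the number of CPU cores
# Maximum in-flight events for this pipeline's memory queue
# ("inflight count") = 125 * 8 = 1000
```

With these settings, once roughly 1,000 events are in flight the queue is full and back pressure is applied to the inputs, as described in the back-pressure section.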
docs/static/resiliency.asciidoc (vendored, 4 changed lines)
@ -1,5 +1,7 @@
|
||||||
[[resiliency]]
|
[[resiliency]]
|
||||||
== Data resiliency
|
== Queues and data resiliency
|
||||||
|
|
||||||
|
By default, Logstash uses <<memory-queue,in-memory bounded queues>> between pipeline stages (inputs → pipeline workers) to buffer events.
|
||||||
|
|
||||||
As data flows through the event processing pipeline, Logstash may encounter
|
As data flows through the event processing pipeline, Logstash may encounter
|
||||||
situations that prevent it from delivering events to the configured
|
situations that prevent it from delivering events to the configured
|
||||||
|
|