Mirror of https://github.com/elastic/logstash.git, synced 2025-04-24 06:37:19 -04:00

parent 7f0d529c13
commit 5ee5431707

1 changed file with 15 additions and 11 deletions
docs/static/dead-letter-queues.asciidoc (vendored) | 26 +++++++++++++++-----------
@@ -19,8 +19,9 @@ information about the plugin that wrote the event, and the timestamp for when
 the event entered the dead letter queue.
 
 To process events in the dead letter queue, you simply create a Logstash
-pipeline configuration that uses the `dead_letter_queue` input plugin
-to read from the queue.
+pipeline configuration that uses the
+<<plugins-inputs-dead_letter_queue,`dead_letter_queue` input plugin>> to read
+from the queue.
 
 image::static/images/dead_letter_queue.png[Diagram showing pipeline reading from the dead letter queue]
 
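For context, a minimal pipeline of the kind this paragraph describes might look like the sketch below. The `path` value is an assumption about the local install (it should point at the `dead_letter_queue` directory under your `path.data`); the options used are documented `dead_letter_queue` input options.

[source,logstash]
--------------------------------------------------------------------------------
input {
  dead_letter_queue {
    # Assumed location: <path.data>/dead_letter_queue on a default install
    path => "/var/lib/logstash/dead_letter_queue"
    # Read the queue written by the pipeline named "main"
    pipeline_id => "main"
  }
}

output {
  # Print each recovered event to standard output for inspection
  stdout { codec => rubydebug }
}
--------------------------------------------------------------------------------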
@@ -72,12 +73,13 @@ this setting.
 ==== Processing Events in the Dead Letter Queue
 
 When you are ready to process events in the dead letter queue, you create a
-pipeline that uses the `dead_letter_queue` input plugin to read from the dead
-letter queue. The pipeline configuration that you use depends, of course, on
-what you need to do. For example, if the dead letter queue contains events that
-resulted from a mapping error in Elasticsearch, you can create a pipeline that
-reads the "dead" events, removes the field that caused the mapping issue, and
-re-indexes the clean events into Elasticsearch.
+pipeline that uses the
+<<plugins-inputs-dead_letter_queue,`dead_letter_queue` input plugin>> to read
+from the dead letter queue. The pipeline configuration that you use depends, of
+course, on what you need to do. For example, if the dead letter queue contains
+events that resulted from a mapping error in Elasticsearch, you can create a
+pipeline that reads the "dead" events, removes the field that caused the mapping
+issue, and re-indexes the clean events into Elasticsearch.
 
 The following example shows a simple pipeline that reads events from the dead
 letter queue and writes the events, including metadata, to standard output:
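The "including metadata" part is worth an illustration: the input stores the failure details under `[@metadata][dead_letter_queue]`, which `rubydebug` only prints when asked. A sketch of such a pipeline, again assuming a default data directory:

[source,logstash]
--------------------------------------------------------------------------------
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"  # assumed data directory
    commit_offsets => true   # remember the read position across restarts
    pipeline_id => "main"
  }
}

output {
  stdout {
    # metadata => true also prints [@metadata][dead_letter_queue], which
    # records which plugin wrote the event and why it failed
    codec => rubydebug { metadata => true }
  }
}
--------------------------------------------------------------------------------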
@@ -121,8 +123,10 @@ queue, it will continue to run and process new events as they stream into the
 queue. This means that you do not need to stop your production system to handle
 events in the dead letter queue.
 
-NOTE: Events emitted from the dead letter queue input plugin will not be resubmitted to the
-dead letter queue if they cannot be processed correctly
+NOTE: Events emitted from the
+<<plugins-inputs-dead_letter_queue,`dead_letter_queue` input plugin>> plugin
+will not be resubmitted to the dead letter queue if they cannot be processed
+correctly.
 
 [[dlq-timestamp]]
 ==== Reading From a Timestamp
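The `[[dlq-timestamp]]` section visible in this hunk covers the input's `start_timestamp` option, which skips queue entries older than a given point in time. A sketch, with an illustrative ISO8601 value:

[source,logstash]
--------------------------------------------------------------------------------
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"  # assumed data directory
    # Only emit events that entered the queue at or after this instant
    start_timestamp => "2017-06-06T23:40:37"
  }
}
--------------------------------------------------------------------------------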
@@ -206,7 +210,7 @@ output {
 }
 --------------------------------------------------------------------------------
 
-<1> The `dead_letter_queue` input reads from the dead letter queue.
+<1> The <<plugins-inputs-dead_letter_queue,`dead_letter_queue` input>> reads from the dead letter queue.
 <2> The `mutate` filter removes the problem field called `location`.
 <3> The clean event is sent to Elasticsearch, where it can be indexed because
 the mapping issue is resolved.
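The three callouts refer to the file's full example pipeline, which the diff clips. A sketch of its shape follows; the field path, hosts, and plugin options are placeholders for illustration, not necessarily the file's exact values:

[source,logstash]
--------------------------------------------------------------------------------
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"  # <1> read the "dead" events
  }
}

filter {
  mutate {
    remove_field => "[geoip][location]"  # <2> drop the field behind the mapping error
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]  # <3> placeholder cluster address
  }
}
--------------------------------------------------------------------------------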