Document copy semantics of QueueWriter::push method

Fixes #10808
Dan Hermann 2019-05-21 07:49:38 -05:00
parent 9a06b204bf
commit 2d35566ab2
3 changed files with 17 additions and 8 deletions


@@ -241,7 +241,12 @@ example above, the `decode` method simply splits the incoming byte stream on the
specified delimiter. A production-grade codec such as
https://github.com/elastic/logstash/blob/master/logstash-core/src/main/java/org/logstash/plugins/codecs/Line.java[`java-line`]
would not make the simplifying assumption that the end of the supplied byte
-stream corresponded with the end of an event.
+stream corresponded with the end of an event.
+Events should be constructed as instances of `Map<String, Object>` and pushed into the event pipeline via the
+`Consumer<Map<String, Object>>.accept()` method. To reduce allocations and GC pressure, codecs may reuse the same
+map instance by modifying its fields between calls to `Consumer<Map<String, Object>>.accept()` because the event
+pipeline will create events based on a copy of the map's data.
The `flush` method works in coordination with the `decode` method to decode all
remaining events from the specified `ByteBuffer` along with any internal state
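For readers of the codec documentation above, here is a minimal sketch of the map-reuse pattern the added text describes. It is illustrative only and not part of the commit: the `decode(ByteBuffer, Consumer<Map<String, Object>>)` shape follows the surrounding docs, while the class name, field name, and newline-splitting behavior are assumptions.

[source,java]
----
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class ReuseMapCodecSketch {

    // One map reused for every decoded event; this is safe only because the
    // event pipeline copies the map's data inside accept().
    private final Map<String, Object> scratch = new HashMap<>();

    void decode(ByteBuffer buffer, Consumer<Map<String, Object>> eventConsumer) {
        String text = StandardCharsets.UTF_8.decode(buffer).toString();
        for (String line : text.split("\n")) {
            scratch.clear();                // reset fields between events
            scratch.put("message", line);
            eventConsumer.accept(scratch);  // pipeline copies the map's contents
        }
    }
}
----

Without the copy guarantee documented in this commit, the codec would have to allocate a fresh map for every event.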


@@ -189,16 +189,18 @@ public void start(Consumer<Map<String, Object>> consumer) {
The `start` method begins the event-producing loop in an input. Inputs are flexible and may produce events through
many different mechanisms including:
-* a pull mechanism such as periodic queries of external database</li>
-* a push mechanism such as events sent from clients to a local network port</li>
-* a timed computation such as a heartbeat</li>
+* a pull mechanism such as periodic queries of external database
+* a push mechanism such as events sent from clients to a local network port
+* a timed computation such as a heartbeat
* any other mechanism that produces a useful stream of events. Event streams may be either finite or infinite.
If the input produces an infinite stream of events, this method should loop until a stop request is made through
the `stop` method. If the input produces a finite stream of events, this method should terminate when the last
event in the stream is produced or a stop request is made, whichever comes first.
Events should be constructed as instances of `Map<String, Object>` and pushed into the event pipeline via the
-`Consumer<Map<String, Object>>.accept()` method.
+`Consumer<Map<String, Object>>.accept()` method. To reduce allocations and GC pressure, inputs may reuse the same
+map instance by modifying its fields between calls to `Consumer<Map<String, Object>>.accept()` because the event
+pipeline will create events based on a copy of the map's data.
[float]
==== Stop and awaitStop methods
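As an illustration of the input-side guidance above, here is a minimal sketch of a heartbeat-style input that loops until stopped and reuses a single map between `accept()` calls. It is not from the commit: the `start`/`stop`/`awaitStop` shape mirrors the surrounding docs, while the class name, field names, and one-second interval are assumptions.

[source,java]
----
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.function.Consumer;

public class HeartbeatInputSketch {

    private volatile boolean stopped = false;
    private final CountDownLatch done = new CountDownLatch(1);
    // Reused across accept() calls; the pipeline copies the map's data.
    private final Map<String, Object> event = new HashMap<>();

    public void start(Consumer<Map<String, Object>> consumer) {
        try {
            while (!stopped) {                      // loop until stop() is requested
                event.clear();
                event.put("message", "heartbeat");
                event.put("sequence", System.nanoTime());
                consumer.accept(event);
                Thread.sleep(1000);                 // assumed one-second interval
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();     // treat interruption as a stop request
        } finally {
            done.countDown();                       // lets awaitStop() return
        }
    }

    public void stop() {
        stopped = true;
    }

    public void awaitStop() throws InterruptedException {
        done.await();
    }
}
----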


@@ -3,14 +3,16 @@ package org.logstash.execution.queue;
import java.util.Map;
/**
-* Writes to the Queue.
+* Writes to the queue.
*/
public interface QueueWriter {
/**
* Pushes a single event to the Queue, blocking indefinitely if the Queue is not ready for a
-* write.
-* @param event Logstash Event Data
+* write. Implementations of this interface must produce events from a deep copy of the supplied
+* map because upstream clients of this interface may reuse map instances between calls to push.
+*
+* @param event Logstash event data
*/
void push(Map<String, Object> event);
}
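To make the new contract concrete, here is a minimal sketch of what a conforming implementation could look like. This is not Logstash's actual queue writer: the backing `BlockingQueue` and the recursive copy helper are assumptions, shown only to illustrate copying the caller's map before it is enqueued.

[source,java]
----
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy QueueWriter-style class: copies the caller's map before enqueueing it,
// so callers are free to reuse their map instance after push() returns.
public class CopyingQueueWriterSketch {

    private final BlockingQueue<Map<String, Object>> queue = new LinkedBlockingQueue<>();

    public void push(Map<String, Object> event) {
        Map<String, Object> copy = deepCopy(event);
        try {
            queue.put(copy);   // blocks if the queue is not ready for a write
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private static Map<String, Object> deepCopy(Map<String, Object> source) {
        Map<String, Object> copy = new HashMap<>(source.size());
        for (Map.Entry<String, Object> entry : source.entrySet()) {
            copy.put(entry.getKey(), copyValue(entry.getValue()));
        }
        return copy;
    }

    @SuppressWarnings("unchecked")
    private static Object copyValue(Object value) {
        if (value instanceof Map) {
            return deepCopy((Map<String, Object>) value);   // nested maps are copied recursively
        }
        if (value instanceof List) {
            List<Object> copy = new ArrayList<>();
            for (Object item : (List<Object>) value) {
                copy.add(copyValue(item));
            }
            return copy;
        }
        return value;   // scalar values (String, Number, Boolean) can be shared
    }
}
----

A shallow `new HashMap<>(event)` would not be enough if callers also reuse nested maps or lists between calls to push, which is why this sketch recurses into container values.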