Commit graph

13 commits

Author SHA1 Message Date
Toby McLaughlin
9b32b441f1
Doc: remove PQ on input for Output Isolator (#13537)
The Output Isolator Pattern doesn't need a persisted queue on the input
pipeline to work. It just needs one on every output pipeline.
2021-12-17 17:49:52 -05:00
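The setup this commit describes might look like the following pipelines.yml sketch (ports, addresses, and output plugins here are hypothetical, chosen only for illustration): the persisted queue is configured on each output pipeline, while the intake pipeline needs none.

```yaml
- pipeline.id: intake                # no persisted queue needed here
  config.string: |
    input { beats { port => 5044 } }
    output { pipeline { send_to => [es, s3] } }
- pipeline.id: buffered-es
  queue.type: persisted              # PQ on each output pipeline
  config.string: |
    input { pipeline { address => es } }
    output { elasticsearch { hosts => ["localhost:9200"] } }
- pipeline.id: buffered-s3
  queue.type: persisted
  config.string: |
    input { pipeline { address => s3 } }
    output { s3 { bucket => "my-bucket" } }
```

With this layout, a stalled S3 output fills only its own persisted queue and cannot block delivery to Elasticsearch.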
Karen Metts
63ed2da5ec
Doc: Expand PQ content for pipeline-pipeline (#13319) 2021-11-18 20:07:24 -05:00
Luca Belluccini
5de9b237e3 Better wording thanks to Andrea Selva
Fixes #11685
2020-04-01 13:09:13 +00:00
Luca Belluccini
b2332cb015 Clarify behavior in case of PQ full & isolator pattern
Fixes #11685
2020-04-01 13:09:13 +00:00
Karen Metts
4b47f28e40 Restructure configuration content
Fixes #11310
2019-11-18 20:39:46 +00:00
Joao Duarte
af7e047fbf remove mention of pipeline to pipeline being Beta
Fixes #11150
2019-09-19 10:05:19 +00:00
Dan Hermann
d61aee6a2c Clarify behavior of ensure_delivery flag
Fixes #10754
2019-05-06 12:33:13 +00:00
Karen Metts
712ba6cbf1 Add note that pline-pline also supports files
Fixes #10590
2019-03-27 18:43:31 +00:00
Alex Scoble
0ecdc95e42 Quotes around pipeline names with dashes
Added quotes around pipeline references that contain dashes, since Logstash will otherwise throw an error.

Fixes #9903
2018-09-04 20:57:51 +00:00
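The quoting rule this commit documents applies to references like the following sketch (the pipeline name is hypothetical); without the quotes, Logstash rejects the dashed name:

```
output {
  pipeline {
    send_to => ["my-downstream-pipeline"]
  }
}

input {
  pipeline {
    address => "my-downstream-pipeline"
  }
}
```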
Andrew Cholakian
9311f8b8d0 The initial implementation of inter-pipeline comms doesn't handle inter-pipeline dependencies correctly.
It just blocks and doesn't handle the concurrency situation. One can think of the network of connected pipelines as a DAG (our docs explicitly ask users not to create cycles). In fact, there are two different correct answers to the pipeline shutdown and reload problems.

When reloading pipelines we should assume the user understands whatever changes they're making to the topology. If a downstream pipeline is to be removed, we can assume that's intentional. We don't lose any data from upstream pipelines since those block until that downstream pipeline is restored. To accomplish this none of the `PipelineBus` methods block by default.

When shutting down Logstash we must: 1.) not lose in-flight data, and 2.) not deadlock. The only way to do both is to shut down the pipelines in order. All state changes happen simultaneously on all pipelines via multithreading. As a result we don't need to implement a topological sort or any other dependency-ordering algorithm; we simply issue a shutdown to all pipelines, and the dependent ones block waiting for their upstream pipelines.

This patch also correctly handles the case where multiple signals cause pipeline actions to be created simultaneously. If we see two concurrent creates or stops, one of those actions becomes a noop.

Currently the Logstash plugin API has lifecycle methods for starting and stopping, but not reloading. We need to call a different method when a `pipeline` input is stopped due to a pipeline reload vs an agent shutdown. Ideally, we would enrich our plugin API. In the interest of expedience here, however, I added a flag to the `PipelineBus` that changes the shutdown mode of `pipeline` inputs, to be either blocking or non-blocking. It is switched to blocking if a shutdown is triggered.

This also reverts b78d32dede in favor of a better, more concurrent approach without mutexes.

This is a forward port of https://github.com/elastic/logstash/pull/9650 to master / 6.x

Fixes #9656
2018-05-24 13:59:39 +00:00
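The shutdown/reload scheme this commit message describes can be sketched in miniature. This is a hedged toy model: all names are hypothetical, and it is not Logstash's actual `PipelineBus` implementation, only an illustration of the single blocking/non-blocking flag the message mentions.

```python
import threading

class ToyPipelineBus:
    """Toy model of the scheme described above. Downstream 'pipeline'
    inputs register an address; senders either block until that address
    exists (shutdown mode, so no in-flight data is lost) or give up
    immediately (reload mode, where a removed downstream is assumed
    to be intentional)."""

    def __init__(self):
        self.addresses = {}                  # address -> delivered events
        self.cond = threading.Condition()
        self.blocking = False                # flipped on when a full shutdown begins

    def listen(self, address):
        """A downstream pipeline input comes up and registers itself."""
        with self.cond:
            self.addresses[address] = []
            self.cond.notify_all()           # wake any blocked senders

    def unlisten(self, address):
        """Non-blocking by default: a reload may remove a downstream."""
        with self.cond:
            self.addresses.pop(address, None)

    def send(self, address, event):
        """Deliver one event; True on delivery, False if dropped."""
        with self.cond:
            if self.blocking:
                while address not in self.addresses:
                    self.cond.wait()         # shutdown: wait for downstream
            if address in self.addresses:
                self.addresses[address].append(event)
                return True
            return False                     # reload: downstream is gone
```

Dependent pipelines simply block in `send` until their downstream is available again, which yields the same effect as an explicit topological shutdown order without ever computing one.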
Karen Metts
f6eccf081b Fix forked path exp
Fixes #9419
2018-04-26 20:07:46 +00:00
Karen Metts
bb823a5c58
Doc for pipeline-to-pipeline communication (#9385)
* Doc for pipeline-to-pipeline

* Incorporate review comments

* Add link for collector pattern
2018-04-17 19:53:02 -04:00
Andrew Cholakian
a1c0e417e5 Support for inter-pipeline comms with a new pipeline input/output
This also makes the load / reload cycle of pipelines threadsafe
and concurrent in the Agent class.

Fixes #9225
2018-04-10 23:48:58 +00:00