This spec had a race: the `start_agent` invocation it depends on doesn't wait
until the state has converged at least once.
An example failure is here: https://logstash-ci.elastic.co/job/elastic+logstash+6.x+multijob-unix-compatibility/os=debian/121/console
The message was:
```
22:34:08 Failures:
22:34:08
22:34:08 1) LogStash::Agent Agent execute options when `config.reload.automatic` is set to`FALSE` and successfully load the config converge only once
22:34:08 Failure/Error: expect(source_loader.fetch_count).to eq(1)
22:34:08
22:34:08 expected: 1
22:34:08 got: 0
22:34:08
22:34:08 (compared using ==)
22:34:08 # /var/lib/jenkins/workspace/elastic+logstash+6.x+multijob-unix-compatibility/os/debian/logstash-core/spec/logstash/agent/converge_spec.rb:120:in `block in (root)'
22:34:08 # /var/lib/jenkins/workspace/elastic+logstash+6.x+multijob-unix-compatibility/os/debian/spec/spec_helper.rb:50:in `block in /var/lib/jenkins/workspace/elastic+logstash+6.x+multijob-unix-compatibility/os/debian/spec/spec_helper.rb'
22:34:08 # /var/lib/jenkins/workspace/elastic+logstash+6.x+multijob-unix-compatibility/os/debian/spec/spec_helper.rb:43:in `block in /var/lib/jenkins/workspace/elastic+logstash+6.x+multijob-unix-compatibility/os/debian/spec/spec_helper.rb'
22:34:08 # /var/lib/jenkins/workspace/elastic+logstash+6.x+multijob-unix-compatibility/os/debian/vendor/bundle/jruby/2.3.0/gems/rspec-wait-0.0.9/lib/rspec/wait.rb:46:in `block in /var/lib/jenkins/workspace/elastic+logstash+6.x+multijob-unix-compatibility/os/debian/vendor/bundle/jruby/2.3.0/gems/rspec-wait-0.0.9/lib/rspec/wait.rb'
22:34:08 # /var/lib/jenkins/workspace/elastic+logstash+6.x+multijob-unix-compatibility/os/debian/lib/bootstrap/rspec.rb:13:in `<main>'
22:34:08
22:34:08 Finished in 5 minutes 27 seconds (files took 8.75 seconds to load)
22:34:08 2877 examples, 1 failure, 5 pending
```
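The shape of the fix is to make `start_agent` block until the first converge cycle has completed before any expectations run. A minimal sketch, assuming the agent exposes some convergence signal (`converged_once?` below is a hypothetical stand-in, not the actual agent API):

```ruby
require "stud/task"

# Illustrative only: block start_agent until the agent has converged once,
# so the spec never races the agent's background execute thread.
def start_agent(agent)
  agent_task = Stud::Task.new { agent.execute }

  # Poll with a deadline instead of asserting immediately.
  deadline = Time.now + 60
  until agent.converged_once? # hypothetical predicate
    raise "agent did not converge within 60s" if Time.now > deadline
    sleep 0.1
  end

  agent_task
end
```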
Fixes #9643
These agent specs occasionally time out. This removes our use of Ruby's
unreliable `Timeout` and also lengthens the waiting period before failing.
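As a sketch of the replacement approach, a deadline-based polling helper avoids `Timeout.timeout`'s thread interruption entirely; the helper name, timeout values, and the `running_pipelines?` call in the usage note are illustrative:

```ruby
# Illustrative helper: poll a condition against a deadline instead of using
# Timeout.timeout, which can interrupt a thread at arbitrary (unsafe) points.
def wait_until(timeout: 120, interval: 0.1)
  deadline = Time.now + timeout
  until yield
    raise "condition not met within #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

# usage in a spec, e.g.:
# wait_until(timeout: 120) { agent.running_pipelines? }
```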
Fixes #9639
I'm actually not sure if this matters, but it seems like it won't hurt,
and it is a saner ordering of things. I have a suspicion that in some cases
the agents get stuck and don't shut down because of this ordering, but I can't
prove it.
Fixes #9628
The `path.plugins` setting is a multi-valued option; Clamp uses an
implicit appender method for adding values to an array.
Because we patch Clamp to use LogStash::SETTINGS underneath,
we must customize reading/writing as well as appending values.
The appender wasn't being customized, so this patch redefines
`append_method` so it appends an element to the setting's value.
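Roughly, the change looks like the sketch below; the module layout mirrors Clamp's option declarations, but treat the names as approximate rather than the exact patch:

```ruby
require "clamp"

# Approximate sketch of the appender customization: when Clamp invokes the
# appender for a multi-valued option, push the new element onto the
# underlying LogStash setting rather than onto a plain instance variable.
module Clamp
  module Option
    module StrictDeclaration
      def define_appender_for(option)
        define_method(option.append_method) do |value|
          current = LogStash::SETTINGS.get_value(option.attribute_name)
          LogStash::SETTINGS.set_value(option.attribute_name, current + [value])
        end
      end
    end
  end
end
```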
Fixes #8887
* Add getter for pipeline protocol
* Refactoring: extract bare string into constant
* Report pipeline protocol to Monitoring as part of logstash_state docs
* Report all protocols, not just first one
* Refactoring: rename constant to be more descriptive
* De-dupe protocols list before sending to monitoring
* Checking for single protocol
* Adding missing require; this was being masked and was uncovered when this test suite was run by itself
* Raising error if multiple protocols are specified (see the sketch after this list)
* Adding back comma
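A minimal sketch of the de-dupe and single-protocol check from the bullets above; the method name and error message are illustrative, not the actual monitoring code:

```ruby
# Illustrative: collapse duplicate protocols and reject ambiguous configs
# before reporting a single protocol to Monitoring.
def protocol_for_monitoring(protocols)
  unique = protocols.compact.uniq
  if unique.size > 1
    raise LogStash::ConfigurationError,
          "Multiple protocols specified for monitoring: #{unique.inspect}"
  end
  unique.first
end
```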
Prior to this, a single worker could slurp down multiple shutdown messages; this prevents that from happening by using a flag that can't be over-consumed.
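A sketch of the flag-based approach under JRuby (class and method names are illustrative): a shared `AtomicBoolean` reads the same however many times it is checked, unlike N shutdown messages on a queue that one worker can drain.

```ruby
require "java"

# Illustrative: a shared AtomicBoolean cannot be over-consumed the way
# queued shutdown messages can, so one greedy worker can no longer starve
# the other workers of their shutdown signal.
class WorkerCoordinator
  def initialize
    @shutdown_requested = java.util.concurrent.atomic.AtomicBoolean.new(false)
  end

  def request_shutdown
    @shutdown_requested.set(true)
  end

  # every worker sees the same value, no matter how often it polls
  def shutdown_requested?
    @shutdown_requested.get
  end
end
```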
Fixes #9285