Merge remote-tracking branch 'upstream/master'

Jeff Buchbinder 2011-08-01 10:36:07 -04:00
commit 2f1063b56b
4 changed files with 71 additions and 9 deletions

@@ -15,3 +15,5 @@ Contributors:
* kjoconnor
* Evgeny Zislis (kesor)
* Johan Venter
* Jeff Buchbinder (freemed)
* Dan Peterson (dpiddy)

@@ -0,0 +1,60 @@
---
title: Logging tools comparisons - logstash
layout: content_right
---
# Logging tools comparison
The information below is provided on a best-effort basis and is not intended
as a complete source of truth. If anything here is unclear or incorrect, please
email the logstash-users list (or send a pull request with the fix). :)

Where feasible, this document will also provide information on how you can use
logstash with these other projects.

# logstash

Primary goal: Make log/event data and analytics accessible.

Overview: Where your logs come from, how you store them, or what you do with
them is up to you. Logstash exists to help make such actions easier and faster.
It provides you a simple event pipeline for taking events and logs from any
input, manipulating them with filters, and sending them to any output. Inputs
can be files, network sockets, message brokers, and so on. Filters include
date and string parsers, grep-like matching, and more. Outputs include data
stores (elasticsearch, mongodb, etc.), messaging systems (amqp, stomp, etc.),
and network targets (tcp, syslog).

It also provides a web interface for doing search and analytics on your
logs.
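
For illustration, here is a minimal sketch of such a pipeline (the path, type
label, and pattern are just examples, not requirements):

    input {
      file {
        # watch a log file; any other input (tcp, amqp, ...) works the same way
        type => "syslog"
        path => [ "/var/log/messages" ]
      }
    }

    filter {
      grok {
        # parse each line into structured fields
        type => "syslog"
        pattern => "%{SYSLOGLINE}"
      }
    }

    output {
      # stdout is handy for testing; swap in elasticsearch, amqp, etc. for real use
      stdout { }
    }
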
# graylog2
<http://graylog2.org/>
TBD
You can use graylog2 with logstash via the 'gelf' output, which sends logstash
events to a graylog2 server. This gives you logstash's excellent input and
filter features while still letting you use the graylog2 web interface.
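
For example, the output side might be as simple as this sketch (the hostname
is a placeholder for your graylog2 server):

    output {
      gelf {
        # send events as GELF messages to graylog2
        host => "graylog2.example.com"
      }
    }
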
# whoops
<http://www.whoopsapp.com/>
TBD
A logstash output to whoops is coming soon - <https://logstash.jira.com/browse/LOGSTASH-133>

# flume
<https://github.com/cloudera/flume/wiki>
Flume is primarily a transport system aimed at reliably copying logs from
application servers to HDFS.

You can use it with logstash by configuring a flume syslog sink to send logs
to a logstash syslog input.
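
On the logstash side, that might look like this sketch (use whatever port your
flume syslog sink is pointed at):

    input {
      syslog {
        # listen for syslog messages relayed by flume
        type => "syslog-from-flume"
        port => 514
      }
    }
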
# scribe
TBD

@@ -25,15 +25,15 @@ require "logstash/outputs/base"
# "nagios_host", "%{@source_host}",
# "nagios_service", "the name of your nagios service check"
# ]
# }
# }
# }
# }
#
# output{
#   nagios {
#     # only process events with this tag
#     tags => "nagios-update"
#   }
# }
class LogStash::Outputs::Nagios < LogStash::Outputs::Base
NAGIOS_CRITICAL = 2
NAGIOS_WARN = 1

@@ -2,4 +2,4 @@ HAPROXYDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME}.%{INT:milliseconds}
HAPROXYTERMINATIONSTATE [CAPRIcs-][RQCHDLT-][NIDV-][NIPRD-]
# parse an haproxy 'httplog' line
HAPROXYHTTP %{SYSLOGDATE:date} %{IPORHOST:server} %{SYSLOGPROG}: %{IP:clientip}:%{INT:clientport} \[%{HAPROXYDATE:haproxydate}\] %{NOTSPACE:proxyname} %{NOTSPACE}/%{IPORHOST:backend} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{INT:time_duration} %{INT:response} %{INT:bytes} - - %{HAPROXYTERMINATIONSTATE:terminationstate} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn} %{INT:srv_queue}/%{INT:backend_queue} "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:version}"
HAPROXYHTTP %{SYSLOGTIMESTAMP:timestamp} %{IPORHOST:server} %{SYSLOGPROG}: %{IP:clientip}:%{INT:clientport} \[%{HAPROXYDATE:haproxydate}\] %{NOTSPACE:proxyname} %{NOTSPACE}/%{IPORHOST:backend} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{INT:time_duration} %{INT:response} %{INT:bytes} - - %{HAPROXYTERMINATIONSTATE:terminationstate} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn} %{INT:srv_queue}/%{INT:backend_queue} "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:version}"
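
For reference, a grok filter could apply this pattern along these lines (the
type label is illustrative):

    filter {
      grok {
        type => "haproxy"
        pattern => "%{HAPROXYHTTP}"
      }
    }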