Merge branch 'master' of github.com:logstash/logstash

Conflicts:
	Makefile
Jordan Sissel 2013-09-02 22:39:25 -07:00
commit 48ca389837
16 changed files with 211 additions and 44 deletions

View file

@@ -15,6 +15,8 @@
the prior and undocumented syntax for field access (was 'foo.bar' and is
now '[foo][bar]'). Learn more about this here:
https://logstash.net/docs/latest/configuration#fieldreferences
- A saner hash syntax in the logstash config is now supported. It uses the
perl/ruby hash-rocket syntax: { "key" => "value", ... } (LOGSTASH-728). See
the sketch after this list.
- ElasticSearch version 0.90.3 is included. (#486, Gang Chen)
- The elasticsearch plugin now uses the bulk index api which should result
in lower cpu usage as well as higher performance than the previous
@@ -28,8 +30,16 @@
tcp, redis, rabbitmq) from serialization (gzip text, json, msgpack, etc).
- Improved error messages that try to be helpful. If you see bad or confusing
error messages, it is a bug, so let us know! (Patch by Nick Ethier)
- The old 'plugin status' concept has been replaced by 'milestones'
(LOGSTASH-1137)
- SIGHUP should cause logstash to reopen its logfile if you are using the
--log flag
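For reference, the two config-syntax changes above look roughly like this
(a minimal sketch; the field names and mutate options are just for
illustration):

    filter {
      mutate {
        # hash-rocket hash syntax (LOGSTASH-728):
        add_field => { "origin_host" => "%{[source][host]}" }
        # where %{[source][host]} uses the new [foo][bar] field references
      }
    }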
## inputs
- new: s3: reads files from s3 (#537, patch by Mathieu Guillaume)
- feature: imap: now marks emails as read (#542, Raffael Schmid)
- feature: imap: lets you delete read email (#591, Jonathan Van Eenwyk)
- feature: rabbitmq: now well-supported again (patches by Michael Klishin)
- bugfix: gelf: work around gelf parser errors (#476, patch by Chris McCoy)
- broken: the twitter input is disabled because the twitter stream v1 api is
no longer supported and I couldn't find a replacement library that works
@@ -54,7 +64,7 @@
- feature: grok now defaults 'singles' to true, meaning captured fields are
stored as single values in most cases instead of the old behavior of being
captured as an array of values.
- new: json_encoder filter (Ralph Meijer)
- new: json_encoder filter (#554, patch by Ralph Meijer)
- new: cipher: gives you many options for encrypting fields (#493, patch by
saez0pub)
- feature: kv: new settings include_fields and exclude_fields. (patch by
@@ -63,17 +73,32 @@
(#491, patch by Richard Pijnenburg)
- feature: dns: now accepts custom nameservers to query (#495, patch by
Nikolay Bryskin)
- feature: dns: now accepts a timeout setting (#507, patch by Jay Luker)
- bugfix: ruby: multiple ruby filter instances now work (#501, patch by
Nikolay Bryskin)
- feature: uuid: new filter to add a uuid to each event (#531, Tomas Doran)
- feature: useragent: added 'prefix' setting to prefix field names created
by this filter. (#524, patch by Jay Luker)
- bugfix: mutate: strip works now (#590, Jonathan Van Eenwyk)
## outputs
- feature: rabbitmq: now well-supported again (patches by Michael Klishin)
- improvement: stomp: vhost support (Patch by Matt Dainty)
- feature: elasticsearch: now uses the bulk index api and supports
a tunable bulk flushing size (see the sketch after this list).
- feature: elasticsearch_http: will now flush when idle instead of always
waiting for a full buffer. This helps in slow-sender situations such
as testing by hand.
- feature: irc: add messages_per_second tunable (LOGSTASH-962)
- bugfix: emails: restored initial really useful documentation
- bugfix: email: restored initial really useful documentation
- improvement: emails: allow @message, @source, @... in match (LOGSTASH-826,
LOGSTASH-823)
- feature: email: can now set Reply-To (#540, Tim Meighen)
- feature: mongodb: replica sets are supported (#389, patch by Mathias Gug)
- new: s3: New plugin to write to amazon S3 (#439, patch by Mattia Peterle)
- feature: statsd: now supports 'set' metrics (#513, patch by David Warden)
- feature: sqs: now supports batching (#522, patch by AaronTheApe)
- feature: ganglia: add slope and group settings (#583, patch by divanikus)
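As a rough illustration of the elasticsearch bulk-flush tunable mentioned
above (the option name flush_size is an assumption here; check the plugin
docs for the exact setting):

    output {
      elasticsearch {
        # buffer this many events before sending one bulk index request
        flush_size => 100
      }
    }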
1.1.13 (May 28, 2013)
## general

View file

@@ -33,7 +33,7 @@ endif
TESTS=$(wildcard spec/support/*.rb spec/filters/*.rb spec/examples/*.rb spec/codecs/*.rb spec/conditionals/*.rb spec/event.rb spec/jar.rb)
#spec/outputs/graphite.rb spec/outputs/email.rb)
default:
@echo "Make targets you might be interested in:"
@echo " flatjar -- builds the flatjar jar"
@echo " flatjar-test -- runs the test suite against the flatjar"
@@ -117,7 +117,7 @@ vendor/geoip: | vendor
$(QUIET)mkdir $@
$(GEOIP): | vendor/geoip
$(QUIET)wget -q -O $@.tmp.gz $(GEOIP_URL)
$(QUIET)$(DOWNLOAD_COMMAND) $@.tmp.gz $(GEOIP_URL)
$(QUIET)gzip -dc $@.tmp.gz > $@.tmp
$(QUIET)mv $@.tmp $@
@@ -136,7 +136,7 @@ vendor/bundle: | vendor $(JRUBY)
$(QUIET)GEM_HOME=./vendor/bundle/jruby/1.9/ GEM_PATH= $(JRUBY_CMD) --1.9 ./gembag.rb logstash.gemspec
@# Purge old version of json
#$(QUIET)GEM_HOME=./vendor/bundle/jruby/1.9/ GEM_PATH= $(JRUBY_CMD) --1.9 -S gem uninstall json -v 1.6.5
@# Purge old versions of gems installed because gembag doesn't do
@# dependency resolution correctly.
$(QUIET)GEM_HOME=./vendor/bundle/jruby/1.9/ GEM_PATH= $(JRUBY_CMD) --1.9 -S gem uninstall addressable -v 2.2.8
@# uninstall the newer ffi (1.1.5 vs 1.3.1) because that's what makes
@@ -173,7 +173,7 @@ build/monolith: compile copy-ruby-files vendor/jar/graphtastic-rmiclient.jar
$(QUIET)mkdir -p $@/META-INF/services/
$(QUIET)find $$PWD/vendor/bundle $$PWD/vendor/jar -name '*.jar' \
| xargs $(JRUBY_CMD) extract_services.rb -o $@/META-INF/services
@# copy openssl/lib/shared folders/files to root of jar
@#- need this for openssl to work with JRuby
$(QUIET)mkdir -p $@/openssl
$(QUIET)mkdir -p $@/jopenssl
@@ -332,7 +332,7 @@ sync-jira-components: $(addprefix create/jiracomponent/,$(subst lib/logstash/,,$
-$(QUIET)$(JIRACLI) --action run --file tmp_jira_action_list --continue > /dev/null 2>&1
$(QUIET)rm tmp_jira_action_list
create/jiracomponent/%:
$(QUIET)echo "--action addComponent --project LOGSTASH --name $(subst create/jiracomponent/,,$@)" >> tmp_jira_action_list
## Release note section (up to you if/how/when to integrate in docs)
@@ -341,19 +341,19 @@ create/jiracomponent/%:
# - issues for FixVersion from JIRA
# Note on used Github logic
# We parse the commits between the last tag (should be the last release) and HEAD
# to extract all the notice about merged pull requests.
# Note on used JIRA release note URL
# The JIRA release note lists all issues (even open ones)
# with Fix Version assigned to target version
# So one must verify manually that there is no open issue left (TODO use JIRACLI)
# This is the ID for a version item in JIRA; it can be obtained via the CLI
# or through the Version URL https://logstash.jira.com/browse/LOGSTASH/fixforversion/xxx
JIRA_VERSION_ID=10820
releaseNote:
-$(QUIET)rm releaseNote.html
$(QUIET)curl -si "https://logstash.jira.com/secure/ReleaseNote.jspa?version=$(JIRA_VERSION_ID)&projectId=10020" | sed -n '/<textarea.*>/,/<\/textarea>/p' | grep textarea -v >> releaseNote.html
$(QUIET)ruby pull_release_note.rb

View file

@@ -51,6 +51,12 @@ class LogStash::Agent < Clamp::Command
raise LogStash::ConfigurationError, message
end # def fail
def report(message)
# Print to stdout just in case we're logging to a file
puts message
@logger.log(message) if log_file
end
# Run the agent. This method is invoked after clamp parses the
# flags given to this program.
def execute
@@ -64,12 +70,18 @@ class LogStash::Agent < Clamp::Command
return 0
end
# temporarily send logs to stdout as well if a --log is specified
# and stdout appears to be a tty
show_startup_errors = log_file && STDOUT.tty?
if show_startup_errors
stdout_logs = @logger.subscribe(STDOUT)
end
configure
# You must specify a config_string or config_path
if config_string.nil? && config_path.nil?
puts help
fail(I18n.t("logstash.agent.missing-configuration"))
fail(help + "\n", I18n.t("logstash.agent.missing-configuration"))
end
if @config_path
@@ -104,17 +116,22 @@ class LogStash::Agent < Clamp::Command
pipeline.configure("filter-workers", filter_workers)
@logger.unsubscribe(stdout_logs) if show_startup_errors
# TODO(sissel): Get pipeline completion status.
pipeline.run
return 0
rescue LogStash::ConfigurationError => e
puts I18n.t("logstash.agent.error", :error => e)
@logger.unsubscribe(stdout_logs) if show_startup_errors
report I18n.t("logstash.agent.error", :error => e)
return 1
rescue => e
puts I18n.t("oops", :error => e)
puts e.backtrace if @logger.debug? || $DEBUGLIST.include?("stacktrace")
@logger.unsubscribe(stdout_logs) if show_startup_errors
report I18n.t("oops", :error => e)
report e.backtrace if @logger.debug? || $DEBUGLIST.include?("stacktrace")
return 1
ensure
@log_fd.close if @log_fd
Stud::untrap("INT", trap_id) unless trap_id.nil?
end # def execute
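The startup-error handling above leans on Cabin returning a subscription
handle from subscribe(); a minimal sketch of that pattern, assuming Cabin
behaves as the diff suggests:

    require "cabin"

    logger = Cabin::Channel.new
    # mirror log events to stdout while starting up...
    subscription = logger.subscribe(STDOUT)
    logger.warn("starting pipeline")
    # ...then stop mirroring once startup has finished
    logger.unsubscribe(subscription)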
@@ -201,19 +218,20 @@ class LogStash::Agent < Clamp::Command
end
if !log_file.nil?
if log_file
# TODO(sissel): Implement file output/rotation in Cabin.
# TODO(sissel): Catch exceptions, report sane errors.
begin
file = File.new(path, "a")
@log_fd.close if @log_fd
@log_fd = File.new(path, "a")
rescue => e
fail(I18n.t("logstash.agent.configuration.log_file_failed",
:path => path, :error => e))
end
puts "Sending all output to #{path}."
puts "Sending logstash logs to #{path}."
@logger.unsubscribe(@logger_subscription) if @logger_subscription
@logger_subscription = @logger.subscribe(file)
@logger_subscription = @logger.subscribe(@log_fd)
else
@logger.subscribe(STDOUT)
end
@@ -264,4 +282,5 @@ class LogStash::Agent < Clamp::Command
end
return config
end # def load_config
end # class LogStash::Agent

View file

@@ -0,0 +1,75 @@
require "logstash/filters/base"
require "logstash/namespace"
require "ipaddr"
# The CIDR filter is for checking IP addresses in events against a list of
# network blocks that might contain them. Multiple addresses can be checked
# against multiple networks; any match succeeds. Upon success, additional
# tags and/or fields can be added to the event.
class LogStash::Filters::CIDR < LogStash::Filters::Base
config_name "cidr"
plugin_status "experimental"
# The IP address(es) to check with. Example:
#
# filter {
# %PLUGIN% {
# add_tag => [ "testnet" ]
# address => [ "%{src_ip}", "%{dst_ip}" ]
# network => [ "192.0.2.0/24" ]
# }
# }
config :address, :validate => :array, :default => []
# The IP network(s) to check against. Example:
#
# filter {
# %PLUGIN% {
# add_tag => [ "linklocal" ]
# address => [ "%{clientip}" ]
# network => [ "169.254.0.0/16", "fe80::/64" ]
# }
# }
config :network, :validate => :array, :default => []
public
def register
# Nothing
end # def register
public
def filter(event)
return unless filter?(event)
address = @address.collect do |a|
begin
IPAddr.new(event.sprintf(a))
rescue ArgumentError => e
@logger.warn("Invalid IP address, skipping", :address => a, :event => event)
nil
end
end
address.compact!
network = @network.collect do |n|
begin
IPAddr.new(event.sprintf(n))
rescue ArgumentError => e
@logger.warn("Invalid IP network, skipping", :network => n, :event => event)
nil
end
end
network.compact!
# Try every combination of address and network, first match wins
address.product(network).each do |a, n|
@logger.debug("Checking IP inclusion", :address => a, :network => n)
if n.include?(a)
filter_matched(event)
return
end
end
end # def filter
end # class LogStash::Filters::CIDR
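A usage sketch for the new filter, based on the examples in its own doc
comments (the src_ip field is assumed to exist on incoming events):

    filter {
      cidr {
        address => [ "%{src_ip}" ]
        network => [ "10.0.0.0/8", "192.168.0.0/16" ]
        add_tag => [ "internal" ]
      }
    }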

View file

@@ -92,7 +92,6 @@ class LogStash::Filters::Xml < LogStash::Filters::Base
begin
doc = Nokogiri::XML(value)
rescue => e
p :failed => value
event.tag("_xmlparsefailure")
@logger.warn("Trouble parsing xml", :source => @source, :value => value,
:exception => e, :backtrace => e.backtrace)

View file

@@ -63,7 +63,7 @@ class LogStash::Inputs::Log4j < LogStash::Inputs::Base
event["logger_name"] = log4j_obj.getLoggerName
event["thread"] = log4j_obj.getThreadName
event["class"] = log4j_obj.getLocationInformation.getClassName
event["file"] = log4j_obj.getLocationInformation.getFileName + ":" + log4j_obj.getLocationInformation.getLineNumber,
event["file"] = log4j_obj.getLocationInformation.getFileName + ":" + log4j_obj.getLocationInformation.getLineNumber
event["method"] = log4j_obj.getLocationInformation.getMethodName
event["NDC"] = log4j_obj.getNDC if log4j_obj.getNDC
event["stack_trace"] = log4j_obj.getThrowableStrRep.to_a.join("\n") if log4j_obj.getThrowableInformation

View file

@@ -2,7 +2,7 @@ require "logstash/namespace"
require "cabin"
require "logger"
class LogStash::Logger
attr_accessor :target
public
@@ -19,7 +19,7 @@ class LogStash::Logger
# causes Cabin to subscribe to STDOUT maaaaaany times.
subscriptions = @channel.instance_eval { @subscribers.count }
@channel.subscribe(@target) unless subscriptions > 0
# Set default loglevel to WARN unless $DEBUG is set (run with 'ruby -d')
@level = $DEBUG ? :debug : :warn
if ENV["LOGSTASH_DEBUG"]
@@ -46,8 +46,7 @@ class LogStash::Logger
def self.setup_log4j(logger)
require "java"
#p = java.util.Properties.new(java.lang.System.getProperties())
p = java.util.Properties.new
properties = java.util.Properties.new
log4j_level = "WARN"
case logger.level
when :debug
@@ -57,33 +56,33 @@ class LogStash::Logger
when :warn
log4j_level = "WARN"
end # case level
p.setProperty("log4j.rootLogger", "#{log4j_level},logstash")
properties.setProperty("log4j.rootLogger", "#{log4j_level},logstash")
# TODO(sissel): This is a shitty hack to work around the fact that
# LogStash::Logger isn't used anymore. We should fix that.
target = logger.instance_eval { @subscribers }.values.first.instance_eval { @io }
case target
when STDOUT
p.setProperty("log4j.appender.logstash",
properties.setProperty("log4j.appender.logstash",
"org.apache.log4j.ConsoleAppender")
p.setProperty("log4j.appender.logstash.Target", "System.out")
properties.setProperty("log4j.appender.logstash.Target", "System.out")
when STDERR
p.setProperty("log4j.appender.logstash",
properties.setProperty("log4j.appender.logstash",
"org.apache.log4j.ConsoleAppender")
p.setProperty("log4j.appender.logstash.Target", "System.err")
properties.setProperty("log4j.appender.logstash.Target", "System.err")
else
p.setProperty("log4j.appender.logstash",
properties.setProperty("log4j.appender.logstash",
"org.apache.log4j.FileAppender")
p.setProperty("log4j.appender.logstash.File", target)
properties.setProperty("log4j.appender.logstash.File", target.path)
end # case target
p.setProperty("log4j.appender.logstash.layout",
properties.setProperty("log4j.appender.logstash.layout",
"org.apache.log4j.PatternLayout")
p.setProperty("log4j.appender.logstash.layout.conversionPattern",
properties.setProperty("log4j.appender.logstash.layout.conversionPattern",
"log4j, [%d{yyyy-MM-dd}T%d{HH:mm:ss.SSS}] %5p: %c: %m%n")
org.apache.log4j.LogManager.resetConfiguration
org.apache.log4j.PropertyConfigurator.configure(p)
org.apache.log4j.PropertyConfigurator.configure(properties)
logger.debug("log4j java properties setup", :log4j_level => log4j_level)
end
end # class LogStash::Logger

View file

@@ -25,7 +25,6 @@ class LogStash::Pipeline
begin
eval(code)
rescue => e
p e.backtrace[1]
raise
end

View file

@@ -143,7 +143,6 @@ class LogStash::Plugin
return klass
rescue LoadError => e
puts e
raise LogStash::PluginLoadingError,
I18n.t("logstash.pipeline.plugin-loading-error", :type => type, :name => name, :path => path)
end # def load

View file

@@ -1,2 +1,54 @@
# NetScreen firewall logs
NETSCREENSESSIONLOG %{SYSLOGTIMESTAMP:date} %{IPORHOST:device} %{IPORHOST}: NetScreen device_id=%{WORD:device_id}%{DATA}: start_time=%{QUOTEDSTRING:start_time} duration=%{INT:duration} policy_id=%{INT:policy_id} service=%{DATA:service} proto=%{INT:proto} src zone=%{WORD:src_zone} dst zone=%{WORD:dst_zone} action=%{WORD:action} sent=%{INT:sent} rcvd=%{INT:rcvd} src=%{IPORHOST:src_ip} dst=%{IPORHOST:dst_ip} src_port=%{INT:src_port} dst_port=%{INT:dst_port} src-xlated ip=%{IPORHOST:src_xlated_ip} port=%{INT:src_xlated_port} dst-xlated ip=%{IPORHOST:dst_xlated_ip} port=%{INT:dst_xlated_port} session_id=%{INT:session_id} reason=%{GREEDYDATA:reason}
#== Cisco ASA ==
CISCO_TAGGED_SYSLOG ^<%{POSINT:syslog_pri}>%{CISCOTIMESTAMP:timestamp}( %{SYSLOGHOST:sysloghost})?: %%{CISCOTAG:ciscotag}:
CISCOTIMESTAMP %{MONTH} +%{MONTHDAY}(?: %{YEAR})? %{TIME}
CISCOTAG [A-Z0-9]+-%{INT}-(?:[A-Z0-9_]+)
# ASA-2-106001
CISCOFW106001 (?<direction>Inbound) (?<protocol>TCP) connection (?<action>denied) from %{IP:src_ip}/%{INT:src_port} to %{IP:dst_ip}/%{INT:dst_port} flags %{GREEDYDATA:tcp_flags} on interface %{GREEDYDATA:interface}
# ASA-2-106006, ASA-2-106007, ASA-2-106010
CISCOFW106006_106007_106010 (?<action>Deny) (?<direction>inbound) %{WORD:protocol} (from|src) %{IP:src_ip}/%{INT:src_port}(\(%{DATA:src_fwuser}\))? (to|dst) %{IP:dst_ip}/%{INT:dst_port}(\(%{DATA:dst_fwuser}\))? (on interface %{DATA:interface}|due to (?<reason>DNS (Response|Query)))
# ASA-3-106014
CISCOFW106014 (?<action>Deny) (?<direction>inbound) (?<protocol>icmp) src %{DATA:src_interface}:%{IP:src_ip}(\(%{DATA:src_fwuser}\))? dst %{DATA:dst_interface}:%{IP:dst_ip}(\(%{DATA:dst_fwuser}\))? \(type %{INT:icmp_type}, code %{INT:icmp_code}\)
# ASA-6-106015
CISCOFW106015 (?<action>Deny) (?<protocol>TCP) \((?<policy_id>no connection)\) from %{IP:src_ip}/%{INT:src_port} to %{IP:dst_ip}/%{INT:dst_port} flags %{DATA:tcp_flags} on interface %{GREEDYDATA:interface}
# ASA-1-106021
CISCOFW106021 (?<action>Deny) %{WORD:protocol} reverse path check from %{IP:src_ip} to %{IP:dst_ip} on interface %{GREEDYDATA:interface}
# ASA-4-106023
CISCOFW106023 (?<action>Deny) (?<protocol>tcp|udp|icmp) src %{DATA:src_interface}:%{IP:src_ip}(/%{INT:src_port})?(\(%{DATA:src_fwuser}\))? dst %{DATA:dst_interface}:%{IP:dst_ip}(/%{INT:dst_port})?(\(%{DATA:dst_fwuser}\))?( \(type %{INT:icmp_type}, code %{INT:icmp_code}\))? by access-group %{DATA:policy_id} \[%{DATA:hashcode1}, %{DATA:hashcode2}\]
# ASA-5-106100
CISCOFW106100 access-list %{WORD:policy_id} %{WORD:action} %{WORD:protocol} %{DATA:src_interface}/%{IP:src_ip}\(%{INT:src_port}\)(\(%{DATA:src_fwuser}\))? -> %{DATA:dst_interface}/%{IP:dst_ip}\(%{INT:dst_port}\)(\(%{DATA:src_fwuser}\))? hit-cnt %{INT:hit_count} (?<interval>(first hit)|(%{INT}-second interval)) \[%{DATA:hashcode1}, %{DATA:hashcode2}\]
# ASA-6-110002
CISCOFW110002 (?<action>Failed to locate egress interface) for %{WORD:protocol} from %{DATA:src_interface}:%{IP:src_ip}/%{INT:src_port} to %{IP:dst_ip}/%{INT:dst_port}
# ASA-6-302010
CISCOFW302010 %{INT:connection_count} in use, %{INT:connection_count_max} most used
# ASA-6-302013, ASA-6-302014, ASA-6-302015, ASA-6-302016
CISCOFW302013_302014_302015_302016 (?<action>Built|Teardown)( (?<direction>inbound|outbound))? (?<protocol>TCP|UDP) connection %{INT:connection_id} for %{DATA:src_interface}:%{IP:src_ip}/%{INT:src_port}( \(%{IP:src_mapped_ip}/%{INT:src_mapped_port}\))?(\(%{DATA:src_fwuser}\))? to %{DATA:dst_interface}:%{IP:dst_ip}/%{INT:dst_port}( \(%{IP:dst_mapped_ip}/%{INT:dst_mapped_port}\))?(\(%{DATA:dst_fwuser}\))?( duration %{TIME:duration} bytes %{INT:bytes})?( (?<reason>%{WORD}( %{WORD})*))?( \(%{DATA:user}\))?
# ASA-6-302020, ASA-6-302021
CISCOFW302020_302021 (?<action>Built|Teardown)( (?<direction>inbound|outbound))? (?<protocol>ICMP) connection for faddr %{IP:dst_ip}/%{INT:icmp_seq_num}(?:\(%{DATA:fwuser}\))? gaddr %{IP:src_xlated_ip}/%{INT:icmp_code_xlated} laddr %{IP:src_ip}/%{INT:icmp_code}( \(%{DATA:user}\))?
# ASA-6-305011
CISCOFW305011 (?<action>Built|Teardown) (?<xlate_type>static|dynamic) (?<protocol>TCP|UDP|ICMP) translation from %{DATA:src_interface}:%{IP:src_ip}(/%{INT:src_port})?(\(%{DATA:src_fwuser}\))? to %{DATA:src_xlated_interface}:%{IP:src_xlated_ip}/%{DATA:src_xlated_port}
# ASA-3-313001, ASA-3-313004, ASA-3-313008
CISCOFW313001_313004_313008 (?<action>Denied) (?<protocol>ICMP(v6)?) type=%{INT:icmp_type}, code=%{INT:icmp_code} from %{IP:src_ip} on interface %{DATA:interface}( to %{IP:dst_ip})?
# ASA-4-313005
CISCOFW313005 (?<action>No matching connection) for ICMP error message: (?<err_protocol>icmp) src %{DATA:err_src_interface}:%{IP:err_src_ip}(\(%{DATA:err_src_fwuser}\))? dst %{DATA:err_dst_interface}:%{IP:err_dst_ip}(\(%{DATA:err_dst_fwuser}\))? \(type %{INT:err_icmp_type}, code %{INT:err_icmp_code}\) on %{DATA:interface} interface\. Original IP payload: %{WORD:protocol} src %{IP:orig_src_ip}/%{INT:orig_src_port}(\(%{DATA:orig_src_fwuser}\))? dst %{IP:orig_dst_ip}/%{INT:orig_dst_port}(\(%{DATA:orig_dst_fwuser}\))?
# ASA-4-402117
CISCOFW402117 (?<protocol>IPSEC): Received a non-IPSec packet \(protocol= %{WORD:orig_protocol}\) from %{IP:src_ip} to %{IP:dst_ip}
# ASA-4-402119
CISCOFW402119 (?<protocol>IPSEC): Received an %{WORD:orig_protocol} packet \(SPI= %{DATA:spi}, sequence number= %{DATA:seq_num}\) from %{IP:src_ip} \(user= %{DATA:user}\) to %{IP:dst_ip} that failed anti-replay checking
# ASA-4-419001
CISCOFW419001 (?<action>Dropping) (?<protocol>TCP) packet from %{DATA:src_interface}:%{IP:src_ip}/%{INT:src_port} to %{DATA:dst_interface}:%{IP:dst_ip}/%{INT:dst_port}, reason: %{GREEDYDATA:reason}
# ASA-4-419002
CISCOFW419002 (?<action>Duplicate (?<protocol>TCP) SYN) from %{DATA:src_interface}:%{IP:src_ip}/%{INT:src_port} to %{DATA:dst_interface}:%{IP:dst_ip}/%{INT:dst_port} with different initial sequence number
# ASA-4-500004
CISCOFW500004 (?<action>Invalid transport field) for protocol=%{WORD:protocol}, from %{IP:src_ip}/%{INT:src_port} to %{IP:dst_ip}/%{INT:dst_port}
# ASA-6-602303, ASA-6-602304
CISCOFW602303_602304 (?<protocol>IPSEC): An (?<direction>inbound|outbound) %{GREEDYDATA:tunnel_type} SA \(SPI= %{DATA:spi}\) between %{IP:src_ip} and %{IP:dst_ip} \(user= %{DATA:user}\) has been (?<action>created|deleted)
# ASA-7-710001, ASA-7-710002, ASA-7-710003, ASA-7-710005, ASA-7-710006
CISCOFW710001_710002_710003_710005_710006 %{WORD:protocol} (?:request|access) (?<action>requested|permitted|denied by ACL|discarded) from %{IP:src_ip}/%{INT:src_port} to %{DATA:dst_interface}:%{IP:dst_ip}/%{INT:dst_port}
# ASA-6-713172
CISCOFW713172 Group = %{GREEDYDATA:group}, IP = %{IP:src_ip}, Automatic NAT Detection Status:\s+Remote end\s*%{DATA:is_remote_natted}\s*behind a NAT device\s+This\s+end\s*%{DATA:is_local_natted}\s*behind a NAT device
# ASA-4-733100
CISCOFW733100 \[\s*(?<drop_type>[^\]]+)\] drop %{DATA:drop_rate_id} exceeded. Current burst rate is %{INT:drop_rate_current_burst} per second, max configured rate is %{INT:drop_rate_max_burst}; Current average rate is %{INT:drop_rate_current_avg} per second, max configured rate is %{INT:drop_rate_max_avg}; Cumulative total count is %{INT:drop_total_count}
#== End Cisco ASA ==
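To apply these patterns, a grok filter along these lines would match ASA
connection build/teardown messages (a sketch; it assumes the raw log line
is in the standard message field):

    filter {
      grok {
        match => [ "message", "%{CISCOFW302013_302014_302015_302016}" ]
      }
    }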

View file

@@ -30,7 +30,7 @@ describe LogStash::Event do
insist { subject.sprintf("%{+%s}") } == "1356998400"
end
it "should report a time with %{+format} syntax" do
it "should report a time with %{+format} syntax", :if => RUBY_ENGINE == "jruby" do
insist { subject.sprintf("%{+YYYY}") } == "2013"
insist { subject.sprintf("%{+MM}") } == "01"
insist { subject.sprintf("%{+HH}") } == "00"

View file

@@ -1,6 +1,6 @@
require "test_utils"
describe "fail2ban logs" do
describe "fail2ban logs", :if => RUBY_ENGINE == "jruby" do
extend LogStash::RSpec
# The logstash config goes here.

View file

@@ -1,6 +1,6 @@
require "test_utils"
describe "receive graphite input" do
describe "receive graphite input", :if => RUBY_ENGINE == "jruby" do
extend LogStash::RSpec
# The logstash config goes here.

View file

@@ -1,6 +1,6 @@
require "test_utils"
describe "apache common log format" do
describe "apache common log format", :if => RUBY_ENGINE == "jruby" do
extend LogStash::RSpec
# The logstash config goes here.

View file

@@ -1,6 +1,6 @@
require "test_utils"
describe "parse syslog" do
describe "parse syslog", :if => RUBY_ENGINE == "jruby" do
extend LogStash::RSpec
config <<-'CONFIG'

View file

@@ -1,6 +1,6 @@
require "test_utils"
describe "http dates" do
describe "http dates", :if => RUBY_ENGINE == "jruby" do
extend LogStash::RSpec
config <<-'CONFIG'
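The :if => RUBY_ENGINE == "jruby" guards added across these specs use
RSpec's built-in conditional-exclusion metadata; a minimal standalone
sketch of the mechanism:

    require "rspec"

    describe "jruby-only behavior", :if => RUBY_ENGINE == "jruby" do
      it "runs only under JRuby" do
        # on MRI and other engines this group is skipped entirely
      end
    end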