[DOCS] Fixed broken links (#7573)

Lisa Cawley 2017-06-29 12:36:38 -07:00 committed by GitHub
parent cebc191e55
commit 524b92e12b
6 changed files with 47 additions and 50 deletions


@@ -224,8 +224,8 @@ Referrer URL:: `referrer`
User agent:: `agent`
TIP: If you need help building grok patterns, try out the
{kibana-ref}/xpack-grokdebugger.html[Grok Debugger]. The Grok Debugger is an
{xpack} feature under the Basic License and is therefore *free to use*.
Edit the `first-pipeline.conf` file and replace the entire `filter` section with the following text:
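A minimal sketch of such a grok-based `filter` section (illustrative; not necessarily the tutorial's exact text):

[source,ruby]
----
filter {
  grok {
    # parse Apache combined-format access logs into named fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
----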


@@ -15,7 +15,7 @@ These changes can impact any instance of Logstash and are plugin agnostic, but o
Logstash 5.0 introduces a new way to <<logstash-settings-file, configure application settings>> for Logstash through a
`logstash.yml` file.
This file is typically located in `${LS_HOME}/config`, or `/etc/logstash` when installed via packages. Logstash will not be
able to start without this file, so please make sure to pass in `--path.settings` if you are starting Logstash manually
after installing it via a package (RPM, DEB).
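For illustration, a minimal `logstash.yml` might look like the following sketch (all values are examples, not defaults):

[source,yaml]
----
# logstash.yml -- application settings (example values)
node.name: demo-node
path.data: /var/lib/logstash
path.logs: /var/log/logstash
pipeline.workers: 2
----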
@@ -27,8 +27,8 @@ bin/logstash --path.settings /path/to/logstash.yml
[float]
==== URL Changes for DEB/RPM Packages
The previous `packages.elastic.co` URL has been altered to `artifacts.elastic.co`.
Ensure you update your repository files before running the upgrade process, or
your operating system may not see the new packages.
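For example, an updated yum repository definition might look like this sketch (it assumes you are moving to a 5.x release; APT users would update their source list analogously):

[source,ini]
----
[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
----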
[float]
@@ -43,7 +43,7 @@ install binaries. Previously it used to install in `/opt/logstash` directory. Th
|Logstash 2.x
|`/opt/logstash`
|`/opt/logstash`
|Logstash 5.0
|`/usr/share/logstash`
|`/usr/share/logstash`
|===
@@ -147,8 +147,8 @@ NOTE: None of the short form options have changed!
[float]
==== RSpec testing script
The `rspec` script is no longer bundled with Logstash release artifacts. This script was previously used to
run unit tests for validating Logstash configurations. While this was useful to some users, this mechanism assumed that Logstash users
were familiar with the RSpec framework, which is a Ruby testing framework.
@@ -173,14 +173,14 @@ importantly, the subfield for string multi-fields has changed from `.raw` to `.k
behavior. The impact of this change on various user groups is detailed below:
** New Logstash 5.0 and Elasticsearch 5.0 users: Multi-fields (often called sub-fields) use `.keyword` from the
outset. In Kibana, you can use `my_field.keyword` to perform aggregations against text-based fields, in the same way that you
previously used `my_field.raw`.
** Existing users with custom templates: Using a custom template means that you control the template completely, and our
template changes do not impact you.
** Existing users with the default template: Logstash does not force you to upgrade templates if one already exists. If you
intend to move to the new template and want to use `.keyword`, you will most likely want to reindex existing data so that it
also uses the `.keyword` field, unless you are able to transition from `.raw` to `.keyword`. Elasticsearch's
{ref}/docs-reindex.html[reindexing API] can help move your data from using `.raw` subfields to `.keyword`, thereby avoiding any
transition time. You _can_ use a custom template to get both `.raw` and `.keyword` so that you can wait until all `.raw` data
is gone before transitioning to using only `.keyword`; this will waste some storage space and memory, but it does
help users to avoid having to relearn operations.
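A minimal reindex request might look like the following sketch (index names are illustrative; the destination index must already use the new template so that `.keyword` multi-fields are applied on write):

[source,json]
----
POST _reindex
{
  "source": { "index": "logstash-2017.06.28" },
  "dest":   { "index": "logstash-2017.06.28-migrated" }
}
----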
@@ -189,15 +189,15 @@ help users to avoid having to relearn operations.
[[plugin-versions]]
==== Plugin Versions
Logstash is unique amongst the Elastic Stack with respect to its plugins. Unlike Elasticsearch and Kibana, which both
require plugins to be targeted to a specific release, Logstash's plugin ecosystem provides more flexibility so that it can
support outside ecosystems _within the same release_. Unfortunately,
that flexibility can cause issues when handling upgrades.
Non-standard plugins must always be checked for compatibility, but some bundled plugins are upgraded in order to remain
compatible with the tools or frameworks that they use for communication. The
<<plugins-inputs-kafka, Kafka Input>> and <<plugins-outputs-kafka, Kafka Output>> plugins serve as a primary example of
such compatibility changes. The latest version of the Kafka plugins is only compatible with Kafka 0.10, but as the
compatibility matrices show, earlier plugin versions are required for earlier versions of Kafka (e.g., Kafka 0.9).
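If you need to stay on an older Kafka release, you can pin a compatible plugin version at install time; for example (the version number is illustrative, consult the compatibility matrix):

[source,shell]
----
bin/logstash-plugin install --version 4.0.0 logstash-input-kafka
----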
Automatic upgrades generally lead to improved features and support, but network layer changes like those above may make part
@@ -221,8 +221,8 @@ The version numbers were found by checking the compatibility matrix for the indi
[float]
==== Kafka Input Configuration Changes
As described in the section <<plugin-versions, above>>, the Kafka plugin has been updated to bring in new consumer features.
In addition to the plugin being incompatible with the 0.8.x version of the Kafka broker, _most_ of the config options have
been changed to match the new consumer configurations from the Kafka Java consumer. Here's a list of important config options that have changed:
* `topic_id` is renamed to `topics` and accepts an array of topics to consume from.
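For example, a minimal 5.x consumer configuration using the new option names might look like this sketch (broker address and topic names are illustrative):

[source,ruby]
----
input {
  kafka {
    # formerly: topic_id => "apache"
    topics => ["apache", "syslog"]
    bootstrap_servers => "localhost:9092"
  }
}
----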
@@ -242,7 +242,7 @@ where `path.data` is the path defined in the new `logstash.yml` file.
| |Default `sincedb_path`
|Logstash 2.x
|`$HOME/.sincedb*`
|Logstash 5.0
|`<path.data>/plugins/inputs/file`
|===
@@ -253,8 +253,8 @@ then it must be copied over to `path.data` manually to use the save state (or th
[float]
==== GeoIP Filter
The GeoIP filter has been updated to use MaxMind's GeoIP2 database. The previous GeoIP database version is now considered legacy
by MaxMind. As a result, `.dat` files are no longer supported; only the `.mmdb` format is supported.
The new database will not include ASN data in the basic free database file.
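A sketch of pointing the filter at a GeoIP2 city database (the path is illustrative):

[source,ruby]
----
filter {
  geoip {
    source => "clientip"
    # must be an .mmdb file; legacy .dat files are no longer read
    database => "/etc/logstash/GeoLite2-City.mmdb"
  }
}
----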
Previously, when the filter encountered an IP address for which there were no results in the database, the event
@@ -266,9 +266,9 @@ which then drops the tag from the event.
[float]
=== Ruby Filter and Custom Plugin Developers
With the migration to the new <<event-api>>, we have changed how you can access internal data compared to previous releases.
The `event` object no longer returns a reference to the data. Instead, it returns a copy. This might change how you
manipulate your data, especially when working with nested hashes. When working with nested hashes, it's recommended that
you use the <<logstash-config-field-references, `field reference` syntax>> instead of using multiple square brackets.
As part of this change, Logstash has introduced new Getter/Setter APIs for accessing information in the `event` object.
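A minimal sketch of the new API inside a ruby filter (the field names are illustrative):

[source,ruby]
----
filter {
  ruby {
    # event.get returns a copy of the value, not a reference into the event
    code => "event.set('[took][ms]', event.get('[took][s]').to_f * 1000)"
  }
}
----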


@@ -249,7 +249,7 @@ Example:
==== String
A string must be a single character sequence. Note that string values are
enclosed in quotes, either double or single.
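For example, both of the following are valid:

[source,ruby]
----
name => "Hello world"
name => 'Hello world'
----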
===== Escape Sequences
@@ -668,7 +668,7 @@ environment variable is undefined.
The following examples show you how to use environment variables to set the values of some commonly used
configuration options.
===== Setting the TCP Port
Here's an example that uses an environment variable to set the TCP port:
@@ -688,7 +688,7 @@ Now let's set the value of `TCP_PORT`:
export TCP_PORT=12345
----
At startup, Logstash uses the following configuration:
[source,ruby]
----------------------------------
@@ -701,7 +701,7 @@ input {
If the `TCP_PORT` environment variable is not set, Logstash returns a configuration error.
You can fix this problem by specifying a default value:
[source,ruby]
----
@@ -723,7 +723,7 @@ input {
}
----
If the environment variable is defined, Logstash uses the value specified for the variable instead of the default.
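To recap, the `${VAR:default}` form supplies the fallback; a minimal sketch (the fallback port is illustrative):

[source,ruby]
----
input {
  tcp {
    # uses 54321 whenever TCP_PORT is not set
    port => "${TCP_PORT:54321}"
  }
}
----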
===== Setting the Value of a Tag
@@ -745,7 +745,7 @@ Let's set the value of `ENV_TAG`:
export ENV_TAG="tag2"
----
At startup, Logstash uses the following configuration:
[source,ruby]
----
@@ -778,7 +778,7 @@ Let's set the value of `HOME`:
export HOME="/path"
----
At startup, Logstash uses the following configuration:
[source,ruby]
----
@@ -797,7 +797,7 @@ filter {
The following examples illustrate how you can configure Logstash to filter events, process Apache logs and syslog messages, and use conditionals to control what events are processed by a filter or output.
TIP: If you need help building grok patterns, try out the
{kibana-ref}/xpack-grokdebugger.html[Grok Debugger]. The Grok Debugger is an
{xpack} feature under the Basic License and is therefore *free to use*.
[float]


@@ -23,7 +23,7 @@ enable you to quickly collect, parse, and index popular log types and view
pre-built Kibana dashboards within minutes.
{metricbeat}metricbeat-modules.html[Metricbeat Modules] provide a similar
experience, but with metrics data. In this context, Beats will ship data
directly to Elasticsearch where {ref}/ingest.html[Ingest Nodes] will process
and index your data.
image::static/images/deploy1.png[]
@@ -120,11 +120,11 @@ Enterprise-grade security is available across the entire delivery chain.
* Wire encryption is recommended for both the transport from
{filebeat}configuring-ssl-logstash.html[Beats to Logstash] and from
{xpack-ref}/logstash.html[Logstash to Elasticsearch].
* There's a wealth of security options when communicating with Elasticsearch
including basic authentication, TLS, PKI, LDAP, AD, and other custom realms.
To enable Elasticsearch security, consult the
{xpack-ref}/xpack-security.html[X-Pack documentation].
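As an illustrative sketch (the endpoint, credentials, and certificate path are assumptions, not defaults), a TLS-enabled Elasticsearch output might look like:

[source,ruby]
----
output {
  elasticsearch {
    hosts => ["https://es.example.com:9200"]
    user => "logstash_writer"
    password => "${ES_PWD}"
    ssl => true
    cacert => "/etc/logstash/certs/ca.crt"
  }
}
----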
[float]
==== Monitoring


@@ -38,5 +38,5 @@ Then you can drill down to see stats about a specific node:
image::static/images/nodestats.png[Logstash monitoring node stats dashboard in Kibana]
See the {xpack-ref}/monitoring-logstash.html[Logstash monitoring documentation] to learn
how to set up and use this feature.


@@ -33,7 +33,7 @@ filter {
date {
match => [ "logdate", "MMM dd yyyy HH:mm:ss" ]
}
}
--------------------------------------------------------------------------------
@@ -167,7 +167,7 @@ filter {
--------------------------------------------------------------------------------
<<plugins-codecs-fluent,fluent codec>>::
Reads the Fluentd `msgpack` schema.
+
The following config decodes logs received from `fluent-logger-ruby`:
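+
[source,ruby]
--------------------------------------------------------------------------------
input {
  tcp {
    # a minimal sketch, assuming the conventional fluent forward port
    codec => fluent
    port => 4000
  }
}
--------------------------------------------------------------------------------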
@@ -353,8 +353,8 @@ After the filter is applied, the event in the example will have these fields:
* `duration: 0.043`
TIP: If you need help building grok patterns, try out the
{kibana-ref}/xpack-grokdebugger.html[Grok Debugger]. The Grok Debugger is an
{xpack} feature under the Basic License and is therefore *free to use*.
[[lookup-enrichment]]
=== Enriching Data with Lookups
@@ -379,10 +379,10 @@ filter {
}
--------------------------------------------------------------------------------
<<plugins-filters-elasticsearch,elasticsearch>>::
Copies fields from previous log events in Elasticsearch to current events.
+
The following config shows a complete example of how this filter might
be used. Whenever Logstash receives an "end" event, it uses this Elasticsearch
@@ -413,7 +413,7 @@ between the two events.
<<plugins-filters-geoip,geoip filter>>::
Adds geographical information about the location of IP addresses. For example:
+
[source,json]
--------------------------------------------------------------------------------
@@ -423,7 +423,7 @@ filter {
}
}
--------------------------------------------------------------------------------
+
After the geoip filter is applied, the event will be enriched with geoip fields.
For example:
+
@@ -493,7 +493,7 @@ filter {
"200" => "OK"
"403" => "Forbidden"
"404" => "Not Found"
"408" => "Request Timeout"
"408" => "Request Timeout"
}
remove_field => "response_code"
}
@@ -507,7 +507,7 @@ Parses user agent strings into fields.
+
The following example takes the user agent string in the `agent` field, parses
it into user agent fields, and adds the user agent fields to a new field called
`user_agent`. It also removes the original `agent` field:
+
[source,json]
--------------------------------------------------------------------------------
@@ -519,7 +519,7 @@ filter {
}
}
--------------------------------------------------------------------------------
+
After the filter is applied, the event will be enriched with user agent fields.
For example:
+
@@ -535,7 +535,4 @@ For example:
"os_name": "Mac OS X",
"device": "Other"
}
--------------------------------------------------------------------------------