
---
navigation_title: "Pattern"
mapped_pages:
  - https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-pattern-tokenizer.html
---

# Pattern tokenizer [analysis-pattern-tokenizer]

The `pattern` tokenizer uses a regular expression to either split text into terms whenever it matches a word separator, or to capture matching text as terms.

The default pattern is `\W+`, which splits text whenever it encounters non-word characters.

::::{admonition} Beware of Pathological Regular Expressions
:class: warning

The pattern tokenizer uses [Java Regular Expressions](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html).

A badly written regular expression could run very slowly or even throw a `StackOverflowError` and cause the node it is running on to exit suddenly.

Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.html).

::::
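
To make the hazard concrete: a pattern with nested quantifiers such as `(a+)+b` can never match a run of `a` characters that lacks a trailing `b`, but a backtracking engine only discovers this after exploring an exponential number of ways to partition the run. The request below is an illustrative sketch only, not something to run against real data; even this short input takes noticeable time to analyze, and every additional `a` roughly doubles the work, so do not lengthen it on a node you care about:

```console
POST _analyze
{
  "tokenizer": {
    "type": "pattern",
    "pattern": "(a+)+b"
  },
  "text": "aaaaaaaaaaaaaaaaaaaaaaaaa"
}
```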

## Example output [_example_output_15]

```console
POST _analyze
{
  "tokenizer": "pattern",
  "text": "The foo_bar_size's default is 5."
}
```

The above sentence would produce the following terms:

```text
[ The, foo_bar_size, s, default, is, 5 ]
```

## Configuration [_configuration_16]

The `pattern` tokenizer accepts the following parameters:

`pattern`
:   A [Java regular expression](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html), defaults to `\W+`.

`flags`
:   Java regular expression [flags](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary). Flags should be pipe-separated, e.g. `"CASE_INSENSITIVE|COMMENTS"`. See the sketch after this list for an example.

`group`
:   Which capture group to extract as tokens. Defaults to `-1` (split).
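
For instance, the `COMMENTS` flag allows whitespace and `#` comments inside the pattern, which can make longer expressions readable. The request below is a minimal sketch (the index name `my-index-000002` is made up for this illustration); the pattern is just the default `\W+` written in commented form:

```console
PUT my-index-000002
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": "\\W+ # one or more non-word characters",
          "flags": "COMMENTS"
        }
      }
    }
  }
}
```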

## Example configuration [_example_configuration_10]

In this example, we configure the `pattern` tokenizer to break text into tokens when it encounters commas:

```console
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": ","
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "comma,separated,values"
}
```

The above example produces the following terms:

```text
[ comma, separated, values ]
```
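
If you just want to experiment with a pattern, you do not need to create an index first: the `_analyze` API also accepts a transient, request-scoped tokenizer definition. A minimal sketch equivalent to the example above:

```console
POST _analyze
{
  "tokenizer": {
    "type": "pattern",
    "pattern": ","
  },
  "text": "comma,separated,values"
}
```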

In the next example, we configure the `pattern` tokenizer to capture values enclosed in double quotes (ignoring embedded escaped quotes `\"`). The regex itself looks like this:

"((?:\\"|[^"]|\\")*)"

And reads as follows:

* A literal `"`
* Start capturing:
    * A literal `\"` OR any character except `"`
    * Repeat until no more characters match
* A literal closing `"`

When the pattern is specified in JSON, the `"` and `\` characters need to be escaped, so the pattern ends up looking like:

\"((?:\\\\\"|[^\"]|\\\\\")+)\"
```console
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": "\"((?:\\\\\"|[^\"]|\\\\\")+)\"",
          "group": 1
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "\"value\", \"value with embedded \\\" quote\""
}
```

The above example produces the following two terms:

```text
[ value, value with embedded \" quote ]
```
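
As a final point of comparison, the `group` parameter controls which part of each match becomes a token: group `0` is the entire match, so the hypothetical request below (the same pattern, defined inline in `_analyze`) would emit each value together with its enclosing quotes, whereas leaving `group` at its default of `-1` would treat the quoted strings as separators and split on them instead.

```console
POST _analyze
{
  "tokenizer": {
    "type": "pattern",
    "pattern": "\"((?:\\\\\"|[^\"]|\\\\\")+)\"",
    "group": 0
  },
  "text": "\"value\", \"value with embedded \\\" quote\""
}
```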