* Initial commit of match_phrase
* Add MatchPhraseQueryTests
* First pass at CSV specs
* Update docs/changelog/127661.yaml
* Refactor so MatchPhrase doesn't use all fulltext test cases, just text only
* Fix tests
* Add some CSV test cases
* Fix test
* Update changelog
* Update tests
* Comment out MATCH_PHRASE in search-functions Markdown
* Minor PR feedback
* PR feedback - refactor/consolidate code
* Add some more tests
* Fix some tests
* [CI] Auto commit changes from spotless
* Fix tests
* PR feedback - add tests, support boost and numeric data
* Revert "PR feedback - add tests, support boost and numeric data"
This reverts commit 4e7a699e3e.
* Apply testing/PR feedback outside numeric support only
* Regenerate docs
* Add negative test
* Update x-pack/plugin/esql/qa/testFixtures/src/main/resources/match-phrase-function.csv-spec
Co-authored-by: Carlos Delgado <6339205+carlosdelest@users.noreply.github.com>
* Update x-pack/plugin/esql/qa/testFixtures/src/main/resources/match-phrase-function.csv-spec
Co-authored-by: Carlos Delgado <6339205+carlosdelest@users.noreply.github.com>
* Update x-pack/plugin/esql/qa/testFixtures/src/main/resources/match-phrase-function.csv-spec
Co-authored-by: Carlos Delgado <6339205+carlosdelest@users.noreply.github.com>
* PR feedback
* Fix auto-commit error
* Regenerate docs
* Update x-pack/plugin/esql/src/main/java/org/elasticsearch/xpack/esql/expression/function/fulltext/MatchPhrase.java
Co-authored-by: Liam Thompson <32779855+leemthompo@users.noreply.github.com>
* Remove non text field types
* Fake test data
* Remove tests that no longer should pass without ip/date/version support
* Put real data in score tests now that I was able to engineer a failure
* Realized the scoring test might be flaky because of how it was written; updated it
* PR feedback
* PR feedback
* [CI] Auto commit changes from spotless
* Add check to MatchPhrase tests
* Fix merge errors
* [CI] Auto commit changes from spotless
* Test generated docs
* Add additional verifier tests
---------
Co-authored-by: elasticsearchmachine <infra-root+elasticsearchmachine@elastic.co>
Co-authored-by: Carlos Delgado <6339205+carlosdelest@users.noreply.github.com>
Co-authored-by: Liam Thompson <32779855+leemthompo@users.noreply.github.com>
Modifies TO_IP so it can handle leading `0`s in IPv4 addresses. Here's how it
works now:
```
ROW ip = TO_IP("192.168.0.1") // OK!
ROW ip = TO_IP("192.168.010.1") // Fails
```
This adds
```
ROW ip = TO_IP("192.168.010.1", {"leading_zeros": "octal"})
ROW ip = TO_IP("192.168.010.1", {"leading_zeros": "decimal"})
```
We do this because there is no consensus on how to parse leading zeros
in IPv4 addresses. Standard Unix tools like `ping` and `ftp` interpret
leading zeros as octal. Java's built-in IP parsing interprets them as
decimal. Because folks are using this for security rules, we need to
support all the choices.
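For intuition on the two interpretations, here is a minimal Java sketch (not the TO_IP implementation itself, just an illustration of the two radix choices) showing how an octet written as `010` resolves under each option:
```java
public class LeadingZeroOctet {
    public static void main(String[] args) {
        String octet = "010"; // third octet of "192.168.010.1"

        // Unix-style parsing: a leading zero marks the octet as octal.
        int asOctal = Integer.parseInt(octet, 8);    // 8  -> 192.168.8.1

        // Java-style parsing: leading zeros are ignored, the octet is decimal.
        int asDecimal = Integer.parseInt(octet, 10); // 10 -> 192.168.10.1

        System.out.println("octal:   " + asOctal);
        System.out.println("decimal: " + asDecimal);
    }
}
```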
Closes #125460
This PR was originally focused on improving support for Kibana docs, in particular the missing operator docs, but it has expanded to cover a bunch of related things:
* The main work was to improve operator support. The ESQL generated docs cover all functions and most operators for which there is a clear operator class and test class. However, some operators are built-in behaviour and need additional support. This PR adds more generated content for those operators.
* Various specific operators requested by Kibana: Cast & null-predicates, and in particular the addition of examples
* Two functions without examples: mv_append and to_date_nanos
* Many small visual document cleanups (spelling, grammar, capitalization, etc.)
* Initial support for `applies_to` for multi-version differentiation.
This last point requires more work, as we have not yet agreed on exactly how we want this to look. We'll probably need to make refinements in a follow-up PR. Consider the version in this PR a first step toward how this could look.
Building on the work started in https://github.com/elastic/elasticsearch/pull/123904, we now want to auto-generate most of the small subfiles from the ES|QL functions unit tests.
This work also investigates any remaining discrepancies between the original asciidoc version and the new Markdown, and tries to minimize them so the new docs do not look too different.
The Kibana JSON and Markdown files are moved to a new location, and the operator docs are a little more generated than before (although still largely manual).