Moving towards grouping of data types in the field caps API, the internal
data type `DATETIME_NANOS`, introduced for `date_nanos` support, is
eliminated.
Relates: #67722
Follows: #67666
* Integrate "fields" API into QL (#68467)
* QL: retry SQL and EQL requests in a mixed-node (rolling upgrade) cluster (#68602)
* Adapt nested fields extraction from "fields" API output to the new un-flattened structure (#68745)
Fixed the inconsistencies regarding NULL argument handling.
A NULL literal vs a NULL field value as function arguments in some cases
resulted in different function return values.
Functions should return the same value no matter whether the argument(s)
came from a field or from a literal.
The introduced integration test checks that function calls with the same
argument values (regardless of literal/field) return the same output
(it also checks that newly added functions are added to the test cases).
Fixed the following functions (see the example after the list):
* Insert: NULL start, length and replacement arguments (as fields) also
result in a NULL return value instead of returning the input.
* Locate: a NULL pattern results in a NULL return value; a NULL optional
start argument is handled the same as a missing start argument.
* Replace: a NULL pattern or replacement results in NULL instead of
returning the input.
* Substring: a NULL start or length results in NULL instead of returning
the input.
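For illustration, a minimal sketch of the expected behaviour (hypothetical
index and field names, not taken from the actual test suite):
```
-- NULL passed as a literal: returns NULL instead of the input string
SELECT SUBSTRING('elastic', NULL, 3) AS sub;
-- NULL coming from a field (hypothetical index `test`, nullable field `start_pos`):
-- must also return NULL for rows where `start_pos` is NULL
SELECT SUBSTRING(name, start_pos, 3) AS sub FROM test;
```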
Fixes #58907
Use a new internal DataType, DATETIME_NANOS, which is not exposed
and therefore cannot be used for CASTing. DATETIME is used instead,
and the precision of both DATETIME and TIME has been promoted from
3 to 9, providing transparency to all datetime functionality regardless
of millis or nanos precision.
Moreover, CURRENT_TIMESTAMP/CURRENT_TIME can now return a precision of up
to 6 fractional digits of a second with the use of Clock.
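For example (a minimal sketch of the optional precision argument described
above):
```
-- request up to 6 fractional digits of a second
SELECT CURRENT_TIMESTAMP(6) AS ts;
```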
Closes: #38562
Co-authored-by: Bogdan Pintea <bogdan.pintea@elastic.co>
SQL: Implement the TO_CHAR() function
* The implementation is according to PostgreSQL 13 specs:
https://www.postgresql.org/docs/13/functions-formatting.html
* Tested against actual output from PostgreSQL 13 using randomized inputs
* All the Postgres formats are supported; there is also partial support
for the modifiers (`FM` and `TH` are supported)
* Random unit test data generator script in case we need to upgrade the
formatter in the future
* Documentation
* Integration tests
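A minimal usage sketch (illustrative literal and format string; the format
tokens follow the PostgreSQL to_char specification):
```
SELECT TO_CHAR(CAST('2020-05-04T10:20:30' AS DATETIME), 'DD/MM/YYYY HH24:MI:SS') AS formatted;
-- 04/05/2020 10:20:30
```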
Co-authored-by: Michał Wąsowicz <mwasowicz7@gmail.com>
Co-authored-by: Andras Palinkas <andras.palinkas@elastic.co>
In case the local agg sorter queue gets full and no limit has been provided,
the local sorter will erroneously call the failure callback for every
single row in the original rowset that's beyond the local queue limit
(instead of just for the first one). The failure response is dispatched in
any case, so this is relatively harmless. The sorter continues iterating
on the original response, fetching subsequent pages. If Elasticsearch
behaves correctly, this is also harmless; it'll just trigger a number of
internal exceptions. However, in case of a pagination defect in
Elasticsearch (like GH#65685, where the same search_after is returned),
this will result in an effective spin loop, potentially rendering the node
unresponsive eventually.
This PR simply breaks both the inner loop iterating over the current unsorted
rowset, as well as the outer one iterating over the remaining pages.
It also fixes an outdated documentation limitation.
* Adds the capability to have functions with two optional arguments
* Adds two new optional arguments to the `PERCENTILE()` and
`PERCENTILE_RANK()` functions, namely the method and
method_parameter, which can be: 1) `tdigest` with a double `compression`
parameter, or 2) `hdr` with an integer representing the
`number_of_digits` parameter (see the example after this list).
* Integration tests
* Documentation updates
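A minimal sketch of the new arguments (hypothetical index `test` and numeric
field `salary`):
```
-- t-digest with an explicit compression of 100.0
SELECT PERCENTILE(salary, 97.5, 'tdigest', 100.0) AS p FROM test;
-- HDR histogram with 3 significant digits
SELECT PERCENTILE(salary, 97.5, 'hdr', 3) AS p FROM test;
```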
Closes #63567
* Remove constant_keyword from SQL docs
`constant_keyword` was removed as a distinct type from SQL in #60524.
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
For a query like `SELECT name FROM test WHERE name LIKE '%c*'` ES SQL
generates an error. `*` is not a special character in a `LIKE` construct
and it is not expected to need escaping, so the previous query
should work as is.
In the LIKE pattern any `*` character was treated as an invalid character
and the usage of `%` or `_` was suggested instead. But `*` is a valid,
acceptable non-wildcard character on the right side of the `LIKE` operator.
Fix: #55108
* Update docs on Tableau Desktop integration
Update the docs on how to integrate with Tableau Desktop, now using the
dedicated connector in conjunction with the JDBC driver.
* Add docs for connecting with Tableau Server
Add the steps required to connect to Elasticsearch for Tableau Server.
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
Plugin discovery documentation contained information about installing
Elasticsearch 2.0 and installing an Oracle JDK, both of which are no
longer valid.
While noticing that the instructions used cleartext HTTP to install
packages, this commit replaces HTTP links with HTTPS where possible.
In addition, a few community links have been removed, as they do not seem
to exist anymore.
* Add option to provide the delimiter to the CSV fmt
This adds the option to provide the desired character as the separator
for the CSV format (the default remains comma).
A set of characters is excluded though - like CR, LF, `"` - to avoid
slipping onto the CSV-dialects slope. The tab is also forbidden; the
user needs to choose the "tsv" format explicitly.
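For illustration, a sketch of a request using the new option (assuming the
REST parameter is named `delimiter` and a hypothetical index `test`):
```
POST _sql?format=csv&delimiter=|
{
  "query": "SELECT a, b FROM test"
}
```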
Update the doc to make it clear that the textual CSV, TSV and TXT
formats pass the cursor back to the user through the Cursor HTTP header.
Implement the DATE_PARSE(<date_str>, <pattern_str>) function,
which allows parsing a date string according to the specified
pattern into a date object. The patterns allowed are those of
java.time.format.DateTimeFormatter.
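A minimal usage sketch (illustrative values):
```
SELECT DATE_PARSE('04/05/2020', 'dd/MM/uuuu') AS parsed;
-- 2020-05-04
```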
Closes #54962
Co-authored-by: Marios Trivyzas <matriv@users.noreply.github.com>
* Fix: preserve URI query and fragment char escaping
This commit fixes an issue emerging when the connection string URI
contains escaped characters.
The original URI is pre-parsed in order to re-assemble a new URI having
the optional elements filled in with defaults. However, the new URI has
been built using the unescaped query and fragment parts. So if these
contained any escaped `&` or `=` (such as in the password option value),
the unescaping would reveal them and make them later interfere with the
options parsing.
The commit changes that, so that the new URI is built from the "raw" parts
of the original URI.
TIME_PARSE works correctly if both date and time parts are specified,
and a TIME object (containing only the time part) is returned.
Adjust docs and add a unit test that validates the behavior.
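A minimal sketch of this behaviour (illustrative values):
```
-- the date part is parsed, but only the time part is kept in the result
SELECT TIME_PARSE('2020-05-04 10:20:30.123', 'uuuu-MM-dd HH:mm:ss.SSS') AS parsed;
-- 10:20:30.123
```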
Follows: #55223
Add basic support for `TOP X` as a synonym for `LIMIT X` which is used
by [MS-SQL server](https://docs.microsoft.com/en-us/sql/t-sql/queries/top-transact-sql?view=sql-server-ver15),
e.g.:
```
SELECT TOP 5 a, b, c FROM test
```
TOP in SQL Server also supports the `PERCENT` and `WITH TIES`
keywords, which this implementation does not.
Usage of both TOP and LIMIT in the same query is not allowed.
Refers to #41195
Implement the TIME_PARSE(<time_str>, <pattern_str>) function,
which allows parsing a time string according to the specified
pattern into a time object. The patterns allowed are those of
java.time.format.DateTimeFormatter.
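A minimal usage sketch (illustrative values):
```
SELECT TIME_PARSE('10:20:30.123', 'HH:mm:ss.SSS') AS parsed;
```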
Closes #54963
Co-authored-by: Andrei Stefan <astefan@users.noreply.github.com>
Co-authored-by: Marios Trivyzas <matriv@users.noreply.github.com>
Move the JDBC functionality integration tests from `:sql:qa` to a separate
module `:sql:qa:jdbc`. This way the tests are isolated from the rest of the
integration tests and they only depend on the `:sql:jdbc` module, thus
removing the danger of accidentally pulling in some dependency that may
hide bugs.
Moreover this is a preparation for #56722, so that we can run those tests
between different JDBC and ES node versions and ensure forward
compatibility.
Move the rest of the existing tests inside a new `:sql:qa:server` project, so that
the `:sql:qa` becomes the parent project for both and one can run all the integration
tests by using this parent project.
The docs pattern URL was using `*`, which means zero or more, instead
of `?`, which means zero or one. The pattern URL returned in error
messages was not in sync with the one in the docs.
Fixes: #56476
* StartsWith is now case-sensitivity aware
* Added case sensitivity to EQL configuration
* case_sensitive parameter can be specified when running queries (default
is case insensitive)
* Added STARTS_WITH function to SQL as well (see the example after this list)
* Added case-sensitivity-aware query folder tests
* Address reviews
* Address review #2
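A minimal sketch of the SQL addition mentioned above (hypothetical index
`test` and field `name`):
```
SELECT name FROM test WHERE STARTS_WITH(name, 'Ab');
```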
Previously, when the timezone was missing from the datetime string
and the pattern, UTC was used instead of the session-defined timezone.
Moreover, if a timezone was included in the datetime string and the
pattern, then this timezone was used. To have consistent behaviour,
the resulting datetime will always be converted to the session-defined
timezone, e.g.:
```
SELECT DATETIME_PARSE('2020-05-04 10:20:30.123 +02:00', 'uuuu-MM-dd HH:mm:ss.SSS VV') AS datetime;
```
with `time_zone` set to `-03:00` will result in
```
2020-05-04T05:20:30.123-03:00
```
Follows: #54960
Implement the use of scalar functions inside aggregate functions.
This allows for complex expressions inside aggregations, with or without
GROUP BY, as well as with or without a HAVING clause, e.g.:
```
SELECT MAX(CASE WHEN a IS NULL then -1 ELSE abs(a * 10) + 1 END) AS max, b
FROM test
GROUP BY b
HAVING MAX(CASE WHEN a IS NULL then -1 ELSE abs(a * 10) + 1 END) > 5
```
Scalar functions are still not allowed for `KURTOSIS` and `SKEWNESS` as
this is currently not implemented on the Elasticsearch side.
Fixes: #29980, #36865, #37271
Implement the DATETIME_PARSE(<datetime_str>, <pattern_str>) function,
which allows parsing a datetime string according to the specified
pattern into a datetime object. The patterns allowed are those of
java.time.format.DateTimeFormatter.
Relates to #53714
Implement the DATETIME_FORMAT(<date/datetime/time>, <pattern_str>) function,
which allows formatting a timestamp according to the specified pattern. The
patterns allowed are those of java.time.format.DateTimeFormatter.
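A minimal usage sketch (illustrative values):
```
SELECT DATETIME_FORMAT(CAST('2020-05-04T10:20:30' AS DATETIME), 'dd/MM/uuuu HH:mm:ss') AS formatted;
-- 04/05/2020 10:20:30
```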
Related to #53714
* Document VarcharLimit and EarlyExecution params
Add the documentation for the newly added VarcharLimit and
EarlyExecution DSN attributes.
* Remove obsolete VersionChecking param
This param had already been removed as part of the #53082 work.
* Update docs/reference/sql/endpoints/odbc/configuration.asciidoc
fix typo
Co-Authored-By: Stuart Cam <stuart@codebrain.co.uk>
Per the [Asciidoctor docs][0], Asciidoctor replaces the following
syntax with double arrows in the rendered HTML:
* => renders as ⇒
* <= renders as ⇐
This escapes several unintended replacements, such as in the Painless
docs.
Where appropriate, it also replaces some double arrow instances with
single arrows for consistency.
[0]: https://asciidoctor.org/docs/user-manual/#replacements
* Refresh snapshots with latest look
Add new snapshots with the connection editor to reflect the latest UI.
* Document the effect of the recently added params
Add details about the Cloud ID setting, as well as those on the Misc
tab.