Adds read privilege to the `kibana_system` role for indices associated with the Microsoft Defender integrations.
These changes are necessary to support Security Solution bi-directional response actions.
In Jackson 2.15 a maximum string length of 50k characters was
introduced. We worked around that by overriding the length to max int on
all parsers created by xcontent. Jackson 2.17 introduced a similar limit
on field names. This commit mimics the workaround for string length by
overriding the max name length to be unlimited.
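For context, a minimal sketch of this kind of override using Jackson's `StreamReadConstraints` builder (the actual xcontent wiring is not shown here):
```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.StreamReadConstraints;

final class XContentJacksonWorkaroundSketch { // hypothetical holder class
    static JsonFactory unlimitedFactory() {
        return JsonFactory.builder()
            .streamReadConstraints(
                StreamReadConstraints.builder()
                    .maxStringLength(Integer.MAX_VALUE) // 2.15 string-length workaround
                    .maxNameLength(Integer.MAX_VALUE)   // 2.17 field-name workaround
                    .build()
            )
            .build();
    }
}
```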
Relates #58952
* Permit `at+jwt` typ header value in JWT access tokens
* Update docs/changelog/126687.yaml
* Address review comments
* [CI] Auto commit changes from spotless
* Update Type Validator tests for the parser ignoring case
---------
Co-authored-by: elasticsearchmachine <infra-root+elasticsearchmachine@elastic.co>
We currently have to hold a big list of `Double` objects in heap, which can consume a
large amount of memory. With this change we reduce the heap usage by 7x.
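A rough illustration of where the saving comes from, assuming the fix swaps boxed `Double`s for a primitive array (the actual data structure is not shown in this note):
```java
import java.util.ArrayList;
import java.util.List;

public class BoxedVsPrimitiveSketch {
    public static void main(String[] args) {
        int n = 1_000_000;

        // Boxed: each Double is its own heap object (object header + 8-byte
        // value), plus a reference slot in the list's backing array.
        List<Double> boxed = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            boxed.add((double) i);
        }

        // Primitive: one contiguous allocation, 8 bytes per element.
        double[] primitive = new double[n];
        for (int i = 0; i < n; i++) {
            primitive[i] = i;
        }
    }
}
```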
* Revert endpoint creation validation for ELSER and E5
* Update docs/changelog/126792.yaml
* Revert start model deployment being in TransportPutInferenceModelAction
---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
When reading a string value from stdin the keystore add command
currently looks directly at stdin. However, stdin may also be consumed
while reading the keystore password. This commit changes the add command
to use the reader from the terminal instead of looking at stdin
directly.
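A minimal sketch of the failure mode and the fix, using plain `java.io` rather than the actual terminal abstraction:
```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

final class KeystorePromptSketch {
    public static void main(String[] args) throws IOException {
        // Both prompts must consume from the same buffered reader. Creating a
        // second reader over System.in after the password prompt has buffered
        // ahead can lose or misread the remaining input.
        BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
        System.out.print("Enter keystore password: ");
        String password = reader.readLine();
        System.out.print("Enter value: ");
        String value = reader.readLine(); // reuse the terminal's reader, not raw stdin
    }
}
```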
Closes #98115
We have observed many OOMs due to the memory required to inject chunked inference results for semantic_text fields. This PR uses coordinating indexing pressure to account for this memory usage. When indexing pressure memory usage exceeds the threshold set by indexing_pressure.memory.limit, chunked inference result injection will be suspended to prevent OOMs.
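A hedged sketch of the accounting pattern described here; the class and method names are hypothetical stand-ins for the real indexing-pressure plumbing:
```java
import java.util.concurrent.atomic.AtomicLong;

final class InferenceResultPressureSketch {
    private final AtomicLong coordinatingBytes = new AtomicLong();
    private final long limitBytes; // derived from indexing_pressure.memory.limit

    InferenceResultPressureSketch(long limitBytes) {
        this.limitBytes = limitBytes;
    }

    // Returns false when injecting the chunked inference results would push the
    // accounting over the limit; the caller suspends injection instead of OOMing.
    boolean tryMarkCoordinatingBytes(long bytes) {
        long current = coordinatingBytes.addAndGet(bytes);
        if (current > limitBytes) {
            coordinatingBytes.addAndGet(-bytes); // roll back the reservation
            return false;
        }
        return true;
    }

    void release(long bytes) {
        coordinatingBytes.addAndGet(-bytes);
    }
}
```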
We had a silly bug in quantizing vectors in BBQ where we were scaling
the initial quantile optimization parameters incorrectly given the
vector component distribution.
In distributions where this had a major impact, the recall results were
abysmal and rendered the quantization technique useless.
For modern, well-distributed components, this change is almost a no-op.
The current LOOKUP JOIN docs include examples that are not tested by the ES|QL tests, unlike most other examples in the documentation. This PR fixes that, changing two examples to use existing tests, and adding a new csv-spec file for the remaining four examples. These four do not need to show results, so the tests use empty data and expect no results. This means we are testing only the syntax (parsing and semantic analysis), which is sufficient for the docs.
* ES|QL change point docs
* Move ES|QL change_point to tech preview
* Update docs/reference/query-languages/esql/esql-commands.md
Co-authored-by: Craig Taverner <craig@amanzi.com>
* Different example + add it to the csv tests
* Restructure change_point docs to new structure
* Added generated test examples to change_point docs
* Fixed a few README.md text mistakes and added more details
* Fix grammar
* License check
* Regenerate parser
* Update docs/reference/query-languages/esql/_snippets/commands/layout/change_point.md
Co-authored-by: Craig Taverner <craig@amanzi.com>
---------
Co-authored-by: Craig Taverner <craig@amanzi.com>
This fixes an issue where, if a Painless getter method return type
didn't match a Java getter method return type, we add a cast. Currently
this is adding an extraneous cast.
Closes #70682
Modifies TO_IP so it can handle leading `0`s in ipv4s. Here's how it
works now:
```
ROW ip = TO_IP("192.168.0.1") // OK!
ROW ip = TO_IP("192.168.010.1") // Fails
```
This adds
```
ROW ip = TO_IP("192.168.010.1", {"leading_zeros": "octal"})
ROW ip = TO_IP("192.168.010.1", {"leading_zeros": "decimal"})
```
We do this because there isn't a consensus on how to parse leading zeros
in ipv4s. The standard unix tools like `ping` and `ftp` interpret
leading zeros as octal. Java's built in ip parsing interprets them as
decimal. Because folks are using this for security rules we need to
support all the choices.
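A quick Java illustration of the ambiguity:
```java
public class LeadingZerosSketch {
    public static void main(String[] args) {
        // "010" read as octal is 8; read as decimal it is 10. So "192.168.010.1"
        // is 192.168.8.1 to the unix tools but 192.168.10.1 to Java's parser.
        System.out.println(Integer.parseInt("010", 8));  // 8
        System.out.println(Integer.parseInt("010", 10)); // 10
    }
}
```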
Closes #125460
Today we rely on registering the channel after registering the task to
be cancelled to ensure that the task is cancelled even if the channel is
closed concurrently. However, the client may already have processed a
cancellable request on the channel and therefore this mechanism doesn't
work. With this change we make sure not to register another task after
draining the registrations in order to cancel them.
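A minimal sketch of the drain-then-reject pattern this change moves to (names are hypothetical):
```java
import java.util.HashSet;
import java.util.Set;

final class ChannelTaskRegistrySketch {
    private final Set<Long> taskIds = new HashSet<>();
    private boolean drained = false;

    // Returns false once the registrations have been drained; the caller must
    // then cancel the task immediately instead of registering it.
    synchronized boolean register(long taskId) {
        if (drained) {
            return false;
        }
        taskIds.add(taskId);
        return true;
    }

    // Called when the channel closes: drain everything for cancellation and
    // reject any registration that races in afterwards.
    synchronized Set<Long> drainForCancellation() {
        drained = true;
        Set<Long> copy = Set.copyOf(taskIds);
        taskIds.clear();
        return copy;
    }
}
```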
Closes #88201
This splits the grouping functions in two: those that can be evaluated independently through the EVAL operator (`BUCKET`) and those that can't (like `CATEGORIZE`, which is evaluated through an agg operator).
Closes #124608
Adds heuristics to pick an efficient partitioning strategy based on the
index and rewritten query. This speeds up some queries by throwing more
cores at the problem:
```
FROM test | STATS SUM(b)
Before: took: 31 CPU: 222.3%
After: took: 15 CPU: 806.9%
```
It also lowers the overhead of simpler queries by throwing fewer cores at
the problem when more cores won't really speed anything up:
```
FROM test
Before: took: 1 CPU: 48.5%
After: took: 1 CPU: 70.4%
```
We have had a `pragma` to control our data partitioning for a long time;
this change just looks at the query to pick a partitioning scheme. The
partitioning options:
* `shard`: use one core per shard
* `segment`: use one core per large segment
* `doc`: break each shard into as many segments as there are cores
`doc` is the fastest, but has a lot of overhead, especially for complex
Lucene queries. `segment` is fast, but doesn't make the most out of CPUs
when there are few segments. `shard` has the lowest overhead.
Previously we always used `segment` partitioning because it is fast
without the terrible overhead. With this change we use `doc` when the
top level query matches all documents - those have very low overhead
even with `doc` partitioning. That's the twice-as-fast example above.
This also uses the `shard` partitioning for queries that don't have to
do much work like `FROM foo` or `FROM foo | LIMIT 1` or
`FROM foo | SORT a`. That's the lower CPU example above.
This forking choice is taken very late on the data node. So queries like
this:
```
FROM test | WHERE @timestamp > "2025-01-01T00:00:00Z" | STATS SUM(b)
```
can also use the `doc` partitioning when all documents are after the
timestamp and all documents have `b`.
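Condensed as a sketch, with hypothetical predicates standing in for the real plan and query checks on the data node:
```java
final class PartitioningHeuristicSketch {
    enum Partitioning { SHARD, SEGMENT, DOC }

    // Hypothetical condensation of the heuristic; the real checks inspect the
    // rewritten Lucene query and the ES|QL plan very late on the data node.
    static Partitioning choose(boolean queryDoesLittleWork, boolean matchesAllDocuments) {
        if (queryDoesLittleWork) {
            return Partitioning.SHARD;   // e.g. FROM foo | LIMIT 1
        }
        if (matchesAllDocuments) {
            return Partitioning.DOC;     // e.g. FROM test | STATS SUM(b)
        }
        return Partitioning.SEGMENT;     // the previous default
    }
}
```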
This PR fixes #119950, where an `IN` query includes `NULL` values with non-NULL `DataType` appearing within the query range. An expression is considered `NULL` when its `DataType` is `NULL` or it is a `Literal` with a value of `null`.
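A sketch of that null test, with hypothetical stand-ins for the real ES|QL expression types:
```java
final class NullExpressionSketch {
    enum DataType { NULL, KEYWORD, INTEGER }

    interface Expression { DataType dataType(); }

    record Literal(DataType dataType, Object value) implements Expression {}

    // An expression counts as NULL if its DataType is NULL, or if it is a
    // Literal carrying a null value despite a non-NULL DataType.
    static boolean isNull(Expression e) {
        return e.dataType() == DataType.NULL
            || (e instanceof Literal lit && lit.value() == null);
    }
}
```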
If updating the `index.time_series.end_time` setting fails for one data stream,
then `UpdateTimeSeriesRangeService` should continue updating this setting for the other data streams.
The following error was observed in the wild:
```
[2025-04-07T08:50:39,698][WARN ][o.e.d.UpdateTimeSeriesRangeService] [node-01] failed to update tsdb data stream end times
java.lang.IllegalArgumentException: [index.time_series.end_time] requires [index.mode=time_series]
at org.elasticsearch.index.IndexSettings$1.validate(IndexSettings.java:636) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.index.IndexSettings$1.validate(IndexSettings.java:619) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.settings.Setting.get(Setting.java:563) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.settings.Setting.get(Setting.java:535) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.datastreams.UpdateTimeSeriesRangeService.updateTimeSeriesTemporalRange(UpdateTimeSeriesRangeService.java:111) ~[?:?]
at org.elasticsearch.datastreams.UpdateTimeSeriesRangeService$UpdateTimeSeriesExecutor.execute(UpdateTimeSeriesRangeService.java:210) ~[?:?]
at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.3.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
at java.lang.Thread.run(Thread.java:1575) ~[?:?]
```
This resulted in a situation in which the `index.time_series.end_time` index setting was not updated for any data stream, which then caused data loss: metrics couldn't be indexed, because no suitable backing index could be resolved:
```
the document timestamp [2025-03-26T15:26:10.000Z] is outside of ranges of currently writable indices [[2025-01-31T07:22:43.000Z,2025-02-15T07:24:06.000Z][2025-02-15T07:24:06.000Z,2025-03-02T07:34:07.000Z][2025-03-02T07:34:07.000Z,2025-03-10T12:45:37.000Z][2025-03-10T12:45:37.000Z,2025-03-10T14:30:37.000Z][2025-03-10T14:30:37.000Z,2025-03-25T12:50:40.000Z][2025-03-25T12:50:40.000Z,2025-03-25T14:35:40.000Z
```
The `indexNameSupplier` was included in the equality and is of type
`BiFunction`, which doesn't implement a proper `equals` method by
default - and thus neither do the lambdas. This meant that two instances
of this step would only be considered equal if they were the same
instance. By excluding `indexNameSupplier` from the `equals` method, we
ensure the method works as intended and can properly determine the
equality between two instances.
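A quick illustration of why including the lambda breaks equality (plain Java semantics, not the actual step class):
```java
import java.util.function.BiFunction;

public class LambdaEqualitySketch {
    public static void main(String[] args) {
        // Two structurally identical lambdas are distinct objects, and
        // BiFunction inherits Object.equals (identity comparison).
        BiFunction<String, String, String> a = (index, suffix) -> index + suffix;
        BiFunction<String, String, String> b = (index, suffix) -> index + suffix;
        System.out.println(a.equals(b)); // false
        System.out.println(a.equals(a)); // true
    }
}
```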
As a side effect, we expect/hope this change will fix a number of tests
that were failing because `WaitForIndexColorStep` missed the last
cluster state update in the test, causing ILM to get stuck and the test
to time out.
Fixes #125683 Fixes #125789 Fixes #125867 Fixes #125911 Fixes #126053 Fixes #126354
While the internal structure of the docs is already split into many (over 1000) sub-pages, the final display for the `Functions and Operators` page is a single giant page, making navigation harder. This PR splits it into separate pages, one for each group of similar functions and one for the operators. Twelve new pages.
This PR also bundles a few other related changes. In total what is done is:
* Split functions/operators into 12 pages, one for each group, maintaining the existing split of each function/operator into a snippet with dynamically generated examples
* Split esql-commands.md into source-commands.md and processing-commands.md, each of which is split into individual snippets, one for each command
* Each command snippet has its examples split out into separate files, if they were examples that were dynamically generated in the older asciidoc system
* The example files are overwritten by the ES|QL unit tests, using a similar mechanism to the examples written for functions and operators
* Some additional refinements to the Kibana definition and markdown files (nicer operator headings, and display text)
Use `Lucene.EMPTY_TOP_DOCS` to identify empty top docs results. These were previously
null results, but did not need to be sent over transport as incremental reduction
was performed only on the data node.
Now it can happen that the coordinating node receives a merge result with empty top
docs, which has nothing interesting for merging, but that can lead to an exception
because the type of the empty array does not match the type of other shards' results,
for instance if the query was sorted by field. To resolve this, we filter out empty
top docs results before merging.
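A minimal sketch of that filtering step, assuming Lucene's `TopDocs` (the surrounding merge code is elided):
```java
import java.util.List;
import org.apache.lucene.search.TopDocs;

final class EmptyTopDocsFilterSketch {
    // Drop empty results before merging so the merge never mixes a plain
    // empty TopDocs with TopFieldDocs coming from shards that sorted by field.
    static List<TopDocs> dropEmpty(List<TopDocs> results) {
        return results.stream()
            .filter(td -> td.scoreDocs != null && td.scoreDocs.length > 0)
            .toList();
    }
}
```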
Closes #126118
In Mustache, this change returns null values (which convert to empty strings)
instead of throwing an exception when users have a template with
something like `a.8` where the index 8 is out of bounds. This matches the
behavior for non-existent keys like `a.d`.
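A minimal sketch of the new lookup behavior (a hypothetical helper, not the actual Mustache resolver):
```java
import java.util.List;

final class TemplateLookupSketch {
    // Hypothetical: resolve a numeric path segment like the "8" in "a.8".
    // Out-of-bounds now yields null (rendered as ""), matching missing keys.
    static Object lookupIndex(List<?> values, int index) {
        if (index < 0 || index >= values.size()) {
            return null; // previously: IndexOutOfBoundsException
        }
        return values.get(index);
    }
}
```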
Closes #55200