This updates the Gradle wrapper to 8.12.
We addressed deprecation warnings caused by the update, including:
- Fix change in TestOutputEvent API
- Fix deprecation in Groovy syntax
- Use latest ospackage plugin containing our fix
- Remove project usages at execution time
- Fix deprecated project references in repository-old-versions
Remove to, from, include_lower, include_upper range query params.
These params were removed from our documentation in v0.90.4 (d6ecdec)
and were deprecated in 8.16 in #113286.
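Queries that used these parameters can express the same bounds with `gt`/`gte`/`lt`/`lte`. A minimal sketch, assuming a hypothetical index `my-index` and numeric field `age`, equivalent to the removed `"from" : 10, "to" : 20, "include_lower" : true, "include_upper" : false`:
```
GET localhost:9200/my-index/_search
{
  "query" : {
    "range" : {
      "age" : {
        "gte" : 10,
        "lt" : 20
      }
    }
  }
}
```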
The libs projects are configured to all begin with `elasticsearch-`.
While it is desirable for the artifacts to share this consistent
prefix, it means the project names don't match their
directories. It also creates complexities for subproject naming
that must be manually adjusted.
This commit adjusts the project names for those under libs to be their
directory names. The resulting artifacts for these libs are kept the
same, all beginning with `elasticsearch-`.
Deprecate to, from, include_lower, include_upper range query params.
These params were removed from our documentation in v0.90.4 (d6ecdecc19)
but did not go through a deprecation cycle.
They are to be removed in v9.0.
Related to #81276. Closes #48538
JDK 23 removes the COMPAT locale provider, leaving CLDR as the only option. This commit configures Elasticsearch
to use the CLDR provider when on JDK 23, but still use the existing COMPAT provider when on JDK 22 and below.
This causes some differences in locale behaviour; this commit also adapts various tests so that they still work whether run with COMPAT or CLDR.
We use `ThreadPool#relativeTimeInMillis` as part of a timeout mechanism
in many places, often using a method reference so we can pass in a
`LongSupplier` for ease of testing. Each time we create a new method
reference we're opening ourselves up to potential cache misses on the
way through to the two volatile reads needed to actually return the
value in production. We're also only caching the nanos value so we must
do some arithmetic to convert it to millis each time.
This commit introduces a constant `LongSupplier` for use in all these
places, saving the allocation and the indirection through different
objects, and caching the millis value alongside the nanos to avoid the
arithmetic on each call.
Relates #104273
To simplify the migration away from version-based skip checks in YAML specs,
this PR adds a synthetic version feature `gte_vX.Y.Z` for any version at or before 8.14.0.
New test specs for 8.14 or later are expected to use respective new cluster features,
or a test-only feature supplied via ESRestTestCase#createAdditionalFeatureSpecifications
if sufficient.
The issue happens when we try to fetch multiple stored fields through
the FetchFieldsPhase, which we do when using `_fields`, since
we have a single shared instance of SingleFieldsVisitor per field
and document and use a shared `currentValues` array.
In order to avoid adding yet another parameter to createComponents,
the Tracer interface is replaced with TelemetryProvider.
This allows getting both the Tracer and (in the future) Metric interfaces.
This commit renames tracing to telemetry.tracing in both xpack/APM and Elasticsearch's org.elasticsearch.tracing.Tracer (the API).
The xpack/APM packages are renamed as follows:
- org.elasticsearch.telemetry.apm - the only exported package
- org.elasticsearch.telemetry.apm.settings - APMSettings
- org.elasticsearch.telemetry.apm.tracing - APMTracer
org.elasticsearch.tracing.Tracer is moved to org.elasticsearch.telemetry.tracing.Tracer (responsible for the majority of the changes in this PR).
Fixes #82794. Upgrade the spotless plugin, which addresses the issue
around formatting `instanceof` expressions. Formatting of statements
including lambdas seems to have improved too.
* extract ECS compatibility modes
* move builtin patterns loading logic to its own class
* split loading logic into its own methods and extract commonalities
* add 2 new alias methods to clearly state which patterns you are trying to get
Currently Elasticsearch always returns a shard failure once a runtime error arises from using a runtime field, the exception being script-less runtime fields. This also means that execution of the query for that shard stops, which is okay for development and exploration. In a production scenario, however, it is often desirable to ignore runtime errors and continue with the query execution.
This change adds a new `on_script_error` parameter to runtime field definitions, similar to the already existing
parameter for index-time scripted fields. When `on_script_error` is set to `continue`, errors from script execution are effectively ignored. This means affected documents don't show up in query results, but also don't prevent other matches from the same shard. Runtime fields accessed through the fields API don't return values on errors, and aggregations ignore documents that throw errors.
Note that this change affects scripted runtime fields only, while leaving default behaviour untouched. Also, ignored errors are not reported back to users for now.
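For illustration, a runtime field using the new parameter could be defined as follows (a sketch; the index name, field names and script are hypothetical):
```
PUT localhost:9200/my-index/_mappings
{
  "runtime" : {
    "duration_ms" : {
      "type" : "long",
      "script" : "emit(Long.parseLong(doc[\"duration.keyword\"].value))",
      "on_script_error" : "continue"
    }
  }
}
```
With `on_script_error` set to `continue`, documents for which `Long.parseLong` throws are simply skipped instead of failing the whole shard.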
Relates to #72143
This commit adds a new test framework for configuring and orchestrating
test clusters for both Java and YAML REST testing. This will eventually
replace the existing "test-clusters" Gradle plugin and the build-time
cluster orchestration.
With this change we add the allocation deciders
in createComponents, so we can simplify their use in the
Autoscaling plugin and implement a reserved state handler
in the future.
When calling RuntimeField.parseRuntimeFields() for fields defined in the
search request, we need to wrap the Map containing field definitions in another
Map that supports value removal, so that we don't inadvertently remove the
definitions from the root request. CompositeRuntimeField was not doing this
extra wrapping, which meant that requests that went to multiple shards and
that therefore parsed the definitions multiple times would throw an error
complaining that the fields parameter was missing, because the root request
had been modified.
This formats the result of the `fields` section of the `_search` API for
runtime `geo_point` fields using the `format` parameter like we do for
non-runtime `geo_point` fields. This changes the default format for
those fields from `lat, lon` to `geojson` with the option to get `wkt`
or any other format we support.
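For example, a search request can now ask for a runtime `geo_point` field in a specific format (a sketch; the index and field names are hypothetical):
```
GET localhost:9200/my-index/_search
{
  "fields" : [
    { "field" : "location", "format" : "wkt" }
  ]
}
```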
The fix achieves this by preserving the `double, double` nature of the
`geo_point` rather than encoding it immediately in the script, so callers can
use the results directly. The field fetchers use the `double, double` values natively,
preserving as much precision as possible. The queries quantize the points
exactly like Lucene indexing does, and like the script did before this PR.
Closes #85245
Part of #84369. Split out from #87696. Introduce tracing interfaces in
advance of adding APM support to Elasticsearch. The only implementation
at this point is a no-op class.
Ensure projects with only yaml, java or cluster tests also apply precommit checks.
We now only apply testing conventions for projects with an existing src test folder,
as the TestingConventionsTask is incompatible with projects with no test sourceSet.
This is a prerequisite to port more projects away from using StandaloneRestTestPlugin
and RestTestPlugin in favor of yaml, java or cluster tests with dedicated sourceSets.
We also fix deprecation warnings for the forbiddenPattern and filePermissions tasks about
implicit, undeclared dependencies on resources tasks.
Fixes split packages between server and the LLRC (and HLRC) by renaming
the server package to a more appropriate name that reflects the fact
that it is an internal client. That is, rename server's
org.elasticsearch.client to org.elasticsearch.client.internal.
Fix the split package org.elasticsearch.common.xcontent between server and the x-content lib. Move the x-content lib's exported package from org.elasticsearch.common.xcontent to org.elasticsearch.xcontent (following the naming convention of similar libraries). Removing split packages is a prerequisite to modularization.
This introduces a basic public yaml rest test plugin that is supposed to be used by external
Elasticsearch plugin authors. This is driven by #76215
- Rename yaml-rest-test to intern-yaml-rest-test
- Use public yaml plugin in example plugins
Co-authored-by: Mark Vieira <portugee@gmail.com>
Composite runtime fields do not have a mapped type - add a null check, a test and a Nullable annotation to SearchExecutionContext.getObjectMapper(name)
Closes #76716
We have recently introduced support for grok and dissect to the runtime fields
Painless context, which allows splitting a field into multiple fields. However, each runtime
field can only emit values for a single field. This commit introduces support for emitting
multiple fields from the same script.
The API call to define a runtime field that emits multiple fields is the following:
```
PUT localhost:9200/logs/_mappings
{
  "runtime" : {
    "log" : {
      "type" : "composite",
      "script" : "emit(grok(\"%{COMMONAPACHELOG}\").extract(doc[\"message.keyword\"].value))",
      "fields" : {
        "clientip" : {
          "type" : "ip"
        },
        "response" : {
          "type" : "long"
        }
      }
    }
  }
}
```
The script context for this new field type accepts two emit signatures:
* `emit(String, Object)`
* `emit(Map)`
Sub-fields need to be declared under fields in order to be discoverable through
the field_caps API and accessible through the search API.
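Once declared, these sub-fields behave like other runtime fields; for instance, a search against the mapping above could retrieve them through the fields API (a minimal sketch):
```
GET localhost:9200/logs/_search
{
  "fields" : [ "log.clientip", "log.response" ]
}
```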
The way that it emits multiple fields is by returning multiple MappedFieldTypes
from RuntimeField#asMappedFieldTypes. The sub-fields are instances of the
runtime fields that are already supported, with a little tweak to adapt the script
defined by their parent to an artificial script factory for each of the sub-fields
that makes its corresponding sub-field accessible. This approach allows reusing
all of the existing runtime fields code for the sub-fields.
The runtime section has been flat so far, as it has not supported objects until now.
That stays the same, meaning that runtime fields can have dots in their names.
However, because the introduction of the ability to emit multiple fields creates
two ways to define the same field, we have to make sure that a runtime field with
a certain name cannot be defined twice, which is why the following mappings are
rejected with the error `Found two runtime fields with same name [log.response]`:
```
PUT localhost:9200/logs/_mappings
{
  "runtime" : {
    "log.response" : {
      "type" : "keyword"
    },
    "log" : {
      "type" : "composite",
      "script" : "emit(\"response\", grok(\"%{COMMONAPACHELOG}\").extract(doc[\"message.keyword\"].value)?.response)",
      "fields" : {
        "response" : {
          "type" : "long"
        }
      }
    }
  }
}
```
Closes #68203
Change the formatter config to sort / order imports, and reformat the
codebase. We already had a config file for Eclipse users, so Spotless now
uses that.
The "Eclipse Code Formatter" plugin ought to be able to use this file as
well for import ordering, but in my experiments the results were poor.
Instead, use IntelliJ's `.editorconfig` support to configure import
ordering.
I've also added a config file for the formatter plugin.
Other changes:
* I've quietly enabled the `toggleOnOff` option for Spotless. It was
already possible to disable formatting for sections using the markers
for docs snippets, so enabling this option just accepts this reality
and makes it possible via `formatter:off` and `formatter:on` without
the restrictions around line length. It should still only be used as
a very last resort and with good reason.
* I've removed mention of the `paddedCell` option from the contributing
guide, since I haven't had to use that option for a very long time. I
moved the docs to the spotless config.
When libs/core was created, several classes were moved from server's
o.e.common package, but they were not moved to a new package. Split
packages need to go away long term, so that Elasticsearch can even think
about modularization. This commit moves all the classes under o.e.common
in core to o.e.core.
relates #73784
DoubleScriptFieldRangeQuery, which is used on runtime fields of type "double",
currently uses a simple double comparison for checking its upper and lower
bounds. Unfortunately -0.0 == 0.0, so when we want to exclude a
0.0 bound via "lt" the generated range query uses -0.0 as its upper bound, which
erroneously includes the 0.0 value. We can use `Double.compare` instead, which
handles this edge case well.
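For illustration, before this fix a query like the following on a runtime `double` field (the index and field names are hypothetical) could erroneously match documents whose value is exactly 0.0, because the excluded upper bound was represented as -0.0 and plain double comparison treats -0.0 and 0.0 as equal:
```
GET localhost:9200/my-index/_search
{
  "query" : {
    "range" : {
      "temperature" : {
        "lt" : 0.0
      }
    }
  }
}
```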
Closes #71786
This commit adds the ability to define an index-time geo_point field
with a script parameter, allowing you to calculate points from other
values within the indexed document.
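A mapping using the new parameter might look like this (a sketch; the field names and script are illustrative, assuming `lat` and `lon` values are indexed in the same document):
```
PUT localhost:9200/my-index
{
  "mappings" : {
    "properties" : {
      "lat" : { "type" : "double" },
      "lon" : { "type" : "double" },
      "location" : {
        "type" : "geo_point",
        "script" : "emit(doc[\"lat\"].value, doc[\"lon\"].value)"
      }
    }
  }
}
```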
There are a few places where we need to access all of the supported runtime fields script contexts. Up until now we have listed them in all those places, but a better way would be to have them listed in one place and access that same list from all consumers. This is what this commit introduces.
Along with the introduction of runtime fields contexts in ScriptModule, we rename the whitelist files so that they contain their corresponding context name to simplify looking them up.
This commit allows you to set 'script' and 'on_script_error' parameters
on date field mappers, meaning that runtime date fields can be made indexed
simply by moving their definitions from the runtime section of the mappings
to the properties section.
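For example, a date field can now be indexed with a script directly under `properties` (a sketch; the field names and script are hypothetical):
```
PUT localhost:9200/my-index
{
  "mappings" : {
    "properties" : {
      "timestamp_millis" : { "type" : "long" },
      "timestamp" : {
        "type" : "date",
        "script" : "emit(doc[\"timestamp_millis\"].value)",
        "on_script_error" : "continue"
      }
    }
  }
}
```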
This commit allows you to set 'script' and 'on_script_error' parameters
on IP field mappers, meaning that runtime IP fields can be made indexed
simply by moving their definitions from the runtime section of the mappings
to the properties section.
Currently we don't report any exceptions occurring during field_caps requests back to the user.
This PR adds a new failure section to the response which contains exceptions per index.
In addition, the response contains another field, `failed_indices`, with the number of indices that threw
an exception. If all of the requested indices fail, the whole request fails; otherwise the request succeeds
and it is up to the caller to check for potential errors in the response body.
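A purely illustrative sketch of what such a response body could look like (the `failed_indices` count and per-index failure section come from the description above; the index names, the exact shape of the failure entries, and the error details are hypothetical):
```
{
  "indices" : [ "logs-1", "logs-2" ],
  "failed_indices" : 1,
  "failures" : [
    {
      "indices" : [ "logs-3" ],
      "failure" : { "type" : "illegal_argument_exception", "reason" : "..." }
    }
  ],
  "fields" : {
    "message" : {
      "text" : { "type" : "text", "searchable" : true, "aggregatable" : false }
    }
  }
}
```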
Closes #68994