Migrated documentation into the main repo

This commit is contained in:
Clinton Gormley 2013-08-29 01:24:34 +02:00
parent b9558edeff
commit 822043347e
316 changed files with 23987 additions and 0 deletions

[[bulk]]
== Bulk API
The bulk API allows one to index and delete several documents in a
single request. Here is a sample usage:
[source,java]
--------------------------------------------------
import static org.elasticsearch.common.xcontent.XContentFactory.*;
BulkRequestBuilder bulkRequest = client.prepareBulk();
// either use client#prepare, or use Requests# to directly build index/delete requests
bulkRequest.add(client.prepareIndex("twitter", "tweet", "1")
.setSource(jsonBuilder()
.startObject()
.field("user", "kimchy")
.field("postDate", new Date())
.field("message", "trying out Elastic Search")
.endObject()
)
);
bulkRequest.add(client.prepareIndex("twitter", "tweet", "2")
.setSource(jsonBuilder()
.startObject()
.field("user", "kimchy")
.field("postDate", new Date())
.field("message", "another post")
.endObject()
)
);
BulkResponse bulkResponse = bulkRequest.execute().actionGet();
if (bulkResponse.hasFailures()) {
// process failures by iterating through each bulk response item
}
--------------------------------------------------
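If the response does report failures, each bulk item response can be inspected individually. The following is an illustrative sketch only (it assumes a connected client and the `BulkItemResponse` accessors of this API version):

[source,java]
--------------------------------------------------
// Iterate over the individual item responses and report the failed ones
for (BulkItemResponse item : bulkResponse) {
    if (item.isFailed()) {
        // index/type/id identify the document; getFailureMessage()
        // explains why this particular operation failed
        System.err.println(item.getIndex() + "/" + item.getType() + "/" + item.getId()
                + " failed: " + item.getFailureMessage());
    }
}
--------------------------------------------------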

[[client]]
== Client
You can use the *java client* in multiple ways:
* Perform standard <<index_,index>>, <<get,get>>,
<<delete,delete>> and <<search,search>> operations on an
existing cluster
* Perform administrative tasks on a running cluster
* Start full nodes when you want to run Elasticsearch embedded in your
own application or when you want to launch unit or integration tests
Obtaining an elasticsearch `Client` is simple. The most common way to
get a client is by:
1. creating an embedded link:#nodeclient[`Node`] that acts as a node
within a cluster, and
2. requesting a `Client` from that embedded `Node`.
Alternatively, you can create a link:#transportclient[`TransportClient`]
that connects to an existing cluster.
*Important:*
______________________________________________________________________________________________________________________________________________________________
Please note that you are encouraged to use the same version on the
client and cluster sides. You may hit incompatibility issues when
mixing major versions.
______________________________________________________________________________________________________________________________________________________________
[float]
=== Node Client
Instantiating a node based client is the simplest way to get a `Client`
that can execute operations against elasticsearch.
[source,java]
--------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
// on startup
Node node = nodeBuilder().node();
Client client = node.client();
// on shutdown
node.close();
--------------------------------------------------
When you start a `Node`, it joins an elasticsearch cluster. You can have
different clusters simply by setting the `cluster.name` setting, or
explicitly by using the `clusterName` method on the builder.
You can define `cluster.name` in the `/src/main/resources/elasticsearch.yml`
file in your project. As long as `elasticsearch.yml` is present on the
classpath, it will be used when you start your node.
[source,yaml]
--------------------------------------------------
cluster.name: yourclustername
--------------------------------------------------
Or in Java:
[source,java]
--------------------------------------------------
Node node = nodeBuilder().clusterName("yourclustername").node();
Client client = node.client();
--------------------------------------------------
The benefit of using the `Client` is that operations are automatically
routed to the node(s) they need to be executed on, without performing a
"double hop". For example, the index operation will automatically be
executed on the shard where the document will end up residing.
When you start a `Node`, the most important decision is whether it
should hold data or not. In other words, should indices and shards be
allocated to it? Many times we would like the clients to just be
clients, without shards being allocated to them. This is simple to
configure by setting either the `node.data` setting to `false` or
`node.client` to `true` (or the respective helper methods on
`NodeBuilder`):
[source,java]
--------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
// on startup
Node node = nodeBuilder().client(true).node();
Client client = node.client();
// on shutdown
node.close();
--------------------------------------------------
Another common usage is to start the `Node` and use the `Client` in
unit/integration tests. In such a case, we would like to start a "local"
`Node` (with a "local" discovery and transport). Again, this is just a
matter of a simple setting when starting the `Node`. Note, "local" here
means local on the JVM (well, actually class loader) level, meaning that
two *local* servers started within the same JVM will discover each other
and form a cluster.
[source,java]
--------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
// on startup
Node node = nodeBuilder().local(true).node();
Client client = node.client();
// on shutdown
node.close();
--------------------------------------------------
[float]
=== Transport Client
The `TransportClient` connects remotely to an elasticsearch cluster
using the transport module. It does not join the cluster, but simply
gets one or more initial transport addresses and communicates with them
in round robin fashion on each action (though most actions will probably
be "two hop" operations).
[source,java]
--------------------------------------------------
// on startup
Client client = new TransportClient()
.addTransportAddress(new InetSocketTransportAddress("host1", 9300))
.addTransportAddress(new InetSocketTransportAddress("host2", 9300));
// on shutdown
client.close();
--------------------------------------------------
Note that you have to set the cluster name if you use one different from
"elasticsearch":
[source,java]
--------------------------------------------------
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", "myClusterName").build();
Client client = new TransportClient(settings);
//Add transport addresses and do something with the client...
--------------------------------------------------
Or use the `elasticsearch.yml` file, as shown in the link:#nodeclient[Node
Client section].
The client allows you to sniff the rest of the cluster and add the
discovered nodes into its list of machines to use. In this case, note
that the IP addresses used will be the ones that the other nodes were
started with (the "publish" address). In order to enable sniffing, set
`client.transport.sniff` to `true`:
[source,java]
--------------------------------------------------
Settings settings = ImmutableSettings.settingsBuilder()
.put("client.transport.sniff", true).build();
TransportClient client = new TransportClient(settings);
--------------------------------------------------
Other transport client level settings include:
[cols="<,<",options="header",]
|=======================================================================
|Parameter |Description
|`client.transport.ignore_cluster_name` |Set to `true` to ignore cluster
name validation of connected nodes. (since 0.19.4)
|`client.transport.ping_timeout` |The time to wait for a ping response
from a node. Defaults to `5s`.
|`client.transport.nodes_sampler_interval` |How often to sample / ping
the nodes listed and connected. Defaults to `5s`.
|=======================================================================
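These settings can also be provided via the client's `elasticsearch.yml`,
assuming the file is on the client's classpath as described in the
link:#nodeclient[Node Client section]. The values below are purely illustrative:

[source,yaml]
--------------------------------------------------
client.transport.ignore_cluster_name: false
client.transport.ping_timeout: 10s
client.transport.nodes_sampler_interval: 30s
--------------------------------------------------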

[[count]]
== Count API
The count API allows you to easily execute a query and get the number of
matches for that query. It can be executed across one or more indices
and across one or more types. The query can be provided using the
link:{ref}/query-dsl.html[Query DSL].
[source,java]
--------------------------------------------------
import static org.elasticsearch.index.query.xcontent.FilterBuilders.*;
import static org.elasticsearch.index.query.xcontent.QueryBuilders.*;
CountResponse response = client.prepareCount("test")
.setQuery(termQuery("_type", "type1"))
.execute()
.actionGet();
--------------------------------------------------
For more information on the count operation, check out the REST
link:{ref}/search-count.html[count] docs.
[float]
=== Operation Threading
The count API allows you to set the threading model that the operation
will use when the actual execution of the API happens on the same node
(i.e. the API is executed on a shard that is allocated on that same
server).
There are three threading modes. The `NO_THREADS` mode means that the
count operation will be executed on the calling thread. The
`SINGLE_THREAD` mode means that the count operation will be executed on
a single different thread for all local shards. The `THREAD_PER_SHARD`
mode means that the count operation will be executed on a different
thread for each local shard.
The default mode is `SINGLE_THREAD`.
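Assuming the broadcast-operation threading enum of this API version, the mode could be set on the request before execution. This is a hedged sketch; the `setOperationThreading` setter and the enum's package are assumptions based on this version's broadcast API:

[source,java]
--------------------------------------------------
import org.elasticsearch.action.support.broadcast.BroadcastOperationThreading;

// Execute the count on a dedicated thread per local shard
CountResponse response = client.prepareCount("test")
    .setQuery(termQuery("_type", "type1"))
    .setOperationThreading(BroadcastOperationThreading.THREAD_PER_SHARD)
    .execute()
    .actionGet();
--------------------------------------------------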

[[delete-by-query]]
== Delete By Query API
The delete by query API allows you to delete documents from one or more
indices and one or more types based on a <<query-dsl-queries,query>>. Here
is an example:
[source,java]
--------------------------------------------------
import static org.elasticsearch.index.query.FilterBuilders.*;
import static org.elasticsearch.index.query.QueryBuilders.*;
DeleteByQueryResponse response = client.prepareDeleteByQuery("test")
.setQuery(termQuery("_type", "type1"))
.execute()
.actionGet();
--------------------------------------------------
For more information on the delete by query operation, check out the
link:{ref}/docs-delete-by-query.html[delete_by_query API]
docs.

[[delete]]
== Delete API
The delete API allows you to delete a typed JSON document from a specific
index based on its ID. The following example deletes the JSON document
from an index called twitter, under a type called tweet, with ID 1:
[source,java]
--------------------------------------------------
DeleteResponse response = client.prepareDelete("twitter", "tweet", "1")
.execute()
.actionGet();
--------------------------------------------------
For more information on the delete operation, check out the
link:{ref}/docs-delete.html[delete API] docs.
[float]
=== Operation Threading
The delete API allows you to set the threading model that the operation
will use when the actual execution of the API happens on the same node
(i.e. the API is executed on a shard that is allocated on that same
server).
The options are to execute the operation on a different thread, or to
execute it on the calling thread (note that the API is still async). By
default, `operationThreaded` is set to `true` which means the operation
is executed on a different thread. Here is an example that sets it to
`false`:
[source,java]
--------------------------------------------------
DeleteResponse response = client.prepareDelete("twitter", "tweet", "1")
.setOperationThreaded(false)
.execute()
.actionGet();
--------------------------------------------------

[[facets]]
== Facets
Elasticsearch provides a full Java API to play with facets. See the
link:{ref}/search-facets.html[Facets guide].
Use the facet builders factory (`FacetBuilders`) to create each facet
you want to compute when querying, and add it to your search request:
[source,java]
--------------------------------------------------
SearchResponse sr = node.client().prepareSearch()
.setQuery( /* your query */ )
.addFacet( /* add a facet */ )
.execute().actionGet();
--------------------------------------------------
Note that you can add more than one facet. See
link:{ref}/search-search.html[Search Java API] for details.
To build facet requests, use `FacetBuilders` helpers. Just import them
in your class:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.facet.FacetBuilders;
--------------------------------------------------
[float]
=== Facets
[float]
==== Terms Facet
Here is how you can use
link:{ref}/search-facets-terms-facet.html[Terms Facet]
with Java API.
[float]
===== Prepare facet request
Here is an example on how to create the facet request:
[source,java]
--------------------------------------------------
FacetBuilders.termsFacet("f")
.field("brand")
.size(10);
--------------------------------------------------
[float]
===== Use facet response
Import Facet definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.facet.terms.*;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
TermsFacet f = (TermsFacet) sr.facets().facetsAsMap().get("f");
f.getTotalCount(); // Total terms doc count
f.getOtherCount(); // Not shown terms doc count
f.getMissingCount(); // Without term doc count
// For each entry
for (TermsFacet.Entry entry : f) {
entry.getTerm(); // Term
entry.getCount(); // Doc count
}
--------------------------------------------------
[float]
==== Range Facet
Here is how you can use
link:{ref}/search-facets-range-facet.html[Range Facet]
with Java API.
[float]
===== Prepare facet request
Here is an example on how to create the facet request:
[source,java]
--------------------------------------------------
FacetBuilders.rangeFacet("f")
.field("price") // Field to compute on
.addUnboundedFrom(3) // from -infinity to 3 (excluded)
.addRange(3, 6) // from 3 to 6 (excluded)
.addUnboundedTo(6); // from 6 to +infinity
--------------------------------------------------
[float]
===== Use facet response
Import Facet definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.facet.range.*;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
RangeFacet f = (RangeFacet) sr.facets().facetsAsMap().get("f");
// For each entry
for (RangeFacet.Entry entry : f) {
entry.getFrom(); // Range from requested
entry.getTo(); // Range to requested
entry.getCount(); // Doc count
entry.getMin(); // Min value
entry.getMax(); // Max value
entry.getMean(); // Mean
entry.getTotal(); // Sum of values
}
--------------------------------------------------
[float]
==== Histogram Facet
Here is how you can use
link:{ref}/search-facets-histogram-facet.html[Histogram
Facet] with Java API.
[float]
===== Prepare facet request
Here is an example on how to create the facet request:
[source,java]
--------------------------------------------------
HistogramFacetBuilder facet = FacetBuilders.histogramFacet("f")
.field("price")
.interval(1);
--------------------------------------------------
[float]
===== Use facet response
Import Facet definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.facet.histogram.*;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
HistogramFacet f = (HistogramFacet) sr.facets().facetsAsMap().get("f");
// For each entry
for (HistogramFacet.Entry entry : f) {
entry.getKey(); // Key (X-Axis)
entry.getCount(); // Doc count (Y-Axis)
}
--------------------------------------------------
[float]
==== Date Histogram Facet
Here is how you can use
link:{ref}/search-facets-date-histogram-facet.html[Date
Histogram Facet] with Java API.
[float]
===== Prepare facet request
Here is an example on how to create the facet request:
[source,java]
--------------------------------------------------
FacetBuilders.dateHistogramFacet("f")
.field("date") // Your date field
.interval("year"); // You can also use "quarter", "month", "week", "day",
// "hour" and "minute" or notation like "1.5h" or "2w"
--------------------------------------------------
[float]
===== Use facet response
Import Facet definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.facet.datehistogram.*;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
DateHistogramFacet f = (DateHistogramFacet) sr.facets().facetsAsMap().get("f");
// For each entry
for (DateHistogramFacet.Entry entry : f) {
entry.getTime(); // Date in ms since epoch (X-Axis)
entry.getCount(); // Doc count (Y-Axis)
}
--------------------------------------------------
[float]
==== Filter Facet (not facet filter)
Here is how you can use
link:{ref}/search-facets-filter-facet.html[Filter Facet]
with Java API.
If you are looking for a way to apply a filter to a facet, have a look at
link:#facet-filter[facet filter] using the Java API.
[float]
===== Prepare facet request
Here is an example on how to create the facet request:
[source,java]
--------------------------------------------------
FacetBuilders.filterFacet("f",
FilterBuilders.termFilter("brand", "heineken")); // Your Filter here
--------------------------------------------------
See <<query-dsl-filters,Filters>> to
learn how to build filters using Java.
[float]
===== Use facet response
Import Facet definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.facet.filter.*;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
FilterFacet f = (FilterFacet) sr.facets().facetsAsMap().get("f");
f.getCount(); // Number of docs that matched
--------------------------------------------------
[float]
==== Query Facet
Here is how you can use
link:{ref}/search-facets-query-facet.html[Query Facet]
with Java API.
[float]
===== Prepare facet request
Here is an example on how to create the facet request:
[source,java]
--------------------------------------------------
FacetBuilders.queryFacet("f",
QueryBuilders.matchQuery("brand", "heineken"));
--------------------------------------------------
[float]
===== Use facet response
Import Facet definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.facet.query.*;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
QueryFacet f = (QueryFacet) sr.facets().facetsAsMap().get("f");
f.getCount(); // Number of docs that matched
--------------------------------------------------
See <<query-dsl-queries,Queries>> to
learn how to build queries using Java.
[float]
==== Statistical
Here is how you can use
link:{ref}/search-facets-statistical-facet.html[Statistical
Facet] with Java API.
[float]
===== Prepare facet request
Here is an example on how to create the facet request:
[source,java]
--------------------------------------------------
FacetBuilders.statisticalFacet("f")
.field("price");
--------------------------------------------------
[float]
===== Use facet response
Import Facet definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.facet.statistical.*;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
StatisticalFacet f = (StatisticalFacet) sr.facets().facetsAsMap().get("f");
f.getCount(); // Doc count
f.getMin(); // Min value
f.getMax(); // Max value
f.getMean(); // Mean
f.getTotal(); // Sum of values
f.getStdDeviation(); // Standard Deviation
f.getSumOfSquares(); // Sum of Squares
f.getVariance(); // Variance
--------------------------------------------------
[float]
==== Terms Stats Facet
Here is how you can use
link:{ref}/search-facets-terms-stats-facet.html[Terms
Stats Facet] with Java API.
[float]
===== Prepare facet request
Here is an example on how to create the facet request:
[source,java]
--------------------------------------------------
FacetBuilders.termsStatsFacet("f")
.keyField("brand")
.valueField("price");
--------------------------------------------------
[float]
===== Use facet response
Import Facet definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.facet.termsstats.*;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
TermsStatsFacet f = (TermsStatsFacet) sr.facets().facetsAsMap().get("f");
f.getTotalCount(); // Total terms doc count
f.getOtherCount(); // Not shown terms doc count
f.getMissingCount(); // Without term doc count
// For each entry
for (TermsStatsFacet.Entry entry : f) {
entry.getTerm(); // Term
entry.getCount(); // Doc count
entry.getMin(); // Min value
entry.getMax(); // Max value
entry.getMean(); // Mean
entry.getTotal(); // Sum of values
}
--------------------------------------------------
[float]
==== Geo Distance Facet
Here is how you can use
link:{ref}/search-facets-geo-distance-facet.html[Geo
Distance Facet] with Java API.
[float]
===== Prepare facet request
Here is an example on how to create the facet request:
[source,java]
--------------------------------------------------
FacetBuilders.geoDistanceFacet("f")
.field("pin.location") // Field containing coordinates we want to compare with
.point(40, -70) // Point from where we start (0)
.addUnboundedFrom(10) // 0 to 10 km (excluded)
.addRange(10, 20) // 10 to 20 km (excluded)
.addRange(20, 100) // 20 to 100 km (excluded)
.addUnboundedTo(100) // from 100 km to infinity (and beyond ;-) )
.unit(DistanceUnit.KILOMETERS); // All distances are in kilometers. Can be MILES
--------------------------------------------------
[float]
===== Use facet response
Import Facet definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.facet.geodistance.*;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
GeoDistanceFacet f = (GeoDistanceFacet) sr.facets().facetsAsMap().get("f");
// For each entry
for (GeoDistanceFacet.Entry entry : f) {
entry.getFrom(); // Distance from requested
entry.getTo(); // Distance to requested
entry.getCount(); // Doc count
entry.getMin(); // Min value
entry.getMax(); // Max value
entry.getTotal(); // Sum of values
entry.getMean(); // Mean
}
--------------------------------------------------
[float]
=== Facet filters (not Filter Facet)
By default, facets are applied to the query result set, regardless of
any filters.
If you need to compute facets with the same filters, or even with other
filters, you can add the filter to any facet using the
`AbstractFacetBuilder#facetFilter(FilterBuilder)` method:
[source,java]
--------------------------------------------------
FacetBuilders
.termsFacet("f").field("brand") // Your facet
.facetFilter( // Your filter here
FilterBuilders.termFilter("colour", "pale")
);
--------------------------------------------------
For example, you can reuse the same filter you created for your query:
[source,java]
--------------------------------------------------
// A common filter
FilterBuilder filter = FilterBuilders.termFilter("colour", "pale");
TermsFacetBuilder facet = FacetBuilders.termsFacet("f")
.field("brand")
.facetFilter(filter); // We apply it to the facet
SearchResponse sr = node.client().prepareSearch()
.setQuery(QueryBuilders.matchAllQuery())
.setFilter(filter) // We apply it to the query
.addFacet(facet)
.execute().actionGet();
--------------------------------------------------
See documentation on how to build
<<query-dsl-filters,Filters>>.
[float]
=== Scope
By default, facets are computed within the query result set. But you can
compute facets from all documents in the index, regardless of the query,
using the `global` parameter:
[source,java]
--------------------------------------------------
TermsFacetBuilder facet = FacetBuilders.termsFacet("f")
.field("brand")
.global(true);
--------------------------------------------------

[[get]]
== Get API
The get API allows you to get a typed JSON document from the index based
on its ID. The following example gets a JSON document from an index
called twitter, under a type called tweet, with ID 1:
[source,java]
--------------------------------------------------
GetResponse response = client.prepareGet("twitter", "tweet", "1")
.execute()
.actionGet();
--------------------------------------------------
For more information on the get operation, check out the REST
link:{ref}/docs-get.html[get] docs.
[float]
=== Operation Threading
The get API allows you to set the threading model that the operation
will use when the actual execution of the API happens on the same node
(i.e. the API is executed on a shard that is allocated on that same
server).
The options are to execute the operation on a different thread, or to
execute it on the calling thread (note that the API is still async). By
default, `operationThreaded` is set to `true` which means the operation
is executed on a different thread. Here is an example that sets it to
`false`:
[source,java]
--------------------------------------------------
GetResponse response = client.prepareGet("twitter", "tweet", "1")
.setOperationThreaded(false)
.execute()
.actionGet();
--------------------------------------------------

[[java-api]]
= Java API
:ref: http://www.elasticsearch.org/guide/elasticsearch/reference/current
[preface]
== Preface
This section describes the Java API that elasticsearch provides. All
elasticsearch operations are executed using a
<<client,Client>> object. All
operations are completely asynchronous in nature (they either accept a
listener or return a future).
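For example, the same index operation can be executed in either style. The listener variant below is an illustrative sketch (it assumes an existing `client` and a prepared `request`):

[source,java]
--------------------------------------------------
// Future style: block until the response arrives
IndexResponse response = client.index(request).actionGet();

// Listener style: react asynchronously when the response arrives
client.index(request, new ActionListener<IndexResponse>() {
    @Override
    public void onResponse(IndexResponse response) {
        // handle the response
    }

    @Override
    public void onFailure(Throwable e) {
        // handle the failure
    }
});
--------------------------------------------------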
Additionally, operations on a client may be accumulated and executed in
<<bulk,Bulk>>.
Note that all of the APIs are exposed through the
Java API (in fact, the Java API is used internally to execute them).
[float]
== Maven Repository
Elasticsearch is hosted on
http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22elasticsearch%22[Maven
Central].
For example, you can define the latest version in your `pom.xml` file:
[source,xml]
--------------------------------------------------
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>${es.version}</version>
</dependency>
--------------------------------------------------
include::client.asciidoc[]
include::index_.asciidoc[]
include::get.asciidoc[]
include::delete.asciidoc[]
include::bulk.asciidoc[]
include::search.asciidoc[]
include::count.asciidoc[]
include::delete-by-query.asciidoc[]
include::facets.asciidoc[]
include::percolate.asciidoc[]
include::query-dsl-queries.asciidoc[]
include::query-dsl-filters.asciidoc[]

[[index_]]
== Index API
The index API allows one to index a typed JSON document into a specific
index and make it searchable.
[float]
=== Generate JSON document
There are several ways of generating a JSON document:
* Manually (aka do it yourself) using native `byte[]` or as a `String`
* Using a `Map` that will be automatically converted to its JSON
equivalent
* Using a third party library to serialize your beans, such as
http://wiki.fasterxml.com/JacksonHome[Jackson]
* Using the built-in helpers: `XContentFactory.jsonBuilder()`
Internally, each type is converted to `byte[]` (so a String is converted
to a `byte[]`). Therefore, if the object is already in this form, use it
directly. The `jsonBuilder` is a highly optimized JSON generator that
directly constructs a `byte[]`.
[float]
==== Do It Yourself
Nothing really difficult here, but note that you will have to encode
dates according to the
link:{ref}/mapping-date-format.html[Date Format].
[source,java]
--------------------------------------------------
String json = "{" +
"\"user\":\"kimchy\"," +
"\"postDate\":\"2013-01-30\"," +
"\"message\":\"trying out Elastic Search\"" +
"}";
--------------------------------------------------
[float]
==== Using Map
A `Map` is a collection of key/value pairs. It maps naturally onto a JSON
structure:
[source,java]
--------------------------------------------------
Map<String, Object> json = new HashMap<String, Object>();
json.put("user","kimchy");
json.put("postDate",new Date());
json.put("message","trying out Elastic Search");
--------------------------------------------------
[float]
==== Serialize your beans
Elasticsearch already uses Jackson, but shades it under the
`org.elasticsearch.common.jackson` package. +
So, you can add your own Jackson version to your `pom.xml` file or to
your classpath. See the http://wiki.fasterxml.com/JacksonDownload[Jackson
Download Page].
For example:
[source,xml]
--------------------------------------------------
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.1.3</version>
</dependency>
--------------------------------------------------
Then, you can start serializing your beans to JSON:
[source,java]
--------------------------------------------------
import com.fasterxml.jackson.databind.*;
// instantiate a JSON mapper
ObjectMapper mapper = new ObjectMapper(); // create once, reuse
// generate json
String json = mapper.writeValueAsString(yourbeaninstance);
--------------------------------------------------
[float]
==== Use Elasticsearch helpers
Elasticsearch provides built-in helpers to generate JSON content.
[source,java]
--------------------------------------------------
import static org.elasticsearch.common.xcontent.XContentFactory.*;
XContentBuilder builder = jsonBuilder()
.startObject()
.field("user", "kimchy")
.field("postDate", new Date())
.field("message", "trying out Elastic Search")
    .endObject();
--------------------------------------------------
Note that you can also add arrays with the `startArray(String)` and
`endArray()` methods. By the way, the `field` method accepts many object
types: you can pass numbers, dates and even other `XContentBuilder`
objects directly.
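For instance, a field whose value is an array could be built like this (the field names are illustrative):

[source,java]
--------------------------------------------------
XContentBuilder builder = jsonBuilder()
    .startObject()
        .field("user", "kimchy")
        .startArray("tags")          // {"tags": ["elasticsearch", "java"]}
            .value("elasticsearch")
            .value("java")
        .endArray()
    .endObject();
--------------------------------------------------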
If you need to see the generated JSON content, you can use the
`string()` method.
[source,java]
--------------------------------------------------
String json = builder.string();
--------------------------------------------------
[float]
=== Index document
The following example indexes a JSON document into an index called
twitter, under a type called tweet, with ID 1:
[source,java]
--------------------------------------------------
import static org.elasticsearch.common.xcontent.XContentFactory.*;
IndexResponse response = client.prepareIndex("twitter", "tweet", "1")
.setSource(jsonBuilder()
.startObject()
.field("user", "kimchy")
.field("postDate", new Date())
.field("message", "trying out Elastic Search")
.endObject()
)
.execute()
.actionGet();
--------------------------------------------------
Note that you can also index your documents as a JSON string, and that
you don't have to give an ID:
[source,java]
--------------------------------------------------
String json = "{" +
"\"user\":\"kimchy\"," +
"\"postDate\":\"2013-01-30\"," +
"\"message\":\"trying out Elastic Search\"" +
"}";
IndexResponse response = client.prepareIndex("twitter", "tweet")
.setSource(json)
.execute()
.actionGet();
--------------------------------------------------
The `IndexResponse` object will give you a report:
[source,java]
--------------------------------------------------
// Index name
String _index = response.index();
// Type name
String _type = response.type();
// Document ID (generated or not)
String _id = response.id();
// Version (if it's the first time you index this document, you will get: 1)
long _version = response.version();
--------------------------------------------------
If you use percolation while indexing, the `IndexResponse` object will give
you the percolators that have matched:
[source,java]
--------------------------------------------------
IndexResponse response = client.prepareIndex("twitter", "tweet", "1")
.setSource(json)
.setPercolate("*")
.execute()
.actionGet();
List<String> matches = response.matches();
--------------------------------------------------
For more information on the index operation, check out the REST
link:{ref}/docs-index_.html[index] docs.
[float]
=== Operation Threading
The index API allows you to set the threading model that will be used
when the actual execution of the API happens on the same
node (i.e. the API is executed on a shard that is allocated on the same
server).
The options are to execute the operation on a different thread, or to
execute it on the calling thread (note that the API is still asynchronous). By
default, `operationThreaded` is set to `true`, which means the operation
is executed on a different thread.
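If you prefer to run the operation on the calling thread instead, you can turn this off per request. A minimal sketch, reusing the `client` and `json` from the examples above:

```java
IndexResponse response = client.prepareIndex("twitter", "tweet", "1")
    .setSource(json)
    .setOperationThreaded(false) // execute on the calling thread
    .execute()
    .actionGet();
```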
[[percolate]]
== Percolate API
The percolator allows you to register queries against an index, and then
send `percolate` requests which include a doc, getting back the
queries that match that doc out of the set of registered queries.
Read the main {ref}/search-percolate.html[percolate]
documentation before reading this guide.
[source,java]
--------------------------------------------------
//This is the query we're registering in the percolator
QueryBuilder qb = termQuery("content", "amazing");
//Index the query = register it in the percolator
client.prepareIndex("_percolator", "myIndexName", "myDesignatedQueryName")
.setSource(jsonBuilder()
.startObject()
.field("query", qb) // Register the query
.endObject())
.setRefresh(true) // Needed when the query shall be available immediately
.execute().actionGet();
--------------------------------------------------
This indexes the above term query under the name
*myDesignatedQueryName*.
In order to check a document against the registered queries, use this
code:
[source,java]
--------------------------------------------------
//Build a document to check against the percolator
XContentBuilder docBuilder = XContentFactory.jsonBuilder().startObject();
docBuilder.field("doc").startObject(); //This is needed to designate the document
docBuilder.field("content", "This is amazing!");
docBuilder.endObject(); //End of the doc field
docBuilder.endObject(); //End of the JSON root object
//Percolate
PercolateResponse response =
client.preparePercolate("myIndexName", "myDocumentType").setSource(docBuilder).execute().actionGet();
//Iterate over the results
for(String result : response) {
//Handle the result which is the name of
//the query in the percolator
}
--------------------------------------------------
[[query-dsl-filters]]
== Query DSL - Filters
elasticsearch provides a full Java query dsl in a similar manner to the
REST link:{ref}/query-dsl.html[Query DSL]. The factory for filter
builders is `FilterBuilders`.
Once your query is ready, you can use the <<search,Search API>>.
See also how to build <<query-dsl-queries,Queries>>.
To use `FilterBuilders` just import them in your class:
[source,java]
--------------------------------------------------
import static org.elasticsearch.index.query.FilterBuilders.*;
--------------------------------------------------
Note that you can easily print (aka debug) the JSON generated by a filter
using the `toString()` method on the `FilterBuilder` object.
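For example (the field and value here are just illustrations):

```java
FilterBuilder filter = FilterBuilders.termFilter("user", "kimchy");
// Prints the JSON representation of the filter, handy while debugging
System.out.println(filter.toString());
```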
[float]
=== And Filter
See link:{ref}/query-dsl-and-filter.html[And Filter]
[source,java]
--------------------------------------------------
FilterBuilders.andFilter(
FilterBuilders.rangeFilter("postDate").from("2010-03-01").to("2010-04-01"),
FilterBuilders.prefixFilter("name.second", "ba")
);
--------------------------------------------------
Note that you can cache the result using
`AndFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
[float]
=== Bool Filter
See link:{ref}/query-dsl-bool-filter.html[Bool Filter]
[source,java]
--------------------------------------------------
FilterBuilders.boolFilter()
.must(FilterBuilders.termFilter("tag", "wow"))
.mustNot(FilterBuilders.rangeFilter("age").from("10").to("20"))
.should(FilterBuilders.termFilter("tag", "sometag"))
.should(FilterBuilders.termFilter("tag", "sometagtag"));
--------------------------------------------------
Note that you can cache the result using
`BoolFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
[float]
=== Exists Filter
See link:{ref}/query-dsl-exists-filter.html[Exists Filter].
[source,java]
--------------------------------------------------
FilterBuilders.existsFilter("user");
--------------------------------------------------
[float]
=== Ids Filter
See link:{ref}/query-dsl-ids-filter.html[IDs Filter]
[source,java]
--------------------------------------------------
FilterBuilders.idsFilter("my_type", "type2").addIds("1", "4", "100");
// Type is optional
FilterBuilders.idsFilter().addIds("1", "4", "100");
--------------------------------------------------
[float]
=== Limit Filter
See link:{ref}/query-dsl-limit-filter.html[Limit Filter]
[source,java]
--------------------------------------------------
FilterBuilders.limitFilter(100);
--------------------------------------------------
[float]
=== Type Filter
See link:{ref}/query-dsl-type-filter.html[Type Filter]
[source,java]
--------------------------------------------------
FilterBuilders.typeFilter("my_type");
--------------------------------------------------
[float]
=== Geo Bounding Box Filter
See link:{ref}/query-dsl-geo-bounding-box-filter.html[Geo
Bounding Box Filter]
[source,java]
--------------------------------------------------
FilterBuilders.geoBoundingBoxFilter("pin.location")
.topLeft(40.73, -74.1)
.bottomRight(40.717, -73.99);
--------------------------------------------------
Note that you can cache the result using
`GeoBoundingBoxFilterBuilder#cache(boolean)` method. See
<<query-dsl-filters-caching>>.
[float]
=== GeoDistance Filter
See link:{ref}/query-dsl-geo-distance-filter.html[Geo
Distance Filter]
[source,java]
--------------------------------------------------
FilterBuilders.geoDistanceFilter("pin.location")
.point(40, -70)
.distance(200, DistanceUnit.KILOMETERS)
.optimizeBbox("memory") // Can be also "indexed" or "none"
.geoDistance(GeoDistance.ARC); // Or GeoDistance.PLANE
--------------------------------------------------
Note that you can cache the result using
`GeoDistanceFilterBuilder#cache(boolean)` method. See
<<query-dsl-filters-caching>>.
[float]
=== Geo Distance Range Filter
See link:{ref}/query-dsl-geo-distance-range-filter.html[Geo
Distance Range Filter]
[source,java]
--------------------------------------------------
FilterBuilders.geoDistanceRangeFilter("pin.location")
.point(40, -70)
.from("200km")
.to("400km")
.includeLower(true)
.includeUpper(false)
.optimizeBbox("memory") // Can be also "indexed" or "none"
.geoDistance(GeoDistance.ARC); // Or GeoDistance.PLANE
--------------------------------------------------
Note that you can cache the result using
`GeoDistanceRangeFilterBuilder#cache(boolean)` method. See
<<query-dsl-filters-caching>>.
[float]
=== Geo Polygon Filter
See link:{ref}/query-dsl-geo-polygon-filter.html[Geo Polygon
Filter]
[source,java]
--------------------------------------------------
FilterBuilders.geoPolygonFilter("pin.location")
.addPoint(40, -70)
.addPoint(30, -80)
.addPoint(20, -90);
--------------------------------------------------
Note that you can cache the result using
`GeoPolygonFilterBuilder#cache(boolean)` method. See
<<query-dsl-filters-caching>>.
[float]
=== Geo Shape Filter
See link:{ref}/query-dsl-geo-shape-filter.html[Geo Shape
Filter]
Note: the `geo_shape` type uses `Spatial4J` and `JTS`, both of which are
optional dependencies. Consequently you must add `Spatial4J` and `JTS`
to your classpath in order to use this type:
[source,xml]
-----------------------------------------------
<dependency>
<groupId>com.spatial4j</groupId>
<artifactId>spatial4j</artifactId>
<version>0.3</version>
</dependency>
<dependency>
<groupId>com.vividsolutions</groupId>
<artifactId>jts</artifactId>
<version>1.12</version>
<exclusions>
<exclusion>
<groupId>xerces</groupId>
<artifactId>xercesImpl</artifactId>
</exclusion>
</exclusions>
</dependency>
-----------------------------------------------
[source,java]
--------------------------------------------------
// Import Spatial4J shapes
import com.spatial4j.core.context.SpatialContext;
import com.spatial4j.core.shape.Shape;
import com.spatial4j.core.shape.impl.RectangleImpl;
// Also import ShapeRelation
import org.elasticsearch.common.geo.ShapeRelation;
--------------------------------------------------
[source,java]
--------------------------------------------------
// Shape within another
filter = FilterBuilders.geoShapeFilter("location",
new RectangleImpl(0,10,0,10,SpatialContext.GEO))
.relation(ShapeRelation.WITHIN);
// Intersect shapes
filter = FilterBuilders.geoShapeFilter("location",
new PointImpl(0, 0, SpatialContext.GEO))
.relation(ShapeRelation.INTERSECTS);
// Using pre-indexed shapes
filter = FilterBuilders.geoShapeFilter("location", "New Zealand", "countries")
.relation(ShapeRelation.DISJOINT);
--------------------------------------------------
[float]
=== Has Child / Has Parent Filters
See:
* link:{ref}/query-dsl-has-child-filter.html[Has Child Filter]
* link:{ref}/query-dsl-has-parent-filter.html[Has Parent Filter]
[source,java]
--------------------------------------------------
// Has Child
FilterBuilders.hasChildFilter("blog_tag",
QueryBuilders.termQuery("tag", "something"));
// Has Parent
FilterBuilders.hasParentFilter("blog",
QueryBuilders.termQuery("tag", "something"));
--------------------------------------------------
[float]
=== Match All Filter
See link:{ref}/query-dsl-match-all-filter.html[Match All Filter]
[source,java]
--------------------------------------------------
FilterBuilders.matchAllFilter();
--------------------------------------------------
[float]
=== Missing Filter
See link:{ref}/query-dsl-missing-filter.html[Missing Filter]
[source,java]
--------------------------------------------------
FilterBuilders.missingFilter("user")
.existence(true)
.nullValue(true);
--------------------------------------------------
[float]
=== Not Filter
See link:{ref}/query-dsl-not-filter.html[Not Filter]
[source,java]
--------------------------------------------------
FilterBuilders.notFilter(
FilterBuilders.rangeFilter("price").from("1").to("2"));
--------------------------------------------------
[float]
=== Numeric Range Filter
See link:{ref}/query-dsl-numeric-range-filter.html[Numeric
Range Filter]
[source,java]
--------------------------------------------------
FilterBuilders.numericRangeFilter("age")
.from(10)
.to(20)
.includeLower(true)
.includeUpper(false);
--------------------------------------------------
Note that you can cache the result using
`NumericRangeFilterBuilder#cache(boolean)` method. See
<<query-dsl-filters-caching>>.
[float]
=== Or Filter
See link:{ref}/query-dsl-or-filter.html[Or Filter]
[source,java]
--------------------------------------------------
FilterBuilders.orFilter(
FilterBuilders.termFilter("name.second", "banon"),
FilterBuilders.termFilter("name.nick", "kimchy")
);
--------------------------------------------------
Note that you can cache the result using
`OrFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
[float]
=== Prefix Filter
See link:{ref}/query-dsl-prefix-filter.html[Prefix Filter]
[source,java]
--------------------------------------------------
FilterBuilders.prefixFilter("user", "ki");
--------------------------------------------------
Note that you can cache the result using
`PrefixFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
[float]
=== Query Filter
See link:{ref}/query-dsl-query-filter.html[Query Filter]
[source,java]
--------------------------------------------------
FilterBuilders.queryFilter(
QueryBuilders.queryString("this AND that OR thus")
);
--------------------------------------------------
Note that you can cache the result using
`QueryFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
[float]
=== Range Filter
See link:{ref}/query-dsl-range-filter.html[Range Filter]
[source,java]
--------------------------------------------------
FilterBuilders.rangeFilter("age")
.from("10")
.to("20")
.includeLower(true)
.includeUpper(false);
// A simplified form using gte, gt, lt or lte
FilterBuilders.rangeFilter("age")
.gte("10")
.lt("20");
--------------------------------------------------
Note that you can ask not to cache the result using
`RangeFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
[float]
=== Script Filter
See link:{ref}/query-dsl-script-filter.html[Script Filter]
[source,java]
--------------------------------------------------
FilterBuilder filter = FilterBuilders.scriptFilter(
"doc['age'].value > param1"
).addParam("param1", 10);
--------------------------------------------------
Note that you can cache the result using
`ScriptFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
[float]
=== Term Filter
See link:{ref}/query-dsl-term-filter.html[Term Filter]
[source,java]
--------------------------------------------------
FilterBuilders.termFilter("user", "kimchy");
--------------------------------------------------
Note that you can ask not to cache the result using
`TermFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
[float]
=== Terms Filter
See link:{ref}/query-dsl-terms-filter.html[Terms Filter]
[source,java]
--------------------------------------------------
FilterBuilders.termsFilter("user", "kimchy", "elasticsearch")
.execution("plain"); // Optional, can be also "bool", "and" or "or"
// or "bool_nocache", "and_nocache" or "or_nocache"
--------------------------------------------------
Note that you can ask not to cache the result using
`TermsFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
[float]
=== Nested Filter
See link:{ref}/query-dsl-nested-filter.html[Nested Filter]
[source,java]
--------------------------------------------------
FilterBuilders.nestedFilter("obj1",
QueryBuilders.boolQuery()
.must(QueryBuilders.matchQuery("obj1.name", "blue"))
.must(QueryBuilders.rangeQuery("obj1.count").gt(5))
);
--------------------------------------------------
Note that you can ask not to cache the result using
`NestedFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
[[query-dsl-filters-caching]]
[float]
=== Caching
Some filters are cached by default and others are not. You can
fine-tune this using the `cache(boolean)` method where it exists. For example:
[source,java]
--------------------------------------------------
FilterBuilder filter = FilterBuilders.andFilter(
FilterBuilders.rangeFilter("postDate").from("2010-03-01").to("2010-04-01"),
FilterBuilders.prefixFilter("name.second", "ba")
)
.cache(true);
--------------------------------------------------
[[query-dsl-queries]]
== Query DSL - Queries
elasticsearch provides a full Java query dsl in a similar manner to the
REST link:{ref}/query-dsl.html[Query DSL]. The factory for query
builders is `QueryBuilders`. Once your query is ready, you can use the
<<search,Search API>>.
See also how to build <<query-dsl-filters,Filters>>
To use `QueryBuilders` just import them in your class:
[source,java]
--------------------------------------------------
import static org.elasticsearch.index.query.QueryBuilders.*;
--------------------------------------------------
Note that you can easily print (aka debug) the JSON generated by a query
using the `toString()` method on the `QueryBuilder` object.
The `QueryBuilder` can then be used with any API that accepts a query,
such as `count` and `search`.
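A minimal sketch of reusing one builder with both APIs (assumes an existing `client` and a `twitter` index, as in the earlier examples):

```java
QueryBuilder qb = QueryBuilders.termQuery("user", "kimchy");

// The same builder feeds the count API...
CountResponse countResponse = client.prepareCount("twitter")
    .setQuery(qb)
    .execute().actionGet();

// ...and the search API
SearchResponse searchResponse = client.prepareSearch("twitter")
    .setQuery(qb)
    .execute().actionGet();
```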
[float]
=== Match Query
See link:{ref}/query-dsl-match-query.html[Match Query]
[source,java]
--------------------------------------------------
QueryBuilder qb = QueryBuilders.matchQuery("name", "kimchy elasticsearch");
--------------------------------------------------
[float]
=== MultiMatch Query
See link:{ref}/query-dsl-multi-match-query.html[MultiMatch
Query]
[source,java]
--------------------------------------------------
QueryBuilder qb = QueryBuilders.multiMatchQuery(
"kimchy elasticsearch", // Text you are looking for
"user", "message" // Fields you query on
);
--------------------------------------------------
[float]
=== Boolean Query
See link:{ref}/query-dsl-bool-query.html[Boolean Query]
[source,java]
--------------------------------------------------
QueryBuilder qb = QueryBuilders
.boolQuery()
.must(termQuery("content", "test1"))
.must(termQuery("content", "test4"))
.mustNot(termQuery("content", "test2"))
.should(termQuery("content", "test3"));
--------------------------------------------------
[float]
=== Boosting Query
See link:{ref}/query-dsl-boosting-query.html[Boosting Query]
[source,java]
--------------------------------------------------
QueryBuilders.boostingQuery()
.positive(QueryBuilders.termQuery("name","kimchy"))
.negative(QueryBuilders.termQuery("name","dadoonet"))
.negativeBoost(0.2f);
--------------------------------------------------
[float]
=== IDs Query
See link:{ref}/query-dsl-ids-query.html[IDs Query]
[source,java]
--------------------------------------------------
QueryBuilders.idsQuery().ids("1", "2");
--------------------------------------------------
[float]
=== Custom Score Query
See link:{ref}/query-dsl-custom-score-query.html[Custom Score
Query]
[source,java]
--------------------------------------------------
QueryBuilders.customScoreQuery(QueryBuilders.matchAllQuery()) // Your query here
.script("_score * doc['price'].value"); // Your script here
// If the script has parameters, use the same script and provide the parameters to it.
QueryBuilders.customScoreQuery(QueryBuilders.matchAllQuery())
.script("_score * doc['price'].value / pow(param1, param2)")
.param("param1", 2)
.param("param2", 3.1);
--------------------------------------------------
[float]
=== Custom Boost Factor Query
See
link:{ref}/query-dsl-custom-boost-factor-query.html[Custom
Boost Factor Query]
[source,java]
--------------------------------------------------
QueryBuilders.customBoostFactorQuery(QueryBuilders.matchAllQuery()) // Your query
.boostFactor(3.1f);
--------------------------------------------------
[float]
=== Constant Score Query
See link:{ref}/query-dsl-constant-score-query.html[Constant
Score Query]
[source,java]
--------------------------------------------------
// Using with Filters
QueryBuilders.constantScoreQuery(FilterBuilders.termFilter("name","kimchy"))
.boost(2.0f);
// With Queries
QueryBuilders.constantScoreQuery(QueryBuilders.termQuery("name","kimchy"))
.boost(2.0f);
--------------------------------------------------
[float]
=== Disjunction Max Query
See link:{ref}/query-dsl-dis-max-query.html[Disjunction Max
Query]
[source,java]
--------------------------------------------------
QueryBuilders.disMaxQuery()
.add(QueryBuilders.termQuery("name","kimchy")) // Your queries
.add(QueryBuilders.termQuery("name","elasticsearch")) // Your queries
.boost(1.2f)
.tieBreaker(0.7f);
--------------------------------------------------
[float]
=== Field Query
See link:{ref}/query-dsl-field-query.html[Field Query]
[source,java]
--------------------------------------------------
QueryBuilders.fieldQuery("name", "+kimchy -dadoonet");
// Note that you can write the same query using queryString query.
QueryBuilders.queryString("+kimchy -dadoonet").field("name");
--------------------------------------------------
[float]
=== Fuzzy Like This (Field) Query (flt and flt_field)
See:
* link:{ref}/query-dsl-flt-query.html[Fuzzy Like This Query]
* link:{ref}/query-dsl-flt-field-query.html[Fuzzy Like This Field Query]
[source,java]
--------------------------------------------------
// flt Query
QueryBuilders.fuzzyLikeThisQuery("name.first", "name.last") // Fields
.likeText("text like this one") // Text
.maxQueryTerms(12); // Max num of Terms
// in generated queries
// flt_field Query
QueryBuilders.fuzzyLikeThisFieldQuery("name.first") // Only on single field
.likeText("text like this one")
.maxQueryTerms(12);
--------------------------------------------------
[float]
=== FuzzyQuery
See link:{ref}/query-dsl-fuzzy-query.html[Fuzzy Query]
[source,java]
--------------------------------------------------
QueryBuilder qb = QueryBuilders.fuzzyQuery("name", "kimzhy");
--------------------------------------------------
[float]
=== Has Child / Has Parent
See:
* link:{ref}/query-dsl-has-child-query.html[Has Child Query]
* link:{ref}/query-dsl-has-parent-query.html[Has Parent]
[source,java]
--------------------------------------------------
// Has Child
QueryBuilders.hasChildQuery("blog_tag",
    QueryBuilders.termQuery("tag","something"));
// Has Parent
QueryBuilders.hasParentQuery("blog",
QueryBuilders.termQuery("tag","something"));
--------------------------------------------------
[float]
=== MatchAll Query
See link:{ref}/query-dsl-match-all-query.html[Match All
Query]
[source,java]
--------------------------------------------------
QueryBuilder qb = QueryBuilders.matchAllQuery();
--------------------------------------------------
[float]
=== More Like This (Field) Query (mlt and mlt_field)
See:
* link:{ref}/query-dsl-mlt-query.html[More Like This Query]
* link:{ref}/query-dsl-mlt-field-query.html[More Like This Field Query]
[source,java]
--------------------------------------------------
// mlt Query
QueryBuilders.moreLikeThisQuery("name.first", "name.last") // Fields
.likeText("text like this one") // Text
.minTermFreq(1) // Ignore Threshold
.maxQueryTerms(12); // Max num of Terms
// in generated queries
// mlt_field Query
QueryBuilders.moreLikeThisFieldQuery("name.first") // Only on single field
.likeText("text like this one")
.minTermFreq(1)
.maxQueryTerms(12);
--------------------------------------------------
[float]
=== Prefix Query
See link:{ref}/query-dsl-prefix-query.html[Prefix Query]
[source,java]
--------------------------------------------------
QueryBuilders.prefixQuery("brand", "heine");
--------------------------------------------------
[float]
=== QueryString Query
See link:{ref}/query-dsl-query-string-query.html[QueryString Query]
[source,java]
--------------------------------------------------
QueryBuilder qb = QueryBuilders.queryString("+kimchy -elasticsearch");
--------------------------------------------------
[float]
=== Range Query
See link:{ref}/query-dsl-range-query.html[Range Query]
[source,java]
--------------------------------------------------
QueryBuilder qb = QueryBuilders
.rangeQuery("price")
.from(5)
.to(10)
.includeLower(true)
.includeUpper(false);
--------------------------------------------------
[float]
=== Span Queries (first, near, not, or, term)
See:
* link:{ref}/query-dsl-span-first-query.html[Span First Query]
* link:{ref}/query-dsl-span-near-query.html[Span Near Query]
* link:{ref}/query-dsl-span-not-query.html[Span Not Query]
* link:{ref}/query-dsl-span-or-query.html[Span Or Query]
* link:{ref}/query-dsl-span-term-query.html[Span Term Query]
[source,java]
--------------------------------------------------
// Span First
QueryBuilders.spanFirstQuery(
QueryBuilders.spanTermQuery("user", "kimchy"), // Query
3 // Max End position
);
// Span Near
QueryBuilders.spanNearQuery()
.clause(QueryBuilders.spanTermQuery("field","value1")) // Span Term Queries
.clause(QueryBuilders.spanTermQuery("field","value2"))
.clause(QueryBuilders.spanTermQuery("field","value3"))
.slop(12) // Slop factor
.inOrder(false)
.collectPayloads(false);
// Span Not
QueryBuilders.spanNotQuery()
.include(QueryBuilders.spanTermQuery("field","value1"))
.exclude(QueryBuilders.spanTermQuery("field","value2"));
// Span Or
QueryBuilders.spanOrQuery()
.clause(QueryBuilders.spanTermQuery("field","value1"))
.clause(QueryBuilders.spanTermQuery("field","value2"))
.clause(QueryBuilders.spanTermQuery("field","value3"));
// Span Term
QueryBuilders.spanTermQuery("user","kimchy");
--------------------------------------------------
[float]
=== Term Query
See link:{ref}/query-dsl-term-query.html[Term Query]
[source,java]
--------------------------------------------------
QueryBuilder qb = QueryBuilders.termQuery("name", "kimchy");
--------------------------------------------------
[float]
=== Terms Query
See link:{ref}/query-dsl-terms-query.html[Terms Query]
[source,java]
--------------------------------------------------
QueryBuilders.termsQuery("tags", // field
"blue", "pill") // values
.minimumMatch(1); // How many terms must match
--------------------------------------------------
[float]
=== Top Children Query
See link:{ref}/query-dsl-top-children-query.html[Top Children Query]
[source,java]
--------------------------------------------------
QueryBuilders.topChildrenQuery(
"blog_tag", // field
QueryBuilders.termQuery("tag", "something") // Query
)
.score("max") // max, sum or avg
.factor(5)
.incrementalFactor(2);
--------------------------------------------------
[float]
=== Wildcard Query
See link:{ref}/query-dsl-wildcard-query.html[Wildcard Query]
[source,java]
--------------------------------------------------
QueryBuilders.wildcardQuery("user", "k?mc*");
--------------------------------------------------
[float]
=== Nested Query
See link:{ref}/query-dsl-nested-query.html[Nested Query]
[source,java]
--------------------------------------------------
QueryBuilders.nestedQuery("obj1", // Path
QueryBuilders.boolQuery() // Your query
.must(QueryBuilders.matchQuery("obj1.name", "blue"))
.must(QueryBuilders.rangeQuery("obj1.count").gt(5))
)
.scoreMode("avg"); // max, total, avg or none
--------------------------------------------------
[float]
=== Custom Filters Score Query
See
link:{ref}/query-dsl-custom-filters-score-query.html[Custom Filters Score Query]
[source,java]
--------------------------------------------------
QueryBuilders.customFiltersScoreQuery(
QueryBuilders.matchAllQuery()) // Query
// Filters with their boost factors
.add(FilterBuilders.rangeFilter("age").from(0).to(10), 3)
.add(FilterBuilders.rangeFilter("age").from(10).to(20), 2)
.scoreMode("first"); // first, min, max, total, avg or multiply
--------------------------------------------------
[float]
=== Indices Query
See link:{ref}/query-dsl-indices-query.html[Indices Query]
[source,java]
--------------------------------------------------
// Using another query when no match for the main one
QueryBuilders.indicesQuery(
QueryBuilders.termQuery("tag", "wow"),
"index1", "index2"
)
.noMatchQuery(QueryBuilders.termQuery("tag", "kow"));
// Using all (match all) or none (match no documents)
QueryBuilders.indicesQuery(
QueryBuilders.termQuery("tag", "wow"),
"index1", "index2"
)
.noMatchQuery("all"); // all or none
--------------------------------------------------
[float]
=== GeoShape Query
See link:{ref}/query-dsl-geo-shape-query.html[GeoShape Query]
Note: the `geo_shape` type uses `Spatial4J` and `JTS`, both of which are
optional dependencies. Consequently you must add `Spatial4J` and `JTS`
to your classpath in order to use this type:
[source,xml]
--------------------------------------------------
<dependency>
<groupId>com.spatial4j</groupId>
<artifactId>spatial4j</artifactId>
<version>0.3</version>
</dependency>
<dependency>
<groupId>com.vividsolutions</groupId>
<artifactId>jts</artifactId>
<version>1.12</version>
<exclusions>
<exclusion>
<groupId>xerces</groupId>
<artifactId>xercesImpl</artifactId>
</exclusion>
</exclusions>
</dependency>
--------------------------------------------------
[source,java]
--------------------------------------------------
// Import Spatial4J shapes
import com.spatial4j.core.context.SpatialContext;
import com.spatial4j.core.shape.Shape;
import com.spatial4j.core.shape.impl.RectangleImpl;
// Also import ShapeRelation
import org.elasticsearch.common.geo.ShapeRelation;
--------------------------------------------------
[source,java]
--------------------------------------------------
// Shape within another
QueryBuilders.geoShapeQuery("location",
new RectangleImpl(0,10,0,10,SpatialContext.GEO))
.relation(ShapeRelation.WITHIN);
// Intersect shapes
QueryBuilders.geoShapeQuery("location",
new PointImpl(0, 0, SpatialContext.GEO))
.relation(ShapeRelation.INTERSECTS);
// Using pre-indexed shapes
QueryBuilders.geoShapeQuery("location", "New Zealand", "countries")
.relation(ShapeRelation.DISJOINT);
--------------------------------------------------
[[search]]
== Search API
The search API allows you to execute a search query and get back search hits
that match the query. It can be executed across one or more indices and
across one or more types. The query can either be provided using the
<<query-dsl-queries,query Java API>> or
the <<query-dsl-filters,filter Java API>>.
The body of the search request is built using the
`SearchSourceBuilder`. Here is an example:
[source,java]
--------------------------------------------------
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;
import static org.elasticsearch.index.query.FilterBuilders.*;
import static org.elasticsearch.index.query.QueryBuilders.*;
--------------------------------------------------
[source,java]
--------------------------------------------------
SearchResponse response = client.prepareSearch("index1", "index2")
.setTypes("type1", "type2")
.setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
.setQuery(QueryBuilders.termQuery("multi", "test")) // Query
.setFilter(FilterBuilders.rangeFilter("age").from(12).to(18)) // Filter
.setFrom(0).setSize(60).setExplain(true)
.execute()
.actionGet();
--------------------------------------------------
Note that all parameters are optional. Here is the smallest search call
you can write:
[source,java]
--------------------------------------------------
// MatchAll on the whole cluster with all default options
SearchResponse response = client.prepareSearch().execute().actionGet();
--------------------------------------------------
For more information on the search operation, check out the REST
link:{ref}/search.html[search] docs.
[float]
=== Using scrolls in Java
Read the link:{ref}/search-request-scroll.html[scroll documentation]
first!
[source,java]
--------------------------------------------------
import static org.elasticsearch.index.query.FilterBuilders.*;
import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = termQuery("multi", "test");
SearchResponse scrollResp = client.prepareSearch("test")
.setSearchType(SearchType.SCAN)
.setScroll(new TimeValue(60000))
.setQuery(qb)
.setSize(100).execute().actionGet(); //100 hits per shard will be returned for each scroll
//Scroll until no hits are returned
while (true) {
scrollResp = client.prepareSearchScroll(scrollResp.getScrollId()).setScroll(new TimeValue(600000)).execute().actionGet();
for (SearchHit hit : scrollResp.getHits()) {
//Handle the hit...
}
//Break condition: No hits are returned
if (scrollResp.hits().hits().length == 0) {
break;
}
}
--------------------------------------------------
[float]
=== Operation Threading
The search API allows you to set the threading model that will be used
when the actual execution of the API happens on the same
node (i.e. the API is executed on a shard that is allocated on the same
server).
There are three threading modes. The `NO_THREADS` mode means that the
search operation will be executed on the calling thread. The
`SINGLE_THREAD` mode means that the search operation will be executed on
a single different thread for all local shards. The `THREAD_PER_SHARD`
mode means that the search operation will be executed on a different
thread for each local shard.
The default mode is `SINGLE_THREAD`.
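As a sketch, the mode can be set on the request builder (this assumes the `SearchOperationThreading` enum from `org.elasticsearch.action.search` and an existing `client`):

```java
SearchResponse response = client.prepareSearch("twitter")
    .setQuery(QueryBuilders.matchAllQuery())
    // run the local shard searches on one thread per shard
    .setOperationThreading(SearchOperationThreading.THREAD_PER_SHARD)
    .execute().actionGet();
```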
[float]
=== MultiSearch API
See link:{ref}/search-multi-search.html[MultiSearch API Query]
documentation
[source,java]
--------------------------------------------------
SearchRequestBuilder srb1 = node.client()
.prepareSearch().setQuery(QueryBuilders.queryString("elasticsearch")).setSize(1);
SearchRequestBuilder srb2 = node.client()
.prepareSearch().setQuery(QueryBuilders.matchQuery("name", "kimchy")).setSize(1);
MultiSearchResponse sr = node.client().prepareMultiSearch()
.add(srb1)
.add(srb2)
.execute().actionGet();
// You will get all individual responses from MultiSearchResponse#responses()
long nbHits = 0;
for (MultiSearchResponse.Item item : sr.responses()) {
SearchResponse response = item.response();
nbHits += response.hits().totalHits();
}
--------------------------------------------------
[float]
=== Using Facets
The following code shows how to add two facets within your search:
[source,java]
--------------------------------------------------
SearchResponse sr = node.client().prepareSearch()
.setQuery(QueryBuilders.matchAllQuery())
.addFacet(FacetBuilders.termsFacet("f1").field("field"))
.addFacet(FacetBuilders.dateHistogramFacet("f2").field("birth").interval("year"))
.execute().actionGet();
// Get your facet results
TermsFacet f1 = (TermsFacet) sr.facets().facetsAsMap().get("f1");
DateHistogramFacet f2 = (DateHistogramFacet) sr.facets().facetsAsMap().get("f2");
--------------------------------------------------
See <<facets,Facets Java API>>
documentation for details.