Merge branch 'master' into enhancement/remove_node_client_setting

This commit is contained in:
javanna 2016-03-21 17:18:23 +01:00 committed by Luca Cavanna
commit bf390a935e
1348 changed files with 37009 additions and 31476 deletions


@@ -12,12 +12,16 @@ Obtaining an elasticsearch `Client` is simple. The most common way to
get a client is by creating a <<transport-client,`TransportClient`>>
that connects to a cluster.
*Important:*
______________________________________________________________________________________________________________________________________________________________
Please note that you are encouraged to use the same version on client
and cluster sides. You may hit some incompatibility issues when mixing
major versions.
______________________________________________________________________________________________________________________________________________________________
[IMPORTANT]
==============================
The client must have the same major version (e.g. `2.x`, or `5.x`) as the
nodes in the cluster. Clients may connect to clusters which have a different
minor version (e.g. `2.3.x`) but it is possible that new functionality may not
be supported. Ideally, the client should have the same version as the
cluster.
==============================
[[transport-client]]
@@ -53,11 +57,23 @@ Client client = TransportClient.builder().settings(settings).build();
//Add transport addresses and do something with the client...
--------------------------------------------------
The client allows sniffing the rest of the cluster, which adds data nodes
into its list of machines to use. In this case, note that the IP addresses
used will be the ones that the other nodes were started with (the
"publish" address). In order to enable it, set the
`client.transport.sniff` to `true`:
The Transport client comes with a cluster sniffing feature which
allows it to dynamically add new hosts and remove old ones.
When sniffing is enabled, the transport client will connect to the nodes in its
internal node list, which is built via calls to `addTransportAddress`.
After this, the client will call the internal cluster state API on those nodes
to discover available data nodes. The internal node list of the client will
be replaced with those data nodes only. This list is refreshed every five seconds by default.
Note that the IP addresses the sniffer connects to are the ones declared as the `publish`
address in those nodes' elasticsearch config.
Keep in mind that the list might not include the original node it connected to
if that node is not a data node. If, for instance, you initially connect to a
master node, after sniffing no further requests will go to that master node,
but rather to any data nodes instead. The transport client excludes non-data
nodes to avoid sending search traffic to master-only nodes.
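The discovery flow just described can be sketched as follows. This is a hedged illustration, not part of this commit: the host name `host1` and query id-free seeding are placeholders, assuming the 2.x-era `TransportClient` builder API shown elsewhere in this page.

[source,java]
--------------------------------------------------
// Hypothetical seed host "host1"; with sniffing enabled, the client later
// replaces this internal node list with the data nodes it discovers
// through the cluster state API of the seed nodes.
TransportClient client = TransportClient.builder().build()
        .addTransportAddress(new InetSocketTransportAddress(
                InetAddress.getByName("host1"), 9300));
--------------------------------------------------

If `host1` is a master-only node, it serves only as a bootstrap contact: after the first sniff round, requests go to the discovered data nodes instead.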
In order to enable sniffing, set `client.transport.sniff` to `true`:
[source,java]
--------------------------------------------------
Settings settings = Settings.settingsBuilder()
        .put("client.transport.sniff", true).build();
TransportClient client = TransportClient.builder()
        .settings(settings).build();
--------------------------------------------------


@@ -142,8 +142,6 @@ include::search.asciidoc[]
include::aggs.asciidoc[]
include::percolate.asciidoc[]
include::query-dsl.asciidoc[]
include::indexed-scripts.asciidoc[]


@@ -10,9 +10,9 @@ to your classpath in order to use this type:
[source,xml]
-----------------------------------------------
<dependency>
<groupId>com.spatial4j</groupId>
<groupId>org.locationtech.spatial4j</groupId>
<artifactId>spatial4j</artifactId>
<version>0.4.1</version> <1>
<version>0.6</version> <1>
</dependency>
<dependency>
@@ -27,7 +27,7 @@ to your classpath in order to use this type:
</exclusions>
</dependency>
-----------------------------------------------
<1> check for updates in http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.spatial4j%22%20AND%20a%3A%22spatial4j%22[Maven Central]
<1> check for updates in http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.locationtech.spatial4j%22%20AND%20a%3A%22spatial4j%22[Maven Central]
<2> check for updates in http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.vividsolutions%22%20AND%20a%3A%22jts%22[Maven Central]
[source,java]


@@ -1,12 +1,8 @@
[[percolate]]
== Percolate API
[[java-query-percolator-query]]
==== Percolator query
The percolator allows one to register queries against an index, and then
send `percolate` requests which include a doc, getting back the
queries that match on that doc out of the set of registered queries.
Read the main {ref}/search-percolate.html[percolate]
documentation before reading this guide.
See:
* {ref}/query-dsl-percolator-query.html[Percolator Query]
[source,java]
--------------------------------------------------
@@ -37,14 +33,12 @@ docBuilder.field("doc").startObject(); //This is needed to designate the document
docBuilder.field("content", "This is amazing!");
docBuilder.endObject(); //End of the doc field
docBuilder.endObject(); //End of the JSON root object
//Percolate
PercolateResponse response = client.preparePercolate()
.setIndices("myIndexName")
.setDocumentType("myDocumentType")
.setSource(docBuilder).execute().actionGet();
// Percolate, by executing the percolator query in the query dsl:
SearchResponse response = client.prepareSearch("myIndexName")
.setQuery(QueryBuilders.percolatorQuery("myDocumentType", docBuilder.bytes()))
.get();
//Iterate over the results
for(PercolateResponse.Match match : response) {
//Handle the result which is the name of
//the query in the percolator
for(SearchHit hit : response.getHits()) {
// Handle each hit, which represents a matching percolator query
}
--------------------------------------------------
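For context, the query being matched must first be registered with the percolator. A hedged sketch in the pre-5.0 registration style, where queries are indexed under the reserved `.percolator` type (the index name, query id, and query content below are illustrative, not from this commit):

[source,java]
--------------------------------------------------
//Register a query with the percolator; "myQueryId" is a placeholder
client.prepareIndex("myIndexName", ".percolator", "myQueryId")
        .setSource(XContentFactory.jsonBuilder()
                .startObject()
                .field("query", QueryBuilders.matchQuery("content", "amazing"))
                .endObject())
        .setRefresh(true)
        .get();
--------------------------------------------------

Once registered, the `percolatorQuery` search shown above returns this query as a hit for any document whose `content` field matches.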


@@ -27,3 +27,5 @@ include::template-query.asciidoc[]
include::script-query.asciidoc[]
include::percolator-query.asciidoc[]