For other versions, see the Versioned plugin docs.
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Starting with Elasticsearch 5.3, there's an HTTP setting called http.content_type.required. If this option is set to true, and you are using Logstash 2.4 through 5.2, you need to update the Elasticsearch filter plugin to version 3.1.1 or higher.
Search Elasticsearch for a previous log event and copy some fields from it into the current event. Below are two complete examples of how this filter might be used.
The first example uses the legacy query parameter, where the user is limited to an Elasticsearch query_string. Whenever Logstash receives an "end" event, it uses this elasticsearch filter to find the matching "start" event based on some operation identifier. Then it copies the @timestamp field from the "start" event into a new field on the "end" event. Finally, using a combination of the "date" filter and the "ruby" filter, we calculate the time duration in hours between the two events.
```
if [type] == "end" {
  elasticsearch {
    hosts => ["es-server"]
    query => "type:start AND operation:%{[opid]}"
    fields => { "@timestamp" => "started" }
  }
  date {
    match => ["[started]", "ISO8601"]
    target => "[started]"
  }
  ruby {
    code => "event.set('duration_hrs', (event.get('@timestamp') - event.get('started')) / 3600)"
  }
}
```
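The duration arithmetic inside the ruby filter can be checked in plain Ruby. The timestamps below are hypothetical stand-ins for the event's @timestamp and started fields:

```ruby
require 'time'

# Hypothetical stand-ins for the "start" and "end" event timestamps
started  = Time.parse("2024-01-01T10:00:00Z")
finished = Time.parse("2024-01-01T13:30:00Z")

# Same arithmetic as the ruby filter: difference in seconds, divided by 3600
duration_hrs = (finished - started) / 3600
# duration_hrs => 3.5
```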
The second example reproduces the first but uses a query_template. A query_template is a full Elasticsearch query DSL document and supports the standard Logstash field substitution syntax. The configuration below issues the same query as the first example, but uses the template shown.
```
if [type] == "end" {
  elasticsearch {
    hosts => ["es-server"]
    query_template => "template.json"
    fields => { "@timestamp" => "started" }
  }
  date {
    match => ["[started]", "ISO8601"]
    target => "[started]"
  }
  ruby {
    code => "event.set('duration_hrs', (event.get('@timestamp') - event.get('started')) / 3600)"
  }
}
```
template.json:
{ "size": 1, "sort" : [ { "@timestamp" : "desc" } ], "query": { "query_string": { "query": "type:start AND operation:%{[opid]}" } }, "_source": ["@timestamp"]}
As illustrated above, through the use of opid, fields from the Logstash event can be referenced within the template. The template will be populated per event prior to being used to query Elasticsearch.
Notice also that when you use query_template, the Logstash attributes result_size and sort will be ignored. They should be specified directly in the JSON template, as shown in the example above.
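The per-event substitution of %{[opid]} into the template can be illustrated with a small Ruby sketch. The substitute helper below is a simplified, hypothetical stand-in for Logstash's field-reference substitution, not its actual implementation:

```ruby
require 'json'

# Simplified stand-in for Logstash's %{[field]} substitution
def substitute(template, event)
  template.gsub(/%\{\[?(\w+)\]?\}/) { event[Regexp.last_match(1)] }
end

template = '{"query":{"query_string":{"query":"type:start AND operation:%{[opid]}"}}}'
event = { "opid" => "abc123" }

# Render the template with this event's field values, then parse it
rendered = JSON.parse(substitute(template, event))
# rendered["query"]["query_string"]["query"] => "type:start AND operation:abc123"
```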
This plugin supports the following configuration options plus the Common Options described later.
Setting | Input type | Required |
---|---|---|
aggregation_fields | hash | No |
ca_file | a valid filesystem path | No |
docinfo_fields | hash | No |
enable_sort | boolean | No |
fields | array | No |
hosts | array | No |
index | string | No |
password | password | No |
query | string | No |
query_template | string | No |
result_size | number | No |
sort | string | No |
ssl | boolean | No |
tag_on_failure | array | No |
user | string | No |
Also see Common Options for a list of options supported by all filter plugins.
aggregation_fields

- Value type is hash
- Default value is {}

Hash of aggregation names to copy from the Elasticsearch response into Logstash event fields.
Example:
```
filter {
  elasticsearch {
    aggregation_fields => {
      "my_agg_name" => "my_ls_field"
    }
  }
}
```
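In effect, the mapping copies each named aggregation from the Elasticsearch response body into the event. A hypothetical Ruby sketch of that copy (the response shape and field names are illustrative):

```ruby
require 'json'

# Hypothetical Elasticsearch response body containing one aggregation
response = JSON.parse('{"aggregations":{"my_agg_name":{"value":42.0}}}')
event = {}

# Copy each named aggregation result into its mapped Logstash event field
mapping = { "my_agg_name" => "my_ls_field" }
mapping.each { |agg, field| event[field] = response["aggregations"][agg] }
# event["my_ls_field"] => {"value" => 42.0}
```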
ca_file

- Value type is path
- There is no default value for this setting.

SSL Certificate Authority file.
docinfo_fields

- Value type is hash
- Default value is {}

Hash of docinfo fields to copy from the old event (found via Elasticsearch) into the new event.
Example:
```
filter {
  elasticsearch {
    docinfo_fields => {
      "_id" => "document_id"
      "_index" => "document_index"
    }
  }
}
```
fields

- Value type is array
- Default value is {}

An array of fields to copy from the old event (found via Elasticsearch) into the new event, currently being processed.
In the following example, the values of @timestamp and event_id on the event found via Elasticsearch are copied to the current event's started and start_id fields, respectively:
```
fields => {
  "@timestamp" => "started"
  "event_id" => "start_id"
}
```
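Conceptually, the fields mapping is a per-event copy from the found document into the current event. A hypothetical Ruby sketch with illustrative values:

```ruby
# Hypothetical event found via Elasticsearch, and the event being processed
found   = { "@timestamp" => "2024-01-01T10:00:00Z", "event_id" => "op-42" }
current = { "type" => "end" }

# Copy each mapped field from the found event into the current event
mapping = { "@timestamp" => "started", "event_id" => "start_id" }
mapping.each { |src, dst| current[dst] = found[src] }
# current["started"]  => "2024-01-01T10:00:00Z"
# current["start_id"] => "op-42"
```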
["localhost:9200"]
List of elasticsearch hosts to use for querying.
""
Comma-delimited list of index names to search; use _all
or empty string to perform the operation on all indices.Field substitution (e.g. index-name-%{date_field}
) is available
query

- Value type is string
- There is no default value for this setting.

Elasticsearch query string. For more information, read the Elasticsearch query string documentation at: https://www.elastic.co/guide/en/elasticsearch/reference/master/query-dsl-query-string-query.html#query-string-syntax
query_template

- Value type is string
- There is no default value for this setting.

File path to an Elasticsearch query in DSL format. For more information, read the Elasticsearch query documentation at: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html
"@timestamp:desc"
Comma-delimited list of <field>:<direction>
pairs that define the sort order
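Each pair in the comma-delimited string corresponds to one sort clause. A hypothetical Ruby sketch of how such a string breaks down (the second pair is illustrative, not a default):

```ruby
# Default sort value, with a second illustrative pair appended
sort = "@timestamp:desc,event_id:asc"

# Split into one { field => direction } clause per pair
clauses = sort.split(",").map do |pair|
  field, direction = pair.split(":")
  { field => direction }
end
# clauses => [{"@timestamp"=>"desc"}, {"event_id"=>"asc"}]
```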
["_elasticsearch_lookup_failure"]
Tags the event on failure to look up previous log event information. This can be used in later analysis.
The following configuration options are supported by all filter plugins:
Setting | Input type | Required |
---|---|---|
add_field | hash | No |
add_tag | array | No |
enable_metric | boolean | No |
id | string | No |
periodic_flush | boolean | No |
remove_field | array | No |
remove_tag | array | No |
add_field

- Value type is hash
- Default value is {}

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:
```
filter {
  elasticsearch {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```
# You can also add multiple fields at once:
filter {
  elasticsearch {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```
If the event has field "somefield" == "hello", this filter, on success, would add the field foo_hello with the value above, the %{host} piece replaced with that value from the event. The second example would also add a hardcoded field.
add_tag

- Value type is array
- Default value is []

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the %{field} syntax.
Example:
```
filter {
  elasticsearch {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```
# You can also add multiple tags at once:
filter {
  elasticsearch {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```
If the event has field "somefield" == "hello", this filter, on success, would add the tag foo_hello (and the second example would of course add a taggedy_tag tag).
enable_metric

- Value type is boolean
- Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id

- Value type is string
- There is no default value for this setting.

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have two elasticsearch filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
```
filter {
  elasticsearch {
    id => "ABC"
  }
}
```
periodic_flush

- Value type is boolean
- Default value is false

Call the filter flush method at regular intervals. Optional.
remove_field

- Value type is array
- Default value is []

If this filter is successful, remove arbitrary fields from this event.

Example:
```
filter {
  elasticsearch {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```
# You can also remove multiple fields at once:
filter {
  elasticsearch {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```
If the event has field "somefield" == "hello", this filter, on success, would remove the field with name foo_hello if it is present. The second example would remove an additional, non-dynamic field.
remove_tag

- Value type is array
- Default value is []

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the %{field} syntax.
Example:
```
filter {
  elasticsearch {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```
# You can also remove multiple tags at once:
filter {
  elasticsearch {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
```
If the event has field "somefield" == "hello", this filter, on success, would remove the tag foo_hello if it is present. The second example would remove a sad, unwanted tag as well.