For other versions, see the Versioned plugin docs.
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Starting with Elasticsearch 5.3, there's an HTTP setting called `http.content_type.required`. If this option is set to `true`, and you are using Logstash 2.4 through 5.2, you need to update the Elasticsearch input plugin to version 4.0.2 or higher.
Read from an Elasticsearch cluster, based on search query results. This is useful for replaying test logs, reindexing, etc. You can periodically schedule ingestion using a cron syntax (see the `schedule` setting) or run the query one time to load data into Logstash.
Example:
```
input {
  # Read all documents from Elasticsearch matching the given query
  elasticsearch {
    hosts => "localhost"
    query => '{ "query": { "match": { "statuscode": 200 } }, "sort": [ "_doc" ] }'
  }
}
```
This would create an Elasticsearch query with the following format:
```
curl 'http://localhost:9200/logstash-*/_search?scroll=1m&size=1000' -d '{
  "query": {
    "match": {
      "statuscode": 200
    }
  },
  "sort": [ "_doc" ]
}'
```
Input from this plugin can be scheduled to run periodically according to a specific schedule. This scheduling syntax is powered by rufus-scheduler. The syntax is cron-like with some extensions specific to Rufus (e.g. timezone support).
Examples:
Schedule | Behavior |
---|---|
`* 5 * 1-3 *` | will execute every minute of 5am every day of January through March. |
`0 * * * *` | will execute on the 0th minute of every hour every day. |
`0 6 * * * America/Chicago` | will execute at 6:00am (UTC/GMT -5) every day. |
Further documentation describing this syntax can be found here.
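For example, a minimal sketch that pairs one of the expressions above with the `schedule` setting (the host and index are placeholders):

```
input {
  elasticsearch {
    hosts    => "localhost"
    index    => "logstash-*"
    query    => '{ "query": { "match_all": {} }, "sort": [ "_doc" ] }'
    # Cron-style schedule: run the query on the 0th minute of every hour
    schedule => "0 * * * *"
  }
}
```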
This plugin supports the following configuration options plus the Common Options described later.
Setting | Input type | Required |
---|---|---|
`ca_file` | a valid filesystem path | No |
`docinfo` | boolean | No |
`docinfo_fields` | array | No |
`docinfo_target` | string | No |
`hosts` | array | No |
`index` | string | No |
`password` | password | No |
`query` | string | No |
`schedule` | string | No |
`scroll` | string | No |
`size` | number | No |
`slices` | number | No |
`ssl` | boolean | No |
`user` | string | No |
Also see Common Options for a list of options supported by all input plugins.
`ca_file`

* Value type is path
* There is no default value for this setting

SSL Certificate Authority file in PEM encoded format; must also include any chain certificates as necessary.
`docinfo`

* Value type is boolean
* Default value is `false`

If set, include Elasticsearch document information such as index, type, and the id in the event.
Note, with regard to metadata, that if you're ingesting documents with the intent to re-index them (or just update them), the `action` option in the elasticsearch output needs to know how to handle them. It can be dynamically assigned with a field added to the metadata.
Example:
```
input {
  elasticsearch {
    hosts   => "es.production.mysite.org"
    index   => "mydata-2018.09.*"
    query   => '{ "query": { "query_string": { "query": "*" } } }'
    size    => 500
    scroll  => "5m"
    docinfo => true
  }
}

output {
  elasticsearch {
    index         => "copy-of-production.%{[@metadata][_index]}"
    document_type => "%{[@metadata][_type]}"
    document_id   => "%{[@metadata][_id]}"
  }
}
```
Starting with Logstash 6.0, the `document_type` option is deprecated due to the removal of types in Elasticsearch 6.0. It will be removed in the next major version of Logstash.
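The `action` option mentioned earlier can be driven from a metadata field. A hedged sketch of that pattern follows; the `mutate` filter and the hard-coded `"update"` value are illustrative choices only, not part of the plugin:

```
filter {
  # Illustrative: decide per event how the output should handle it;
  # "update" is a placeholder for whatever logic fits your pipeline
  mutate { add_field => { "[@metadata][action]" => "update" } }
}

output {
  elasticsearch {
    action      => "%{[@metadata][action]}"
    index       => "%{[@metadata][_index]}"
    document_id => "%{[@metadata][_id]}"
  }
}
```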
["_index", "_type", "_id"]
If document metadata storage is requested by enabling the docinfo
option, this option lists the metadata fields to save in the currentevent. SeeDocument Metadatain the Elasticsearch documentation for more information.
"@metadata"
If document metadata storage is requested by enabling the docinfo
option, this option names the field under which to store the metadatafields as subfields.
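A sketch combining the two options above, assuming you only need the index and document id and want them under a custom field (the `es_doc` field name is an arbitrary example):

```
input {
  elasticsearch {
    docinfo        => true
    # Keep only the index and document id from the metadata
    docinfo_fields => ["_index", "_id"]
    # Store them as subfields of [es_doc] instead of the default [@metadata]
    docinfo_target => "es_doc"
  }
}
```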
`hosts`

* Value type is array
* There is no default value for this setting

List of one or more Elasticsearch hosts to use for querying. Each host can be either IP, HOST, IP:port, or HOST:port. The port defaults to 9200.
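A sketch showing the accepted host forms (all addresses are placeholders):

```
input {
  elasticsearch {
    # HOST, HOST:port, and IP:port entries may be mixed;
    # the port defaults to 9200 when omitted
    hosts => ["es-node1", "es-node2:9201", "10.0.0.3:9200"]
  }
}
```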
"logstash-*"
The index or alias to search. SeeMulti Indices documentationin the Elasticsearch documentation for more information on how to referencemultiple indices.
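For instance, a wildcard pattern and a comma-separated list can be combined, following the Multi Indices conventions (the index names here are placeholders):

```
input {
  elasticsearch {
    # Searches all September 2018 "web" indices plus one "app" index
    index => "web-2018.09.*,app-2018.09.01"
  }
}
```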
`password`

* Value type is password
* There is no default value for this setting

The password to use together with the username in the `user` option when authenticating to the Elasticsearch server. If set to an empty string, authentication will be disabled.
'{ "sort": [ "_doc" ] }'
The query to be executed. Read theElasticsearch query DSL documentationfor more information.
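As an illustration, a sketch that restricts the query to the last day of data; the `@timestamp` field is a common Logstash convention but is assumed here, and your documents may use a different field:

```
input {
  elasticsearch {
    # Range query over @timestamp; sorting on _doc is the most efficient
    # order for scrolling
    query => '{ "query": { "range": { "@timestamp": { "gte": "now-1d/d" } } }, "sort": [ "_doc" ] }'
  }
}
```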
`schedule`

* Value type is string
* There is no default value for this setting

Schedule of when to periodically run the statement, in Cron format, for example: `"* * * * *"` (execute query every minute, on the minute).

There is no schedule by default. If no schedule is given, then the statement is run exactly once.
"1m"
This parameter controls the keepalive time in seconds of the scrollingrequest and initiates the scrolling process. The timeout applies perround trip (i.e. between the previous scroll request, to the next).
`size`

* Value type is number
* Default value is `1000`

This allows you to set the maximum number of hits returned per scroll.
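A sketch tuning `scroll` and `size` together (the values are illustrative, not recommendations):

```
input {
  elasticsearch {
    # Up to 2000 hits per scroll page, with a 10 minute keepalive between
    # round trips for a slow downstream pipeline
    size   => 2000
    scroll => "10m"
  }
}
```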
`slices`

* Value type is number
* There is no default value for this setting

In some cases, it is possible to improve overall throughput by consuming multiple distinct slices of a query simultaneously using the Sliced Scroll API, especially if the pipeline is spending significant time waiting on Elasticsearch to provide results.

If set, the `slices` parameter tells the plugin how many slices to divide the work into, and will produce events from the slices in parallel until all of them are done scrolling.

The Elasticsearch manual indicates that there can be negative performance implications to both the query and the Elasticsearch cluster when a scrolling query uses more slices than shards in the index.

If the `slices` parameter is left unset, the plugin will not inject slice instructions into the query.
`ssl`

* Value type is boolean
* Default value is `false`

If enabled, SSL will be used when communicating with the Elasticsearch server (i.e. HTTPS will be used instead of plain HTTP).
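A sketch pairing `ssl` with the `ca_file` option described earlier (the host and certificate path are placeholders):

```
input {
  elasticsearch {
    hosts   => "es.example.org"
    ssl     => true
    # PEM-encoded CA bundle, including any chain certificates as necessary
    ca_file => "/path/to/ca.pem"
  }
}
```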
`user`

* Value type is string
* There is no default value for this setting

The username to use together with the password in the `password` option when authenticating to the Elasticsearch server. If set to an empty string, authentication will be disabled.
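A sketch of basic authentication (the credentials are placeholders; prefer a secret store over hard-coded values):

```
input {
  elasticsearch {
    hosts    => "es.example.org"
    user     => "logstash_reader"
    password => "changeme"
  }
}
```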
The following configuration options are supported by all input plugins:
"json"
The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.
`enable_metric`

* Value type is boolean
* Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
`id`

* Value type is string
* There is no default value for this setting

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 elasticsearch inputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
```
input {
  elasticsearch {
    id => "my_plugin_id"
  }
}
```
`tags`

* Value type is array
* There is no default value for this setting

Add any number of arbitrary tags to your event. This can help with processing later.
`type`

* Value type is string
* There is no default value for this setting

Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then a new input will not override the existing type. A type set at the shipper stays with that event for its life, even when sent to another Logstash server.
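A sketch of type-based filter activation, using the arbitrary type value `es_replay`:

```
input {
  elasticsearch {
    type => "es_replay"
  }
}

filter {
  # Only events carrying the type set by the input above take this branch
  if [type] == "es_replay" {
    mutate { add_tag => ["replayed"] }
  }
}
```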