For other versions, see the Versioned plugin docs.
For plugins not bundled by default, it is easy to install by running `bin/logstash-plugin install logstash-input-jms`. See Working with plugins for more details.
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Read events from a JMS broker. Supports both JMS queues and topics.
For more information about JMS, see http://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html. For more information about the Ruby Gem used, see http://github.com/reidmorrison/jruby-jms.

Here is a config example to pull from a queue:

```ruby
jms {
    include_header => false
    include_properties => false
    include_body => true
    use_jms_timestamp => false
    interval => 10
    destination => "myqueue"
    pub_sub => false
    yaml_file => "~/jms.yml"
    yaml_section => "mybroker"
}
```
This plugin supports the following configuration options plus the Common Options described later.
Setting | Input type | Required |
---|---|---|
`broker_url` | string | No |
`destination` | string | Yes |
`factory` | string | No |
`include_body` | boolean | No |
`include_header` | boolean | No |
`include_properties` | boolean | No |
`interval` | number | No |
`jndi_context` | hash | No |
`jndi_name` | string | No |
`password` | string | No |
`pub_sub` | boolean | No |
`require_jars` | array | No |
`runner` | string, one of `["consumer", "async", "thread"]` | No |
`selector` | string | No |
`threads` | number | No |
`timeout` | number | No |
`use_jms_timestamp` | boolean | No |
`username` | string | No |
`yaml_file` | string | No |
`yaml_section` | string | No |
Also see Common Options for a list of options supported by all input plugins.
`broker_url`

Url to use when connecting to the JMS provider.
`destination`

Name of the destination queue or topic to use.
`factory`

Name of the JMS Provider Factory class.
`include_body`

Default value is `true`.

Include the JMS message body in the event. Supports TextMessage, MapMessage and BytesMessage. If the JMS message is a TextMessage or BytesMessage, the value will be in the `message` field of the event. If the JMS message is a MapMessage, all of its key/value pairs will be added to the event. StreamMessage and ObjectMessage are not supported.
`include_header`

Default value is `true`.

A JMS message has three parts: Message Headers (required), Message Properties (optional), and Message Bodies (optional). You can tell the input plugin which parts should be included in the event produced by Logstash.

Include JMS message header field values in the event.
`include_properties`

Default value is `true`.

Include JMS message property field values in the event.
`interval`

Default value is `10`.

Polling interval in seconds. This is the time the plugin sleeps between polls of a consumed queue. This parameter has no influence when subscribed to a topic.
`jndi_context`

Mandatory if a JNDI lookup is being used; contains details on how to connect to the JNDI server.
`jndi_name`

Name of the JNDI entry at which the Factory can be found.
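When connecting via JNDI, `jndi_name` and `jndi_context` are used together instead of `factory`. A minimal sketch follows; the context factory class, provider URL, credentials, and lookup name are placeholders for your broker's actual values, not defaults:

```ruby
jms {
    # Look up the connection factory in JNDI instead of instantiating it directly
    jndi_name => "ConnectionFactory"
    jndi_context => {
        # Standard JNDI environment properties; values here are hypothetical
        "java.naming.factory.initial" => "com.example.jndi.InitialContextFactory"
        "java.naming.provider.url" => "tcp://localhost:61616"
        "java.naming.security.principal" => "guest"
        "java.naming.security.credentials" => "guest"
    }
    destination => "myqueue"
}
```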
`password`

Password to use when connecting to the JMS provider.
`require_jars`

If you do not use a YAML configuration, use either `factory` or `jndi_name`. An optional array of JAR file names to load for the specified JMS provider. By using this option it is not necessary to put all the JMS provider-specific JAR files into the Java CLASSPATH prior to starting Logstash.
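For instance, a broker's client libraries can be loaded straight from the pipeline configuration. The JAR path below is hypothetical; point it at the client JARs shipped with your broker:

```ruby
jms {
    # Load the provider's client JARs without modifying the Java CLASSPATH
    require_jars => ["/opt/activemq/lib/activemq-all.jar"]
    factory => "org.apache.activemq.ActiveMQConnectionFactory"
    broker_url => "tcp://localhost:61616"
    destination => "myqueue"
}
```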
`runner`

Value can be any of: `consumer`, `async`, `thread`. Default value is `"consumer"`.

Choose an implementation of the run block. Value can be either `consumer`, `async` or `thread`.
`selector`

Set the selector to use to get messages off the queue or topic.
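JMS selectors use an SQL-92-like expression syntax over message headers and properties. As a sketch, to consume only high-priority messages carrying a particular application property (`region` is a hypothetical property your producer would have to set; `JMSPriority` is a standard JMS header):

```ruby
jms {
    destination => "myqueue"
    # Only messages matching this selector expression are delivered
    selector => "JMSPriority > 5 AND region = 'EMEA'"
}
```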
`threads`

Default value is `1`.
`use_jms_timestamp`

Default value is `false`.

Convert the JMSTimestamp header field to the `@timestamp` value of the event.
`username`

Username to connect to the JMS provider with.
`yaml_section`

Yaml config file section name. For some known examples, see: [Example jms.yml](https://github.com/reidmorrison/jruby-jms/blob/master/examples/jms.yml).
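The `yaml_file`/`yaml_section` pair lets connection details live outside the pipeline configuration, in a jruby-jms-style YAML file. A sketch of what such a file might contain; the section name matches the `mybroker` value used in the queue example above, and the factory, URL, and credentials are placeholders (see the linked example jms.yml for real provider entries):

```yaml
# ~/jms.yml -- referenced by yaml_file; "mybroker" is the yaml_section
mybroker:
  :factory: org.apache.activemq.ActiveMQConnectionFactory
  :broker_url: tcp://localhost:61616
  :username: guest
  :password: guest
```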
The following configuration options are supported by all input plugins:
`codec`

Default value is `"plain"`.

The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.
`enable_metric`

Default value is `true`.

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
`id`

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 jms inputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
```ruby
input {
  jms {
    id => "my_plugin_id"
  }
}
```
`tags`

Add any number of arbitrary tags to your event. This can help with processing later.
`type`

Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server.
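For example, the type can be used to route only JMS events through a dedicated filter branch. The `"jms"` type value below is arbitrary; the destination and YAML settings reuse the values from the queue example above:

```ruby
input {
  jms {
    destination => "myqueue"
    yaml_file => "~/jms.yml"
    yaml_section => "mybroker"
    type => "jms"
  }
}
filter {
  if [type] == "jms" {
    # filters here apply only to events from this input
  }
}
```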