Jdbc_streaming filter plugin

  • Plugin version: v1.0.5
  • Released on: 2019-02-04
  • Changelog

For other versions, see the Versioned plugin docs.

Getting Help

For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.

Description

This filter executes a SQL query and stores the result set in the field specified as target. It will cache the results locally in an LRU cache with expiry.

For example, you can load a row based on an id in the event:

filter {
  jdbc_streaming {
    jdbc_driver_library => "/path/to/mysql-connector-java-5.1.34-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_user => "me"
    jdbc_password => "secret"
    statement => "select * from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
  }
}

Jdbc_streaming Filter Configuration Options

This plugin supports the following configuration options plus the Common Options described later.

Also see Common Options for a list of options supported by all filter plugins.

 

cache_expiration

  • Value type is number
  • Default value is 5.0

The minimum number of seconds any entry should remain in the cache. Defaults to 5 seconds. A numeric value; you can use decimals, for example { "cache_expiration" => 0.25 }.

If there are transient JDBC errors, the cache will store empty results for a given parameter set and bypass the JDBC lookup. This merges the default_hash into the event until the cache entry expires, after which the JDBC lookup will be tried again for the same parameters. Conversely, while the cache contains valid results, any external problem that would cause JDBC errors will not be noticed for the cache_expiration period.

cache_size

  • Value type is number
  • Default value is 500

The maximum number of cache entries that are stored. Defaults to 500 entries. When the cache is full, the least recently used entry will be evicted.
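The two cache settings are typically tuned together. A minimal sketch (the values shown are illustrative, not recommendations; the connection and query settings are omitted for brevity):

filter {
  jdbc_streaming {
    # ... connection, statement, parameters, and target settings ...
    cache_expiration => 0.25   # keep entries for at least a quarter of a second
    cache_size => 1000         # beyond 1000 entries, evict the least recently used
  }
}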

default_hash

  • Value type is hash
  • Default value is {}

Define a default object to use when the lookup fails to return a matching row. Ensure that the key names of this object match the columns returned by the statement.
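For instance, if the statement selects specific columns, the default_hash keys should mirror them. A sketch (the Name and Continent columns are assumed for illustration; connection settings are omitted):

filter {
  jdbc_streaming {
    # ... connection settings ...
    statement => "select Name, Continent from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
    default_hash => {
      "Name" => "Unknown"        # assumed column name
      "Continent" => "Unknown"   # assumed column name
    }
  }
}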

jdbc_connection_string

  • This is a required setting.
  • Value type is string
  • There is no default value for this setting.

JDBC connection string

jdbc_driver_class

  • This is a required setting.
  • Value type is string
  • There is no default value for this setting.

JDBC driver class to load, for example "oracle.jdbc.OracleDriver" or "org.apache.derby.jdbc.ClientDriver"

jdbc_driver_library

  • Value type is path
  • There is no default value for this setting.

Path to the third-party JDBC driver library.

jdbc_password

  • Value type is password
  • There is no default value for this setting.

JDBC password

jdbc_user

  • Value type is string
  • There is no default value for this setting.

JDBC user

jdbc_validate_connection

  • Value type is boolean
  • Default value is false

Connection pool configuration. Validate the connection before use.

jdbc_validation_timeout

  • Value type is number
  • Default value is 3600

Connection pool configuration. How often to validate a connection (in seconds).
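The two pool settings work together. A sketch enabling validation with a shorter interval (the values are illustrative; connection and query settings are omitted):

filter {
  jdbc_streaming {
    # ... connection and query settings ...
    jdbc_validate_connection => true
    jdbc_validation_timeout => 600   # validate every 10 minutes instead of hourly
  }
}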

parameters

  • Value type is hash
  • Default value is {}

Hash of query parameters, where the key is the named parameter used in the statement and the value is the event field to read it from, for example { "id" => "id_field" }.

statement

  • This is a required setting.
  • Value type is string
  • There is no default value for this setting.

Statement to execute. To use parameters, use named parameter syntax, for example "SELECT * FROM MYTABLE WHERE ID = :id".
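A named parameter in the statement is resolved through the parameters hash. A sketch (MYTABLE, id_field, and the target field name are hypothetical; connection settings are omitted):

filter {
  jdbc_streaming {
    # ... connection settings ...
    statement => "SELECT * FROM MYTABLE WHERE ID = :id"
    parameters => { "id" => "id_field" }   # :id is read from the event's id_field
    target => "row"                        # hypothetical target field
  }
}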

tag_on_default_use

  • Value type is array
  • Default value is ["_jdbcstreamingdefaultsused"]

Append values to the tags field if no record was found and default values were used.

tag_on_failure

  • Value type is array
  • Default value is ["_jdbcstreamingfailure"]

Append values to the tags field if a SQL error occurred.
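Both tag options accept custom values. A sketch overriding the defaults (the tag names are illustrative; connection and query settings are omitted):

filter {
  jdbc_streaming {
    # ... connection and query settings ...
    tag_on_default_use => ["lookup_defaults_used"]
    tag_on_failure => ["lookup_failed"]
  }
}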

target

  • This is a required setting.
  • Value type is string
  • There is no default value for this setting.

Define the target field in which to store the extracted result(s). The field is overwritten if it exists.

use_cache

  • Value type is boolean
  • Default value is true

Enable or disable caching; boolean true or false. Defaults to true.

Common Options

The following configuration options are supported by all filter plugins:

add_field

  • Value type is hash
  • Default value is {}

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax.

Example:

filter {
  jdbc_streaming {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}

# You can also add multiple fields at once:
filter {
  jdbc_streaming {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}

If the event has field "somefield" == "hello" this filter, on success, would add field foo_hello if it is present, with the value above and the %{host} piece replaced with that value from the event. The second example would also add a hardcoded field.

add_tag

  • Value type is array
  • Default value is []

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the %{field} syntax.

Example:

filter {
  jdbc_streaming {
    add_tag => [ "foo_%{somefield}" ]
  }
}

# You can also add multiple tags at once:
filter {
  jdbc_streaming {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}

If the event has field "somefield" == "hello" this filter, on success, would add a tag foo_hello (and the second example would of course add a taggedy_tag tag).

enable_metric

  • Value type is boolean
  • Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

id

  • Value type is string
  • There is no default value for this setting.

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 jdbc_streaming filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

filter {
  jdbc_streaming {
    id => "ABC"
  }
}

periodic_flush

  • Value type is boolean
  • Default value is false

Call the filter flush method at regular intervals. Optional.

remove_field

  • Value type is array
  • Default value is []

If this filter is successful, remove arbitrary fields from this event. Example:

filter {
  jdbc_streaming {
    remove_field => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple fields at once:
filter {
  jdbc_streaming {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}

If the event has field "somefield" == "hello" this filter, on success, would remove the field with name foo_hello if it is present. The second example would remove an additional, non-dynamic field.

remove_tag

  • Value type is array
  • Default value is []

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the %{field} syntax.

Example:

filter {
  jdbc_streaming {
    remove_tag => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple tags at once:
filter {
  jdbc_streaming {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}

If the event has field "somefield" == "hello" this filter, on success, would remove the tag foo_hello if it is present. The second example would remove a sad, unwanted tag as well.