For other versions, see the Versioned plugin docs.
For plugins not bundled by default, it is easy to install by running bin/logstash-plugin install logstash-input-google_cloud_storage. See Working with plugins for more details.
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Extracts events from files in a Google Cloud Storage bucket.
Example use-cases:

- Reading JSON logs (for example, Stackdriver logs) from a bucket into Elasticsearch.
- Extracting data from Cloud Storage, transforming it with Logstash, and loading it into BigQuery.
Note: While this project is partially maintained by Google, this is not an official Google product.
The plugin exposes several metadata attributes about the object being read. You can access these later in the pipeline to augment the data or perform conditional logic.
Key | Type | Description |
---|---|---|
[@metadata][gcs][bucket] | string | The name of the bucket the file was read from. |
[@metadata][gcs][name] | string | The name of the object. |
[@metadata][gcs][metadata] | object | A map of metadata on the object. |
[@metadata][gcs][md5] | string | MD5 hash of the data. Encoded using base64. |
[@metadata][gcs][crc32c] | string | CRC32c checksum, as described in RFC 4960. Encoded using base64 in big-endian byte order. |
[@metadata][gcs][generation] | long | The content generation of the object. Used for object versioning. |
[@metadata][gcs][line] | long | The position of the event in the file. 1-indexed. |
[@metadata][gcs][line_id] | string | A deterministic, unique ID describing this line. This lets you do idempotent inserts into Elasticsearch. |
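For example, a mutate filter can copy some of these metadata attributes onto the event itself so they survive into the output. This is a minimal sketch; the field names on the left (gcs_bucket, gcs_object) are arbitrary choices for illustration:

```
filter {
  mutate {
    # Copy the source bucket and object name onto the event
    add_field => {
      "gcs_bucket" => "%{[@metadata][gcs][bucket]}"
      "gcs_object" => "%{[@metadata][gcs][name]}"
    }
  }
}
```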
More information about object metadata can be found in the official documentation.
Basic configuration to read JSON logs every minute from my-logs-bucket. For example, Stackdriver logs.
```
input {
  google_cloud_storage {
    interval => 60
    bucket_id => "my-logs-bucket"
    json_key_file => "/home/user/key.json"
    file_matches => ".*json"
    codec => "json_lines"
  }
}
output { stdout { codec => rubydebug } }
```
If your pipeline might insert the same file multiple times you can use the line_id metadata key as a deterministic ID.

The ID has the format: gs://<bucket_id>/<object_id>:<line_num>@<generation>.

line_num represents the nth event deserialized from the file, starting at 1. generation is a unique ID Cloud Storage generates for the object. When an object is overwritten it gets a new generation.
```
input {
  google_cloud_storage {
    bucket_id => "batch-jobs-output"
  }
}
output {
  elasticsearch {
    document_id => "%{[@metadata][gcs][line_id]}"
  }
}
```
Extract data from Cloud Storage, transform it with Logstash and load it into BigQuery.
```
input {
  google_cloud_storage {
    interval => 60
    bucket_id => "batch-jobs-output"
    file_matches => "purchases.*.csv"
    json_key_file => "/home/user/key.json"
    codec => "plain"
  }
}
filter {
  csv {
    columns => ["transaction", "sku", "price"]
    convert => {
      "transaction" => "integer"
      "price" => "float"
    }
  }
}
output {
  google_bigquery {
    project_id => "my-project"
    dataset => "logs"
    csv_schema => "transaction:INTEGER,sku:INTEGER,price:FLOAT"
    json_key_file => "/path/to/key.json"
    error_directory => "/tmp/bigquery-errors"
    ignore_unknown_values => true
  }
}
```
This plugin supports the following configuration options plus the Common Options described later.
Setting | Input type | Required |
---|---|---|
bucket_id | string | Yes |
json_key_file | a valid filesystem path | No |
interval | number | No |
file_matches | string | No |
file_exclude | string | No |
metadata_key | string | No |
processed_db_path | a valid filesystem path | No |
delete | boolean | No |
unpack_gzip | boolean | No |
Also see Common Options for a list of options supported by all input plugins.
bucket_id

The bucket containing your log files.
json_key_file

The path to the key to authenticate your user to the bucket. This service user should have the storage.objects.update permission so it can create metadata on the object, preventing it from being scanned multiple times.
interval

Default value is 60.

The number of seconds between looking for new files in your bucket.
file_matches

Default value is .*\.log(\.gz)?.

A regex pattern to filter files. Only files with names matching this pattern will be considered. The default pattern matches .log files, optionally gzipped.
file_exclude

Default value is ^$.

Any files matching this regex are excluded from processing. No files are excluded by default.
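For example, a sketch that processes gzipped CSV exports while skipping temporary files; the bucket name and patterns here are illustrative, not defaults:

```
input {
  google_cloud_storage {
    bucket_id => "my-logs-bucket"      # illustrative bucket name
    file_matches => ".*\.csv(\.gz)?"   # only CSV files, optionally gzipped
    file_exclude => ".*\.tmp"          # skip temporary files
  }
}
```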
metadata_key

Default value is x-goog-meta-ls-gcs-input.

This key will be set on the objects after they've been processed by the plugin. That way you can stop the plugin and not upload files again, or prevent them from being uploaded by setting the field manually.

Note: the key is a flag; if a file was partially processed before Logstash exited, some events will be resent.
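If you run several pipelines against the same bucket, each can be given its own marker key so they track processed files independently. A minimal sketch, assuming a second pipeline with a hypothetical custom key name:

```
input {
  google_cloud_storage {
    bucket_id => "my-logs-bucket"                       # illustrative bucket name
    metadata_key => "x-goog-meta-ls-gcs-input-audit"    # hypothetical per-pipeline marker key
  }
}
```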
processed_db_path

Default value is LOGSTASH_DATA/plugins/inputs/google_cloud_storage/db.

If set, the plugin will store the list of processed files locally. This allows you to create a service account for the plugin that does not have write permissions. However, the data will not be shared across multiple running instances of Logstash.
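For example, a read-only setup that tracks processed files in a local database instead of writing object metadata; the path is illustrative:

```
input {
  google_cloud_storage {
    bucket_id => "my-logs-bucket"                            # illustrative bucket name
    processed_db_path => "/var/lib/logstash/gcs-input-db"    # illustrative local path
  }
}
```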
delete

Default value is false.

Should the log file be deleted after its contents have been processed?
unpack_gzip

Default value is true.

If set to true, files ending in .gz are decompressed before they're parsed by the codec. The file will be skipped if it has the .gz suffix but can't be opened as a gzip, e.g. if it has a bad magic number.
The following configuration options are supported by all input plugins:
"plain"
The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.
enable_metric

Default value is true.

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 google_cloud_storage inputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
```
input {
  google_cloud_storage {
    id => "my_plugin_id"
  }
}
```
tags

Add any number of arbitrary tags to your event. This can help with processing later.
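A minimal sketch; the tag values are arbitrary examples:

```
input {
  google_cloud_storage {
    bucket_id => "my-logs-bucket"   # illustrative bucket name
    tags => ["gcs", "raw"]          # arbitrary example tags
  }
}
```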
type

Add a type field to all events handled by this input.

Types are used mainly for filter activation. The type is stored as part of the event itself, so you can also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then a new input will not override the existing type. A type set at the shipper stays with that event for its life, even when sent to another Logstash server.
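Since types are used mainly for filter activation, a typical pattern is to branch on the type in a conditional. A minimal sketch with an illustrative type name:

```
input {
  google_cloud_storage {
    bucket_id => "my-logs-bucket"   # illustrative bucket name
    type => "gcs_log"               # illustrative type name
  }
}
filter {
  # Only events from this input carry the gcs_log type
  if [type] == "gcs_log" {
    mutate { add_tag => ["from_gcs"] }
  }
}
```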