InfluxDB Logs

Configuration Options

Required Options

bucket (required)

The destination bucket for writes into InfluxDB 2.

| Type   | Syntax  | Default | Example                             |
|--------|---------|---------|-------------------------------------|
| string | literal |         | "vector-bucket", "4d2225e4d3d49f75" |
database (required)

Sets the target database for the write into InfluxDB 1.

| Type   | Syntax  | Default | Example                        |
|--------|---------|---------|--------------------------------|
| string | literal |         | "vector-database", "iot-store" |
endpoint (required)

The endpoint to send data to.

| Type   | Syntax  | Default | Example |
|--------|---------|---------|---------|
| string | literal |         | "http://localhost:8086/", "https://us-west-2-1.aws.cloud1.influxdata.com", "https://us-west-2-1.aws.cloud2.influxdata.com" |
org (required)

Specifies the destination organization for writes into InfluxDB 2.

| Type   | Syntax  | Default | Example                      |
|--------|---------|---------|------------------------------|
| string | literal |         | "my-org", "33f2cff0a28e5b63" |
token (required)

Authentication token for InfluxDB 2.

| Type   | Syntax  | Default | Example                                                 |
|--------|---------|---------|---------------------------------------------------------|
| string | literal |         | "${INFLUXDB_TOKEN}", "ef8d5de700e7989468166c40fc8a0ccd" |
namespace (required)

A prefix that will be added to all log names.

| Type   | Syntax  | Default | Example   |
|--------|---------|---------|-----------|
| string | literal |         | "service" |
inputs (required)

A list of upstream source or transform IDs. Wildcards (*) are supported.

See configuration for more info.

| Type  | Syntax  | Default | Example                                 |
|-------|---------|---------|-----------------------------------------|
| array | literal |         | "my-source-or-transform-id", "prefix-*" |
encoding (required)

Configures encoding-specific sink behavior.

| Type | Syntax  | Default | Example |
|------|---------|---------|---------|
| hash | literal |         |         |
type (required)

The component type. This is a required field for all components and tells Vector which component to use.

| Type   | Syntax  | Default | Example         |
|--------|---------|---------|-----------------|
| string | literal |         | "influxdb_logs" |

Advanced Options

consistency (optional)

Sets the write consistency for the point for InfluxDB 1.

| Type   | Syntax  | Default | Example                       |
|--------|---------|---------|-------------------------------|
| string | literal |         | "any", "one", "quorum", "all" |
password (optional)

Sets the password for authentication if you’ve enabled authentication for the write into InfluxDB 1.

| Type   | Syntax  | Default | Example                                 |
|--------|---------|---------|-----------------------------------------|
| string | literal |         | "${INFLUXDB_PASSWORD}", "influxdb4ever" |
retention_policy_name (optional)

Sets the target retention policy for the write into InfluxDB 1.

| Type   | Syntax  | Default | Example                   |
|--------|---------|---------|---------------------------|
| string | literal |         | "autogen", "one_day_only" |
tags (optional)

A set of additional fields that will be attached to each line protocol point as a tag. Note: if the set of tag values has high cardinality, this will also increase cardinality in InfluxDB.

| Type  | Syntax     | Default | Example                        |
|-------|------------|---------|--------------------------------|
| array | field_path |         | "field1", "parent.child_field" |
buffer (optional)

Configures sink-specific buffer behavior.

| Type | Syntax  | Default | Example |
|------|---------|---------|---------|
| hash | literal |         |         |
batch (optional)

Configures the sink batching behavior.

| Type | Syntax | Default | Example |
|------|--------|---------|---------|
| hash |        |         |         |
healthcheck (optional)

Health check options for the sink.

| Type | Syntax | Default | Example |
|------|--------|---------|---------|
| hash |        |         |         |
request (optional)

Configures the sink request behavior.

| Type | Syntax | Default | Example |
|------|--------|---------|---------|
| hash |        |         |         |
username (optional)

Sets the username for authentication if you’ve enabled authentication for the write into InfluxDB 1.

| Type   | Syntax  | Default | Example                 |
|--------|---------|---------|-------------------------|
| string | literal |         | "todd", "vector-source" |

How it Works

Mapping Log Fields

InfluxDB uses line protocol to write data points. It is a text-based format that provides the measurement, tag set, field set, and timestamp of a data point.

A log event contains an arbitrary set of fields (key/value pairs) that describe the event.

The following matrix outlines how Log Event fields are mapped into InfluxDB Line Protocol:

| Field        | Line Protocol |
|--------------|---------------|
| host         | tag           |
| message      | field         |
| source_type  | tag           |
| timestamp    | timestamp     |
| [custom-key] | field         |

The default behavior can be overridden by a tags configuration.
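For illustration only, a hypothetical log event with a host, a message, a source_type of file, and a custom status field would, following the mapping above, be written roughly as the line protocol point below. The measurement name (service.vector) is an assumption based on a namespace of service; the exact naming is determined by the sink.

service.vector,host=web-01,source_type=file message="GET /index.html",status=200i 1597152000000000000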

State

This component is stateless, meaning its behavior is consistent across each input.

Health checks

Health checks ensure that the downstream service is accessible and ready to accept data. This check is performed upon sink initialization. If the health check fails, an error will be logged and Vector will proceed to start.
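
If you need to change this behavior, the healthcheck option above can typically be used to turn the check off; a sketch, assuming the enabled sub-option that Vector sinks commonly expose:

[sinks.influxdb_logs_out]
# Skip the startup health check for this sink ("enabled" is assumed to be
# the relevant sub-option, as in other Vector sinks).
healthcheck.enabled = false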

Partitioning

Vector supports dynamic configuration values through a simple template syntax. If an option supports templating, it will be noted with a badge and you can use event fields to create dynamic values. For example:

[sinks.my-sink]
dynamic_option = "application={{ application_id }}"

In the above example, the application_id for each event will be used to partition outgoing data.

Rate limits & adapative concurrency

Buffers and batches

This component buffers & batches data as shown in the diagram above. You'll notice that Vector treats these concepts differently, instead of treating them as global concepts, Vector treats them as sink specific concepts. This isolates sinks, ensuring services disruptions are contained and delivery guarantees are honored.

Batches are flushed when one of two conditions is met:

  1. The batch age meets or exceeds the configured timeout_secs.
  2. The batch size meets or exceeds the configured max_size or max_events.

Buffers are controlled via the buffer.* options.
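
As a sketch, batching and buffering might be tuned as follows. The timeout_secs and max_events names come from the flush conditions above; the buffer.* sub-options shown are assumed to follow Vector's common buffer settings, and all values are illustrative.

[sinks.influxdb_logs_out]
# Flush a batch after 1 second or 1,000 events, whichever comes first.
batch.timeout_secs = 1
batch.max_events = 1000
# Hold up to 500 events in an in-memory buffer and block upstream
# components when it is full (assumed common buffer.* options).
buffer.type = "memory"
buffer.max_events = 500
buffer.when_full = "block"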

Retry policy

Vector will retry failed requests (status == 429, >= 500, and != 501). Other responses will not be retried. You can control the number of retry attempts and backoff rate with the request.retry_attempts and request.retry_backoff_secs options.
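
For example, retries could be tuned under the sink's request table like this (values are placeholders):

[sinks.influxdb_logs_out]
# Retry a failed request up to 5 times, backing off 1 second between tries
# (option names from the retry policy above; values are illustrative).
request.retry_attempts = 5
request.retry_backoff_secs = 1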