I have two telegraf daemons running.
Daemon 1: Input = Kafka topic sample_topic, Output = InfluxDB: DB = telegraf, MEASUREMENT = KI1
Daemon 2: Input = Kafka topic sample_topic2, Output = InfluxDB: DB = telegraf, MEASUREMENT = KI2
The two daemons read different Kafka topics and write to two different measurements in the InfluxDB database "telegraf".
What I observe is that the two measurements KI1 and KI2 are not created simultaneously; only one of them is created. When I kill the daemon that has already created its measurement, the other daemon then creates the other measurement in the database.
Are simultaneous writes to different measurements allowed in InfluxDB?
I also tried writing to two different InfluxDB databases, telegraf and telegraf2, and observed the same behavior.
Also, is it possible to do all of this with a single daemon, where I have multiple input plugins reading different Kafka topics and multiple output plugins writing wherever required?
Daemon 1:
[tags]
topic = "sample_topic"
# OUTPUTS
[outputs]
[outputs.influxdb]
# The full HTTP endpoint URL for your InfluxDB instance
url = "http://localhost:8086" # EDIT THIS LINE
# The target database for metrics. This database must already exist
database = "telegraf" # required.
skip_database_creation = true
database_tag = "KI1"
#INPUTS
# Read metrics from Kafka topic(s)
[[inputs.kafka_consumer_legacy]]
name_override = "KI1"
## topic(s) to consume
topics = ["sample_topic"]
## an array of Zookeeper connection strings
zookeeper_peers = ["localhost:2181"]
## Zookeeper Chroot
zookeeper_chroot = ""
## the name of the consumer group
consumer_group = "sample"
## Offset (must be either "oldest" or "newest")
offset = "oldest"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "value"
data_type = "string"
## Maximum length of a message to consume, in bytes (default 0/unlimited);
## larger messages are dropped
max_message_len = 65536
Daemon 2:
[tags]
topic = "sample_topic2"
# OUTPUTS
[outputs]
[outputs.influxdb]
# The full HTTP endpoint URL for your InfluxDB instance
url = "http://localhost:8086" # EDIT THIS LINE
# The target database for metrics. This database must already exist
database = "telegraf" # required.
skip_database_creation = true
database_tag = "KI2"
#INPUTS
# Read metrics from Kafka topic(s)
[[inputs.kafka_consumer_legacy]]
name_override = "KI2"
## topic(s) to consume
topics = ["sample_topic2"]
## an array of Zookeeper connection strings
zookeeper_peers = ["localhost:2181"]
## Zookeeper Chroot
zookeeper_chroot = ""
## the name of the consumer group
consumer_group = "sample"
## Offset (must be either "oldest" or "newest")
offset = "oldest"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "value"
data_type = "string"
## Maximum length of a message to consume, in bytes (default 0/unlimited);
## larger messages are dropped
max_message_len = 65536
I don't know how you verified this, but the chances of such an anomaly are slim: InfluxDB handles concurrent writes to different measurements (and to different databases) without one blocking the other.
Yes, it is possible with a single daemon. Telegraf can run multiple input plugins and multiple output plugins from one configuration file; see the sketch below.
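As a rough illustration of the single-daemon approach, here is a minimal configuration sketch, assuming Telegraf 1.x with the same Kafka/Zookeeper setup as in the question. The namepass routing and the second database telegraf2 are only there to show how metrics from each topic can be sent to different places; if both measurements go to the same database, a single output block with no namepass filter is enough. Depending on your Telegraf version, the influxdb output expects urls = [...] or the older url = "...".

# Single daemon: two Kafka inputs, each tagged with its own measurement
# name via name_override, then routed to an output by namepass filtering.

# First topic -> measurement KI1
[[inputs.kafka_consumer_legacy]]
  name_override = "KI1"
  topics = ["sample_topic"]
  zookeeper_peers = ["localhost:2181"]
  consumer_group = "sample"
  offset = "oldest"
  data_format = "value"
  data_type = "string"

# Second topic -> measurement KI2
[[inputs.kafka_consumer_legacy]]
  name_override = "KI2"
  topics = ["sample_topic2"]
  zookeeper_peers = ["localhost:2181"]
  consumer_group = "sample"
  offset = "oldest"
  data_format = "value"
  data_type = "string"

# KI1 goes to database "telegraf"
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
  namepass = ["KI1"]

# KI2 goes to database "telegraf2"
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf2"
  namepass = ["KI2"]

With this layout one telegraf process consumes both topics and both measurements are written continuously, so there is no need to run two daemons at all.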