
Aggregator with drop_original: kafka input stops after consuming 1000 messages #5651

Closed
LilyZWuZz opened this issue Mar 29, 2019 · 1 comment

@LilyZWuZz

LilyZWuZz commented Mar 29, 2019

Relevant telegraf.conf:

[agent]
  interval = "10s"
  round_interval = true

  metric_batch_size = 1000
  metric_buffer_limit = 10000

  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "3s"
  precision = "s"

  debug = true
  quiet = false
  logfile = ""

  hostname = ""
  omit_hostname = true

# Configuration for the Prometheus client to spawn
[[outputs.prometheus_client]]
  ## Address to listen on
  listen = ":9126"
  collectors_exclude = ["gocollector", "process"]
  path = "/metrics"

[[inputs.kafka_consumer]]
  ## kafka servers
  brokers = [{{KAFKA_BROKER_ADDR_STR}}]
  ## topic(s) to consume
  topics = ["metric"]
  ## Add topic as tag if topic_tag is not empty
  # topic_tag = ""

  ## Optional Client id
  client_id = "telegraf"

  offset = "oldest"
  max_message_len = 1000000

  data_format = "json"
  tag_keys = [
    "reason",
    "host"
  ]

  json_name_key = "name"
  json_string_fields = ["site"]
  json_time_key = "timeStamp"
  json_time_format = "unix"
  json_timezone = "Local"

[[aggregators.valuecounter]]
  period = "1s"
  drop_original = true
  fields = ["site"]
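
To reproduce, you need to push more than one batch (metric_batch_size = 1000) of messages onto the "metric" topic. Here is a minimal Go sketch using the sarama client, assuming localhost:9092 as a placeholder for whatever {{KAFKA_BROKER_ADDR_STR}} resolves to and reusing the sample JSON from the reproduction steps below; the 1500-message count is arbitrary, it just needs to exceed one batch:

package main

import (
    "log"

    "github.com/Shopify/sarama"
)

func main() {
    // Placeholder broker address; substitute the value used for {{KAFKA_BROKER_ADDR_STR}}.
    brokers := []string{"localhost:9092"}

    cfg := sarama.NewConfig()
    cfg.Producer.Return.Successes = true // required by SyncProducer

    producer, err := sarama.NewSyncProducer(brokers, cfg)
    if err != nil {
        log.Fatalf("creating producer: %v", err)
    }
    defer producer.Close()

    // Same JSON layout as the sample message in the reproduction steps.
    payload := `{"name":"test","reason":"test data","host":"127.0.0.1","site":"test","timeStamp":1553084442}`

    // Send more than metric_batch_size (1000) messages to reach the point where consumption stalls.
    for i := 0; i < 1500; i++ {
        if _, _, err := producer.SendMessage(&sarama.ProducerMessage{
            Topic: "metric",
            Value: sarama.StringEncoder(payload),
        }); err != nil {
            log.Fatalf("sending message %d: %v", i, err)
        }
    }
    log.Println("done")
}

Piping the same JSON line into kafka-console-producer in a loop works just as well; the only requirement is to exceed one batch of messages.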

System info:

telegraf version: 1.10
env: Docker, CentOS

Steps to reproduce:

  1. Set drop_original = false and send a JSON string to kafka like this:
    testStr := {"name":"test","reason":"test data","host":"127.0.0.1","site":"test","timeStamp":1553084442}
    Telegraf works OK and you can see the original metrics (a producer sketch is given after the config above).
  2. Set drop_original = true and restart Telegraf. It works OK at first; you can see data at http://localhost:9126/metrics (a small scrape helper follows these steps).
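
To check whether data is still reaching the prometheus_client output from step 2, here is a rough scrape helper; the filter on "site" assumes the valuecounter aggregator keeps the field name in the emitted metric names, so adjust it if your output looks different:

package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "strings"
)

func main() {
    // Endpoint comes from the prometheus_client output above (listen = ":9126", path = "/metrics").
    resp, err := http.Get("http://localhost:9126/metrics")
    if err != nil {
        log.Fatalf("scraping telegraf: %v", err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Fatalf("reading response: %v", err)
    }

    // Print only lines mentioning "site" to keep the output short.
    for _, line := range strings.Split(string(body), "\n") {
        if strings.Contains(line, "site") {
            fmt.Println(line)
        }
    }
}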

Expected behavior:

Telegraf keeps consuming kafka messages indefinitely.

Actual behavior:

Telegraf stops consuming from kafka after 1000 messages.

Additional info:

No errors in the log.

@danielnelson danielnelson self-assigned this Apr 2, 2019
@danielnelson danielnelson added the bug unexpected problem or unintended behavior label Apr 2, 2019
@danielnelson
Contributor

This has been fixed in #5632.

@danielnelson danielnelson added this to the 1.10.2 milestone Apr 2, 2019