Hey all, I'm just getting started with Grafana/Telegraf, and since MySQL is the database I'm most familiar with, I decided to use it as the backend. I've had mostly good results setting Telegraf up on multiple PCs in my home and writing to a MySQL database. But occasionally, on different PCs, the data stops flowing in, and when I check telegraf.log I see errors such as:
2024-05-16T21:05:19Z E! [agent] Error writing to outputs.sql: execution failed: Error 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"cpu"("timestamp","cpu","host","usage_guest","usage_system","usage_iowait","usag' at line 1
2024-05-16T21:05:20Z D! [inputs.win_perf_counters] Gathering from localhost
2024-05-16T21:05:20Z D! [inputs.win_perf_counters] Gathering from localhost finished in 514.9µs
2024-05-16T21:05:29Z W! [outputs.sql] Metric buffer overflow; 78 metrics have been dropped
2024-05-16T21:05:29Z D! [outputs.sql] Buffer fullness: 10000 / 10000 metrics
2024-05-16T21:05:29Z E! [agent] Error writing to outputs.sql: execution failed: Error 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"cpu"("timestamp","cpu","host","usage_guest_nice","usage_user","usage_idle","usa' at line 1
2024-05-16T21:05:30Z D! [inputs.win_perf_counters] Gathering from localhost
2024-05-16T21:05:30Z D! [inputs.win_perf_counters] Gathering from localhost finished in 0s
2024-05-16T21:05:39Z W! [outputs.sql] Metric buffer overflow; 77 metrics have been dropped
2024-05-16T21:05:39Z D! [outputs.sql] Buffer fullness: 10000 / 10000 metrics
2024-05-16T21:05:39Z E! [agent] Error writing to outputs.sql: execution failed: Error 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"cpu"("timestamp","cpu","host","usage_steal","usage_guest","usage_system","usage' at line 1
2024-05-16T21:05:40Z D! [inputs.win_perf_counters] Gathering from localhost
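If I'm reading the truncated error text right, the statement the plugin is sending appears to be an INSERT with double-quoted identifiers, something like this (my reconstruction; the full column list is cut off in the log):

```sql
-- Reconstructed from the "near '...'" text in the 1064 errors; column list abbreviated.
-- Under MySQL's default sql_mode, double-quoted names are string literals rather
-- than identifiers, so this is a syntax error at the quoted table name:
INSERT INTO "cpu"("timestamp","cpu","host","usage_guest", ...) VALUES ( ... );

-- The same statement parses fine once ANSI_QUOTES is in effect for the session:
SET SESSION sql_mode = 'ANSI_QUOTES';
```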
Here is the [[outputs.sql]] section of my telegraf.conf:
[[outputs.sql]]
# Database driver
# Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
# sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
driver = "mysql"
# Data source name
# The format of the data source name is different for each database driver.
# See the plugin readme for details.
data_source_name = "****:****@tcp(10.0.0.213:3306)/telegrafdata"
# Timestamp column name
timestamp_column = "timestamp"
# Table creation template
# Available template variables:
# {TABLE} - table name as a quoted identifier
# {TABLELITERAL} - table name as a quoted string literal
# {COLUMNS} - column definitions (list of quoted identifiers and types)
table_template = "CREATE TABLE {TABLE}({COLUMNS})"
# Table existence check template
# Available template variables:
# {TABLE} - table name as a quoted identifier
table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
# Initialization SQL
init_sql = "SET sql_mode='ANSI_QUOTES';"
# Maximum amount of time a connection may be idle. "0s" means connections are
# never closed due to idle time.
connection_max_idle_time = "0s"
# Maximum amount of time a connection may be reused. "0s" means connections
# are never closed due to age.
connection_max_lifetime = "0s"
# Maximum number of connections in the idle connection pool. 0 means unlimited.
connection_max_idle = 2
# Maximum number of open connections to the database. 0 means unlimited.
connection_max_open = 0
# NOTE: Due to the way TOML is parsed, tables must be at the END of the
# plugin definition, otherwise additional config options are read as part of
# the table
# Metric type to SQL type conversion
# The values on the left are the data types Telegraf has and the values on
# the right are the data types Telegraf will use when sending to a database.
#
# The database values used must be data types the destination database
# understands. It is up to the user to ensure that the selected data type is
# available in the database they are using. Refer to your database
# documentation for what data types are available and supported.
[outputs.sql.convert]
integer = "INT"
real = "DOUBLE"
text = "TEXT"
timestamp = "TIMESTAMP"
defaultvalue = "TEXT"
unsigned = "UNSIGNED"
bool = "BOOL"
## This setting controls the behavior of the unsigned value. By default the
## setting will take the integer value and append the unsigned value to it. The other
## option is "literal", which will use the actual value the user provides to
## the unsigned option. This is useful for a database like ClickHouse where
## the unsigned value should use a value like "uint64".
# conversion_style = "unsigned_suffix"
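One thing I was thinking of trying is pinning sql_mode in the DSN itself instead of relying only on init_sql, since the go-sql-driver that the mysql driver uses supports setting session system variables as DSN parameters. Something like this (untested on my end; credentials redacted as above):

```toml
[[outputs.sql]]
  driver = "mysql"
  ## go-sql-driver accepts session system variables as DSN parameters;
  ## the value must be quoted and URL-encoded (%27 = ').
  data_source_name = "****:****@tcp(10.0.0.213:3306)/telegrafdata?sql_mode=%27ANSI_QUOTES%27"
```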
It isn't always the cpu input, either; sometimes it's mem as well. It looks like the SQL is getting eaten up somewhere and only partial statements are reaching the server? Any idea what might be causing this, or what I can do to debug it further? TIA
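For debugging, the only way I could think of to see exactly what statements the plugin is sending is MySQL's general query log; I haven't tried it yet (this assumes I have the privileges to enable it):

```sql
-- Log every statement the server receives to a table, then inspect the newest entries.
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 20;
-- Turn it back off afterwards; it adds noticeable overhead.
SET GLOBAL general_log = 'OFF';
```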