What did you do?
Running postgres exporter as a container in a kubernetes pod which also hosts the postgresql server at localhost:5432.
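For context, a minimal sketch of that sidecar-style setup, with the exporter's own connection configured through the documented DATA_SOURCE_NAME environment variable (the credentials and database name here are illustrative assumptions, not values taken from this report):
# Assumed user/password/database; the exporter listens on :9187 by default
export DATA_SOURCE_NAME="postgresql://postgres:example@localhost:5432/postgres?sslmode=disable"
./postgres_exporter --web.listen-address=":9187"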
What did you expect to see?
The /metrics endpoint should have returned the Prometheus metrics at all times.
What did you see instead? Under which circumstances?
The /metrics endpoint works well for about an hour, but then the metrics server starts timing out, i.e. there is no response at :9187/metrics (postgres_exporter is running on port 9187 of the pod). There are no logs about the failure to serve these requests in the postgres exporter logs.
Restarting the postgres server often fixes the issue, but only for some time.
I can still connect to the postgres server through psql at the same time.
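For reference, the hang can be confirmed with a bounded request against the exporter (using the same <POD IP> placeholder as the probe commands below); when the issue occurs the command times out instead of returning metrics:
curl --max-time 10 -v "<POD IP>:9187/metrics"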
More Information
The requests at :9187/ and :9187/probe are being served fine. When I try probing my postgresql server through the commands below:
curl "<POD IP>:9187/probe?target=127.0.0.1:5432&sslmode=disable"
curl "<POD IP>:9187/probe?target=:5432&sslmode=disable"
curl "<POD IP>:9187/probe?target=/var/run/postgresql:5432&sslmode=disable"
Output
# HELP pg_exporter_last_scrape_duration_seconds Duration of the last scrape of metrics from PostgreSQL.
# TYPE pg_exporter_last_scrape_duration_seconds gauge
pg_exporter_last_scrape_duration_seconds{cluster_name="mydb",namespace="default"} 1.002118094
# HELP pg_exporter_last_scrape_error Whether the last scrape of metrics from PostgreSQL resulted in an error (1 for error, 0 for success).
# TYPE pg_exporter_last_scrape_error gauge
pg_exporter_last_scrape_error{cluster_name="mydb",namespace="default"} 1
....
....
....
pg_up{cluster_name="mydb",namespace="default"} 0
Logs emitted by postgres exporter every time the above /probe requests are fired to check reachability to the postgres server:
ts=2024-05-23T20:30:23.947Z caller=probe.go:41 level=info msg="no auth_module specified, using default"
ts=2024-05-23T20:30:23.947Z caller=server.go:74 level=info msg="Established new database connection" fingerprint=localhost:5432
ts=2024-05-23T20:30:23.949Z caller=collector.go:194 level=error target=:5432 msg="collector failed" name=bgwriter duration_seconds=0.001488188 err="pq: SSL is not enabled on the server"
ts=2024-05-23T20:30:23.950Z caller=collector.go:194 level=error target=:5432 msg="collector failed" name=replication_slot duration_seconds=0.002488279 err="pq: SSL is not enabled on the server"
ts=2024-05-23T20:30:23.950Z caller=collector.go:194 level=error target=:5432 msg="collector failed" name=database duration_seconds=0.003197173 err="pq: SSL is not enabled on the server"
ts=2024-05-23T20:30:24.949Z caller=postgres_exporter.go:716 level=error err="Error opening connection to database (postgresql://:5432): pq: SSL is not enabled on the server"
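The "no auth_module specified, using default" line above suggests the /probe handler builds its connection settings from the exporter's own auth_module configuration rather than from the URL parameters, so the sslmode=disable query argument may not be applied. For comparison, the non-SSL path that psql uses can be exercised explicitly from inside the pod by spelling out sslmode in the libpq connection string (the user and database names here are assumptions):
psql "postgresql://postgres@localhost:5432/postgres?sslmode=disable" -c "SELECT 1;"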
curl "<POD IP>:9187/probe?target=127.0.0.1:5432&sslmode=disable"
curl "<POD IP>:9187/probe?target=:5432&sslmode=disable"
curl "<POD IP>:9187/probe?target=/var/run/postgresql:5432&sslmode=disable"
Output
Logs emitted in postgres exporter EVERYTIME the above /probe requests are fired to check reach-ability to postgres server
Environment
Linux/Kubernetes
System information:
Linux 5.15.0-1054-azure x86_64
postgres_exporter version: