-
Hi.
1. Did you measure the overhead of running pgscv with its default settings?
2. What is the default update frequency? On the graphs it looks like about 15 seconds. Can the interval be changed?
-
Hi,
No, we didn't measure it. The overhead depends on the number of databases, tables, indexes and statements within the instance. If there are too many objects and scrape time becomes too long, it is worth reviewing the config and perhaps disabling metric collection for secondary databases.
pgSCV polls stats on its own only when it is configured to send metrics to Weaponry. In the basic scenario it is scraped by Prometheus according to its scrape_config (see scrape_interval), so the collection interval is controlled on the Prometheus side.
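For the Prometheus-scraped scenario, a minimal sketch of such a scrape_config is below. The job name, host and port are illustrative assumptions (check the listen address your pgSCV instance is actually configured with), not values taken from the pgSCV docs:

```yaml
# prometheus.yml (fragment) -- illustrative only
scrape_configs:
  - job_name: "pgscv"              # hypothetical job name
    scrape_interval: 15s           # how often Prometheus pulls metrics; change this to adjust the interval
    static_configs:
      - targets: ["db-host:9890"]  # assumed pgSCV listen address/port; adjust to your setup
```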