I'm confused when looking at the database connection metrics for my AWS Relational Database Service (RDS) monitor. I see these different metrics reported:
- Database connection sum — sum of what? Over what time period?
- Database connection average — average of what? Over what time period?
- Database connection maximum — maximum of which metric (sum, average, real)? Over what period?
- Database connections real count — I'm guessing this is the number of actual connections at the time the poll is taken.
The other odd thing is that the DB connections chart dataset labeled "Database connection Sum" actually matches the raw data table numbers under the column heading "Avg. DB Connections (count)." See attached image.
Can someone explain these different metrics? I'm missing some alarms because I can't seem to figure out which metric I should be monitoring (and how often).
Hi Steve,
Thank you for raising your query. I will try to give you an idea about each of these metrics.
In general, we poll RDS at a 5-minute frequency (you can choose a different frequency if required). From AWS, we expect 3-5 data points within those 5 minutes by default, so we try to give you the most meaningful information from them.
With respect to database connections, there are four ways to configure the threshold.
Sum
Sum of the connection counts reported over the last 5 minutes. It gives the total number of connections across all data points in that window.
Average
Average number of connections in use over the last 5 minutes. If there are 3 data points to compute this, we divide the overall sum by 3 and provide you the average.
Maximum
Maximum number of connections used in the 5 minutes. Say there are 3 data points available for the 5 minutes from AWS; this gives the maximum value among them.
Real count
Exact number of active/open connections at polling time. Say there are 3 data points for the 5 minutes from AWS; we pick the 3rd (most recent) point as the value. This gives the database connection count at the moment the data is polled.
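To make the four options concrete, here is a small sketch of how each threshold value could be derived from the data points collected in one 5-minute polling window. The function name and the sample connection counts are made up for illustration; they are not part of our product's API.

```python
# Hypothetical sketch: deriving the four metric values from the
# 3-5 data points AWS returns for one 5-minute polling window.

def summarize(points):
    """Aggregate one polling window's data points the four ways described above."""
    return {
        "sum": sum(points),                     # total across all data points
        "average": sum(points) / len(points),   # sum divided by the number of points
        "maximum": max(points),                 # largest single data point
        "real_count": points[-1],               # last point = value at polling time
    }

# Example: three DatabaseConnections data points in one window
points = [40, 55, 48]
print(summarize(points))
```

Note how "sum" can be much larger than the number of connections actually open at any one moment, which is why the chart labeled "Sum" can look surprising next to a column showing averages.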
Hope this helps. Kindly let us know if you require more details.
Regards,
Ananthkumar K S