The Quotas and limits in Amazon Redshift documentation states that the limit on "Tables for large cluster node type" is 9,900. One of our clusters has hit that limit:
SQL ERROR [500310] [XX000]: [Amazon](500310) Invalid operation: 1040
Details:
Maximum tables limit exceeded. The maximum number of tables per cluster is 9900 for this instance type. The limit includes permanent and temporary tables. (pid: xxxxx)
To know in advance when a cluster is about to hit this limit again, DevOps wants to set up an alert, but we have not found a simple query that identifies all the tables that count toward the limit.
The closest we've come is this query against SVV_DISKUSAGE:
SELECT COUNT(DISTINCT d.tbl) FROM svv_diskusage d;
That returned a total of 8,825, which leaves us 1,075 tables short of the limit.
Temporary tables also count, but we were nowhere near 1,000 temporary tables at the time the exception was thrown.
Any other ideas that don't involve connecting to each database in the cluster to count tables?
The SVV_DISKUSAGE view, as documented, covers only permanent tables; it does not include the temporary tables created within a transaction. STV_TBL_TRANS is the view that tracks those transient tables.
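A combined count might therefore look like the sketch below. It assumes the table-ID column in STV_TBL_TRANS is named table_id and that the two views don't overlap, both of which are worth verifying against your cluster before alerting on the result:

```sql
-- Permanent tables that own disk blocks, plus transient tables
-- currently tracked in STV_TBL_TRANS (assumed column: table_id).
SELECT
    (SELECT COUNT(DISTINCT tbl)      FROM svv_diskusage) +
    (SELECT COUNT(DISTINCT table_id) FROM stv_tbl_trans) AS approx_table_count;
```

Because STV_TBL_TRANS only reflects transactions in flight, this number will fluctuate from one execution to the next.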
Alternatively, run COUNT(DISTINCT tbl) against STV_BLOCKLIST.
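Spelled out, using STV_BLOCKLIST's table-ID column, which is named tbl:

```sql
-- Count distinct tables that currently own at least one disk block.
SELECT COUNT(DISTINCT tbl) AS table_count
FROM stv_blocklist;
```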
However, do you really want to be constantly executing queries to track the number of tables in your cluster? Especially given that transient tables will depend on the number of active transactions, it seems like the wrong approach.
Instead, I'd look at the TotalTableCount CloudWatch metric. Creating an alarm on it is also easier. One caveat: you'll probably want to alarm at, say, 95% of the limit, because the metric likely has low resolution.
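As a sketch, an alarm at 95% of the 9,900-table limit (threshold 9405) could be created with the AWS CLI. The alarm name, cluster identifier, and SNS topic ARN below are placeholders to replace with your own:

```sh
# Alarm when TotalTableCount reaches 95% of the 9,900-table limit.
# Placeholders: alarm name, cluster identifier, SNS topic ARN.
aws cloudwatch put-metric-alarm \
    --alarm-name redshift-table-count-high \
    --namespace AWS/Redshift \
    --metric-name TotalTableCount \
    --dimensions Name=ClusterIdentifier,Value=my-cluster \
    --statistic Maximum \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 9405 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic
```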