I have written a Python script that tags resources based on a passed service name such as ec2, rds, etc. There is also an 'all' flag that loops through 5 services (ec2, rds, iam, cloudwatch, dynamodb). There are around 650 ARNs to tag. I am aware that each .tag_resources() call accepts at most 20 ARNs; I have already solved that by splitting the ARNs into batches of 20.
However, at some point the return value contains 'ErrorCode: Throttling', meaning that the AWS endpoint rejects the call.
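For context, the failed entries show up in response['FailedResourcesMap'] roughly like this (the ARN is a placeholder and the error message is paraphrased, not copied from my logs):

    # Illustrative shape of one FailedResourcesMap entry
    {
        'arn:aws:cloudwatch:eu-west-1:123456789012:alarm:some-alarm': {
            'ErrorCode': 'Throttling',
            'ErrorMessage': 'Rate exceeded'  # paraphrased
        }
    }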
What do you suggest for solving this issue? I tried adding time.sleep(), but the problem persists.
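For reference, this is roughly what that attempt looked like, i.e. a fixed pause inside the loop of the helper shown further below (the delay value is approximate):

    import time

    for batch in batches:  # excerpt from the loop in __tagHelper below
        response = self.client.tag_resources(
            ResourceARNList=batch,
            Tags=tags
        )
        failedTagTrys.append(response['FailedResourcesMap'])
        time.sleep(1)  # fixed pause between batches; Throttling still appears for cloudwatch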
EDIT: When excluding 'cloudwatch' it works perfectly fine.
Here is the code of the helper function that performs the tag_resources() call:
    def __tagHelper(self, arns, n, tags):
        # arns is a list containing ~650 ARNs, n = 20 (tag_resources accepts at most 20 ARNs per call)
        batches = chunks(arns, n)
        failedTagTrys = []
        for batch in batches:  # each batch holds at most 20 ARNs
            response = self.client.tag_resources(
                ResourceARNList=batch,
                Tags=tags
            )
            # Per-ARN failures (e.g. Throttling) are reported in FailedResourcesMap
            failedTagTrys.append(response['FailedResourcesMap'])
        # Clean up the list by removing empty dictionaries
        cleaned_FailedTagTrys = list(filter(None, failedTagTrys))
        return cleaned_FailedTagTrys
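chunks() is not shown above; it is just a small batching helper, roughly like this:

    def chunks(items, n):
        # Yield successive slices of at most n items (here: 20 ARNs per batch)
        for i in range(0, len(items), n):
            yield items[i:i + n]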
I bypassed the issue by using the AWS Resource Groups API.
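For anyone curious, one way to use that API from boto3 is a tag-based group via the resource-groups client. A minimal sketch, where the group name, tag key and values are placeholders and not my real setup:

    import json
    import boto3

    rg = boto3.client('resource-groups')

    # Create a group whose members are selected by an existing tag
    rg.create_group(
        Name='my-tagged-resources',  # placeholder name
        ResourceQuery={
            'Type': 'TAG_FILTERS_1_0',
            'Query': json.dumps({
                'ResourceTypeFilters': ['AWS::AllSupported'],
                'TagFilters': [{'Key': 'Project', 'Values': ['example']}]  # placeholder tag
            })
        }
    )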