I have been stuck on this problem for hours and would like to know if anyone has run into similar issues with global rate limiting.
We are using System.Threading.RateLimiting for our app on multiple endpoints. This is how it is set up in Setup.cs:
services.AddRateLimiter(options =>
{
    options.GlobalLimiter = PartitionedRateLimiter.Create<HttpContext, string>(httpContext =>
    {
        if (RateLimiterPolicyHelpers.IsBasicAuth(httpContext))
        {
            return RateLimitPartition.GetTokenBucketLimiter(
                Constants.RateLimitPolicyNames.BasicAuthLimiter,
                _ => new()
                {
                    AutoReplenishment = true,
                    TokenLimit = 40, // Double the token limit to allow for bursts
                    ReplenishmentPeriod = TimeSpan.FromSeconds(1),
                    TokensPerPeriod = 20
                });
        }
        return RateLimitPartition.GetNoLimiter(Constants.RateLimitPolicyNames.BasicAuthLimiter);
    });
    options.OnRejected = (context, token) =>
    {
        context.HttpContext.Response.StatusCode = 429;
        Log.Warning("Request rejected by BasicAuthRateLimitPolicy");
        return ValueTask.CompletedTask;
    };
    options.AddPolicy<string, EndpointRateLimiterPolicy>(
        Constants.RateLimitPolicyNames.Endpoint
    );
    options.AddPolicy<...> // we have multiple policies, all working fine
});
When testing locally the GlobalLimiter works as expected, but in our Kubernetes cluster it suddenly refuses to work. I've tested with logging and I can see that requests end up inside the if-clause, but they don't get rejected; the request just goes on as if nothing has happened. Does anyone have any ideas of what could be wrong or what else I can try?
I've tried setting TokenLimit and TokensPerPeriod to 1, setting AutoReplenishment to false, removing the other policies, and using different types of limiters (sliding window, etc.). I cannot reproduce it locally; there it always works.
I figured it out after way too long. The NoLimiter that gets created when the if-clause is not matched must have a different partition key than the token bucket limiter inside the if-clause. PartitionedRateLimiter caches one limiter instance per partition key, so whichever branch runs first for a given key decides which limiter that key gets forever. We had Prometheus scraping metrics, and because those scrape requests are not basic auth, they hit the NoLimiter branch first, so the shared key was always created as a NoLimiter! Changing it to a unique name fixed the problem.
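For anyone hitting the same thing, here is a sketch of the corrected partitioning. The string "NoLimiter" passed to GetNoLimiter is just an assumed placeholder; any key distinct from Constants.RateLimitPolicyNames.BasicAuthLimiter works, and you would likely add it to your own Constants class.

```csharp
// PartitionedRateLimiter caches one limiter instance per partition key,
// so the fallback branch needs its own key. Otherwise whichever request
// arrives first (e.g. a Prometheus scrape hitting the non-basic-auth
// branch) creates the cached limiter for the shared key.
options.GlobalLimiter = PartitionedRateLimiter.Create<HttpContext, string>(httpContext =>
{
    if (RateLimiterPolicyHelpers.IsBasicAuth(httpContext))
    {
        return RateLimitPartition.GetTokenBucketLimiter(
            Constants.RateLimitPolicyNames.BasicAuthLimiter,
            _ => new TokenBucketRateLimiterOptions
            {
                AutoReplenishment = true,
                TokenLimit = 40,
                ReplenishmentPeriod = TimeSpan.FromSeconds(1),
                TokensPerPeriod = 20
            });
    }
    // Unique key for the unlimited partition ("NoLimiter" is a placeholder name).
    return RateLimitPartition.GetNoLimiter("NoLimiter");
});
```

With distinct keys, basic-auth traffic and unauthenticated traffic resolve to separate cached limiters, and the token bucket is actually created for basic-auth requests even when a NoLimiter request arrived first.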