The ASP.NET_SessionState table grows all the time and is already at 18 GB, with no sign of expired sessions ever being deleted.
We have tried to execute DynamoDBSessionStateStore.DeleteExpiredSessions, but it seems to have no effect.
Our system is running fine: sessions are created and end users are not aware of any issue. However, it doesn't make sense that the table keeps growing. We have triple-checked permissions/security and everything seems to be in order. We use SDK version 3.1.0. What else remains to be checked?
Your table is over 18 GB, which is quite large in this context, so after looking at the code for the DeleteExpiredSessions method on GitHub, it does not surprise me that this isn't working.
Here is the code:
public static void DeleteExpiredSessions(IAmazonDynamoDB dbClient, string tableName)
{
    LogInfo("DeleteExpiredSessions");
    Table table = Table.LoadTable(dbClient, tableName, DynamoDBEntryConversion.V1);

    // Step 1: scan for every session whose expiry attribute is in the past.
    // ATTRIBUTE_EXPIRES and ATTRIBUTE_SESSION_ID are the store's attribute-name constants.
    ScanFilter filter = new ScanFilter();
    filter.AddCondition(ATTRIBUTE_EXPIRES, ScanOperator.LessThan, DateTime.Now);

    ScanOperationConfig config = new ScanOperationConfig();
    config.AttributesToGet = new List<string> { ATTRIBUTE_SESSION_ID };
    config.Select = SelectValues.SpecificAttributes;
    config.Filter = filter;

    // Step 2: collect every expired key into a single batch write that is
    // only executed after the entire scan has finished.
    DocumentBatchWrite batchWrite = table.CreateBatchWrite();
    Search search = table.Scan(config);

    do
    {
        List<Document> page = search.GetNextSet();
        foreach (var document in page)
        {
            batchWrite.AddItemToDelete(document);
        }
    } while (!search.IsDone);

    batchWrite.Execute();
}
The above algorithm is executed in two parts. First it performs a Search (a table scan) with a filter to identify all expired records. These are then added to a DocumentBatchWrite request, which is executed as the second step.
Since your table is so large, the table-scan step will take a very, very long time to complete before a single record is deleted. In short, the above algorithm is useful for lazy garbage collection on small tables, but it does not scale well to large ones.
As best I can tell, the execution never actually gets past the table scan, and in the process you may be consuming all of your table's read throughput.
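If you want to bound how quickly the garbage collection eats into your provisioned capacity while you investigate, one option (a sketch, not something I have tested) is to set Limit on the ScanOperationConfig inside the method, so each Scan request evaluates only a bounded number of items. The value 100 below is an assumption; tune it against your table's throughput:

ScanOperationConfig config = new ScanOperationConfig
{
    AttributesToGet = new List<string> { ATTRIBUTE_SESSION_ID },
    Select = SelectValues.SpecificAttributes,
    Filter = filter,
    Limit = 100 // max items evaluated per Scan request (assumed value)
};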
A possible solution for you would be to run a slightly modified version of the above method yourself: execute the DocumentBatchWrite inside the do-while loop, so that records start to be deleted before the table scan has finished.
That would look like:
public static void DeleteExpiredSessions(IAmazonDynamoDB dbClient, string tableName)
{
    LogInfo("DeleteExpiredSessions");
    Table table = Table.LoadTable(dbClient, tableName, DynamoDBEntryConversion.V1);

    ScanFilter filter = new ScanFilter();
    filter.AddCondition(ATTRIBUTE_EXPIRES, ScanOperator.LessThan, DateTime.Now);

    ScanOperationConfig config = new ScanOperationConfig();
    config.AttributesToGet = new List<string> { ATTRIBUTE_SESSION_ID };
    config.Select = SelectValues.SpecificAttributes;
    config.Filter = filter;

    Search search = table.Scan(config);

    do
    {
        // Perform a batch delete for each page returned
        DocumentBatchWrite batchWrite = table.CreateBatchWrite();
        List<Document> page = search.GetNextSet();
        foreach (var document in page)
        {
            batchWrite.AddItemToDelete(document);
        }
        batchWrite.Execute();
    } while (!search.IsDone);
}
Note: I have not tested the above code. It is just a simple modification of the open-source version, so it should work, but it would still need to be tested to confirm that pagination behaves correctly on a table whose records are being deleted while it is being scanned.
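For completeness, here is a minimal sketch of how you might invoke the modified method yourself, for example from a scheduled maintenance job. It assumes the method above is defined in (or accessible from) the same class; the region is an assumption, and the table name is the one from your question:

using Amazon;
using Amazon.DynamoDBv2;

class SessionGcJob
{
    static void Main()
    {
        // Assumed region -- substitute your own deployment's values.
        using (var client = new AmazonDynamoDBClient(RegionEndpoint.USEast1))
        {
            DeleteExpiredSessions(client, "ASP.NET_SessionState");
        }
    }
}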