I have a logtable with (shortened for example) these columns:
user | time | uniqueid | msg
And there is a Primary Key on
user,time,uniqueid
and a Clustered Index on
user
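For completeness, the structure looks roughly like this in SQL Server syntax (table name and column types are simplified placeholders for this example; the primary key is non-clustered because the clustered index sits on user alone):

    CREATE TABLE LogTable (
        [user]    INT              NOT NULL,  -- placeholder type
        [time]    DATETIME2        NOT NULL,  -- placeholder type
        uniqueid  UNIQUEIDENTIFIER NOT NULL,  -- placeholder type
        msg       NVARCHAR(MAX)    NULL,      -- placeholder type
        CONSTRAINT PK_LogTable
            PRIMARY KEY NONCLUSTERED ([user], [time], uniqueid)
    );

    -- separate clustered index on user only
    CREATE CLUSTERED INDEX IX_LogTable_user ON LogTable ([user]);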
Now only three things are ever done with this table: selects, which happen rarely; inserts, which happen all the time; and a delete job, which runs once per night.
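In rough terms the workload looks like this (the exact conditions, the 30-day cutoff, and the variable names are only placeholders for the example):

    -- rare: read the entries of a single user
    SELECT [user], [time], uniqueid, msg
    FROM   LogTable
    WHERE  [user] = @userId;

    -- constant: append a new entry
    INSERT INTO LogTable ([user], [time], uniqueid, msg)
    VALUES (@userId, SYSDATETIME(), NEWID(), @msg);

    -- nightly: purge entries older than the retention window
    DELETE FROM LogTable
    WHERE  [time] < DATEADD(DAY, -30, SYSDATETIME());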
If I understand correctly, the clustered index will make the selects very fast. Because of the amount of data, I have noticed that deleting takes a very long time, and I guess this is made even worse by the clustered index. Is this correct? It might also be making the inserts a lot slower (but only a few rows are inserted at a time, so this might not be as noticeable).
My idea would be:
What is the best way to optimize this?
Normally I would use an artificial primary key, but for a log table that doesn't really make sense unless you've omitted a requirement where you deal with individual log records. In that case it makes more sense to me to keep your primary key as a non-clustered index with the columns in the order you specified.

Change the clustered index to be on time instead of on user. The selects on user can use the index that backs your primary key and, while somewhat slower, should still be reasonably fast: because user is the first column of that index, the engine can seek on it instead of scanning the whole table, even though the rows are no longer stored contiguously by user.

Clustering on time will dramatically speed up the deletes, since the nightly purge removes one contiguous range of old rows, and it will likely reduce fragmentation as well. You're still only maintaining two indexes, so it shouldn't hurt your insert performance; it may even improve it, because the physical order of the table will more closely mirror the order in which the data arrives.
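A rough sketch of that change in SQL Server syntax, using the index and table names from the example above (adjust to your actual names):

    -- replace the clustered index on user with one on time
    DROP INDEX IX_LogTable_user ON LogTable;
    CREATE CLUSTERED INDEX IX_LogTable_time ON LogTable ([time]);

    -- the primary key remains a non-clustered index on (user, time, uniqueid)
    -- and keeps serving the occasional selects by user

Note that dropping and then creating a clustered index rebuilds the non-clustered primary key as well (its row locators change), so plan this for a maintenance window.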