Is there any reason I should stop using SYSDATETIME() everywhere and use GETDATE() instead?
Don't they both just ask the system what time it is, or does SYSDATETIME() need extra instructions to compute the fractional seconds? Does GETDATE() spend work rounding its result, and could SYSDATETIME() actually be faster because it skips that rounding?
Obviously I wouldn't use SYSDATETIME() if I weren't storing the extra fractional-second precision, but I'm asking about costs other than storage size. (The app I'm currently developing calls SYSDATETIME() at least 280 times a second.)
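For context, a quick illustrative sketch of the precision difference the question is about (my example, not part of the original post): GETDATE() returns DATETIME, which is accurate only to increments of roughly 3.33 ms, while SYSDATETIME() returns DATETIME2(7) with 100-nanosecond precision.

-- Side-by-side comparison of the two return types (illustrative only):
SELECT GETDATE()     AS getdate_datetime,       -- e.g. ...10:30:00.007
       SYSDATETIME() AS sysdatetime_datetime2;  -- e.g. ...10:30:00.0071234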
-- Run this in SSMS; GO 10000 repeats the preceding batch 10,000 times.
-- The SELECT SYSDATETIME() between batches prints a timestamp, so each
-- batch's elapsed time is the difference between consecutive timestamps.
SELECT SYSDATETIME();
GO
DECLARE @d DATETIME2(7) = SYSDATETIME();  -- SYSDATETIME() into DATETIME2(7), no conversion
GO 10000
SELECT SYSDATETIME();
GO
DECLARE @d DATETIME = SYSDATETIME();      -- SYSDATETIME() into DATETIME, implicit conversion
GO 10000
SELECT SYSDATETIME();
GO
DECLARE @d DATETIME2(7) = GETDATE();      -- GETDATE() into DATETIME2(7), implicit conversion
GO 10000
SELECT SYSDATETIME();
GO
DECLARE @d DATETIME = GETDATE();          -- GETDATE() into DATETIME, no conversion
GO 10000
SELECT SYSDATETIME();
Results (screenshot of the timestamp pairs not reproduced here):
So it appears not to matter which function you call. What matters is which type of variable you assign the result to, and even that difference is small. 10,000 iterations in roughly 0.1 seconds works out to about 10 microseconds per call, batch overhead included, so the per-call delta between the two functions is far too small to worry about. I would rather be consistent in this case.
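If you want a single number instead of eyeballing printed timestamps, here is a small variant of the same measurement (my sketch, not from the test above; the variable names are mine) that computes the average cost per call directly:

-- Time 10,000 assignments in a loop and report the per-call average:
DECLARE @start DATETIME2(7) = SYSDATETIME();
DECLARE @i INT = 0;
DECLARE @d DATETIME2(7);
WHILE @i < 10000
BEGIN
    SET @d = SYSDATETIME();  -- swap in GETDATE() to compare the two
    SET @i += 1;
END;
-- Average microseconds per iteration, loop overhead included:
SELECT DATEDIFF(MICROSECOND, @start, SYSDATETIME()) / 10000.0 AS avg_microseconds_per_call;

Note the WHILE loop itself dominates this measurement, so it only gives an upper bound per call, but that is the point: even the upper bound is tiny.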