I am in the process of designing a database for high-volume data, and I was wondering what datatype to use for the primary keys?
There will be table partitioning, and the database will ultimately be clustered with hot failover to alternative datacentres.
EDIT
Tables - think of a chat system covering multiple time periods and multiple things to chat about, with multiple users chatting about each time period and thing.
Exponential growth is what I am thinking about - i.e. something could generate billions of rows in a small time period, before we could change the database or a DBA could step in and do DBA things.
Mark - I share your concern about GUIDs - I don't like coding with GUIDs flying about.
With just the little bit of info you've provided, I would recommend using a BigInt, which would take you up to 9,223,372,036,854,775,807 - a number you're not likely to ever exceed. (Don't start with an INT and think you can easily change it to a BigInt when you exceed 2 billion rows. It's possible (I've done it), but it can take an extremely long time and involve significant system disruption.)
Kimberly Tripp has an excellent series of blog articles (GUIDs as PRIMARY KEYs and/or the clustering key and The Clustered Index Debate Continues) on the issue of creating clustered indexes and choosing the primary key (related issues, but not always exactly the same). Her recommendation is that a clustered index/primary key should be:

- Unique
- Narrow
- Static
- Ever-increasing
If you use a BigInt identity (an ever-increasing value) as both your primary key and your clustered index, that should satisfy all four of these requirements.
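For illustration, a minimal sketch of that approach might look like the following (the table and column names are placeholders I've made up, not something from your schema):

    -- Hypothetical chat message table; names are placeholders.
    -- The BIGINT IDENTITY column is unique, narrow (8 bytes), static,
    -- and ever-increasing, so it works as the clustered primary key.
    CREATE TABLE dbo.ChatMessage
    (
        ChatMessageId  BIGINT IDENTITY(1,1) NOT NULL,
        TopicId        INT            NOT NULL,
        UserId         INT            NOT NULL,
        PostedAt       DATETIME2      NOT NULL DEFAULT SYSUTCDATETIME(),
        MessageText    NVARCHAR(MAX)  NOT NULL,
        CONSTRAINT PK_ChatMessage PRIMARY KEY CLUSTERED (ChatMessageId)
    );

New rows always land at the logical end of the clustered index, which keeps fragmentation low even under heavy insert volume.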
Edit: Kimberly's article I mentioned above (GUIDs as PRIMARY KEYs and/or the clustering key) talks about why a (client-generated) GUID is a bad choice for a clustering key:
But, a GUID that is not sequential - like one that has its values generated in the client (using .NET) OR generated by the newid() function (in SQL Server) can be a horribly bad choice - primarily because of the fragmentation that it creates in the base table but also because of its size. It's unnecessarily wide (it's 4 times wider than an int-based identity - which can give you 2 billion (really, 4 billion) unique rows). And, if you need more than 2 billion you can always go with a bigint (8-byte int) and get 2^63-1 rows.
SQL Server has a function called NEWSEQUENTIALID() that allows you to generate sequential GUIDs that avoid the fragmentation issue, but they still have the problem of being unnecessarily wide.
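If you do end up needing GUID keys (e.g. for merge replication or client-independent uniqueness), a rough sketch with NEWSEQUENTIALID() could look like this (table name is again just a placeholder):

    -- NEWSEQUENTIALID() can only be used in a DEFAULT constraint on a
    -- uniqueidentifier column, so the values are generated server-side in
    -- roughly increasing order - avoiding the page splits NEWID() causes -
    -- but each key is still 16 bytes wide.
    CREATE TABLE dbo.ChatSession
    (
        ChatSessionId  UNIQUEIDENTIFIER NOT NULL
            CONSTRAINT DF_ChatSession_Id DEFAULT NEWSEQUENTIALID(),
        StartedAt      DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
        CONSTRAINT PK_ChatSession PRIMARY KEY CLUSTERED (ChatSessionId)
    );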