OK, I need this spelled out one more time. I've read the articles online, and I still haven't found a definitive answer.
In SQL Server 2008, I have a "core" table of about 50k rows with lots of read activity; every query uses it in the same way. The data is updated once a month, and read hundreds of times a second.
The table has a clustered index on the fields, in the order they are most frequently accessed. Let's say that the clustered index is:
CLUSTERED INDEX
Field1 int
Field2 int
Field3 int
Field4 int
Field5 int
Now, there is not a whole lot more data than that, so it would make sense to just put the extra couple of columns in as "Included Columns", but SQL Server doesn't allow included columns on a clustered index.
So we have a second index with essentially the same fields as the clustered index, plus the other columns as "Included Columns". However, from what I've read, I believe this may be redundant?
COVERING INDEX (non-clustered)
Field1 int
Field2 int
Field3 int
Field4 int
Field5 int
INCLUDED COLUMNS
Field6 varchar(96)
Field7 varchar(96)
Does the non-clustered Index ALREADY have the columns from the clustered index defined in it?
If so, how could this second index be created with NO columns at all (besides what is already in the clustered index)? In other words, I'd like to say "This index is exactly the same as the clustered index... with a couple of Included Columns".
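To make the question concrete, here is a sketch of what I mean (table and index names are made up). SQL Server requires at least one key column in a nonclustered index, so an index with literally no columns of its own isn't possible; but since the clustering keys are stored implicitly in every nonclustered index, something like this might come close:

```sql
-- Hypothetical names; assumes the clustered index is on Field1..Field5.
-- A nonclustered index cannot have zero key columns, but the clustering
-- keys ride along implicitly in every nonclustered index, so keying on
-- Field1 alone keeps the declared part narrow:
CREATE NONCLUSTERED INDEX IX_CoreTable_Covering
    ON dbo.CoreTable (Field1)
    INCLUDE (Field6, Field7);
-- Field2..Field5 are present at the leaf level as implicit clustering-key
-- columns, so the index still covers all seven columns -- though the
-- optimizer can only seek on Field1 here.
```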
Or, would it be better to just put ALL of the columns into the clustered index key (including the two that don't identify the record)? The varchar columns do get updated more frequently (a few times a day instead of once a month), so I would have liked to keep them out of the clustered index; but I think they would sit deep enough in the key that a change to them wouldn't affect the index tree enough to cause any rebalancing.
So, is there an efficient way to set up these indexes so that all of the columns of this table are available through the index without going back to the table?
A clustered index does not need included columns. INCLUDE means storing extra data at the leaf (lowest) level of the index tree, and in a clustered index the leaf level already is the table data. So you don't need an overlapping index.
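To illustrate the point (names below are hypothetical), the clustered index alone already "covers" every column, because its leaf pages are the table rows themselves:

```sql
-- With the clustered index on the five int columns, the leaf level of
-- that index IS the table row, so Field6 and Field7 are already there.
-- No INCLUDE clause is needed (or allowed):
CREATE CLUSTERED INDEX CIX_CoreTable
    ON dbo.CoreTable (Field1, Field2, Field3, Field4, Field5);

-- A query that seeks on the clustered key reads the varchars straight
-- off the leaf page, with no lookup back to "the table":
SELECT Field6, Field7
FROM dbo.CoreTable
WHERE Field1 = @p1 AND Field2 = @p2 AND Field3 = @p3
  AND Field4 = @p4 AND Field5 = @p5;
```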
However, if memory footprint is your concern, then you need to shrink the table. With 50k rows I would consider a smallint surrogate key starting at -32768. That removes the overhead of the (wide) clustered key from every non-clustered index, which means you can afford the covering index mentioned in your question.
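A sketch of that surrogate-key layout (table and constraint names are assumptions, not from your schema):

```sql
-- A 2-byte smallint clustering key means every nonclustered index row
-- carries 2 bytes of clustering-key overhead instead of 20 (five ints).
-- IDENTITY(-32768, 1) uses the full smallint range, which comfortably
-- holds 50k rows:
CREATE TABLE dbo.CoreTable
(
    CoreId  smallint IDENTITY(-32768, 1) NOT NULL,
    Field1  int NOT NULL,
    Field2  int NOT NULL,
    Field3  int NOT NULL,
    Field4  int NOT NULL,
    Field5  int NOT NULL,
    Field6  varchar(96) NULL,
    Field7  varchar(96) NULL,
    CONSTRAINT PK_CoreTable PRIMARY KEY CLUSTERED (CoreId)
);

-- The covering index from the question now becomes compact and viable:
CREATE NONCLUSTERED INDEX IX_CoreTable_Covering
    ON dbo.CoreTable (Field1, Field2, Field3, Field4, Field5)
    INCLUDE (Field6, Field7);
```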
Note that once your execution plans are cached and the data is in the buffer pool, your queries will be served from memory. With your usage pattern the data will stay in cache for some time, and the lack of updates means you won't get statistics-driven recompiles.
However, if your data is almost static, why call SQL Server at all if performance is a concern? Cache it. Remove the network round trip, which is probably your biggest overhead given my caching comments above. We offload some lookups and caching to our clients to reduce server load (we handle 50k writes in around 20 seconds at peak load).