I have the following query:
SELECT COUNT(*)
FROM FirstTable ft
INNER JOIN SecondTable st ON ft.STID = st.STID
As you can guess, "STID" is the primary key on "SecondTable", and "FirstTable" holds a foreign key pointing to it. Here are the indexes I have:
FirstTable: NONCLUSTERED INDEX on column "STID"
SecondTable: CLUSTERED PRIMARY KEY INDEX on column "STID"
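For reference, a minimal sketch of the schema as I understand it (column lists and constraint names here are illustrative, not my real ones):

CREATE TABLE SecondTable (
    STID INT NOT NULL,
    -- the original setup: clustered primary key on STID
    CONSTRAINT PK_SecondTable PRIMARY KEY CLUSTERED (STID)
);

CREATE TABLE FirstTable (
    FTID INT NOT NULL,
    STID INT NOT NULL
);

-- non-clustered index on the foreign key column
CREATE NONCLUSTERED INDEX IX_FirstTable_STID ON FirstTable (STID);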
The query above gives me a subtree cost of 19.90 and takes 2 seconds.
After running the Database Tuning Advisor for that query, it suggested the very same index I already had on SecondTable, but non-clustered. So I tried it, with these results:
FirstTable: NONCLUSTERED INDEX on column "STID"
SecondTable: NONCLUSTERED INDEX on column "STID"
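The change itself is just dropping and recreating the primary key as non-clustered (using the assumed constraint name from the sketch above):

ALTER TABLE SecondTable DROP CONSTRAINT PK_SecondTable;
-- same key, same column, but the table becomes a heap
-- and the PK lives in a separate, narrow index
ALTER TABLE SecondTable ADD CONSTRAINT PK_SecondTable PRIMARY KEY NONCLUSTERED (STID);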
Now, the query above gives me a subtree cost of 10.97 and takes <1 second!
This 100% shatters my brain... Why would a NONCLUSTERED index perform faster than a CLUSTERED index in this scenario?
Because your query does not retrieve any actual record data from the tables: it just counts rows.
With the non-clustered indexes, it just joins the two indexes (which are smaller than the tables themselves), most probably with a MERGE JOIN.
With a clustered index, the index is the table: its leaf level contains every column of every row. So the server has to join the full table against FirstTable's non-clustered index. The table is larger, and it takes more time to traverse.
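You can check the size difference yourself. A quick sketch against the catalog views (table name taken from your question):

SELECT i.name AS index_name,
       i.type_desc,
       ps.used_page_count
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
    ON i.object_id = ps.object_id
   AND i.index_id = ps.index_id
-- a clustered index carries all columns of all rows,
-- so its page count dwarfs a narrow non-clustered index
WHERE ps.object_id = OBJECT_ID('SecondTable');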
If you issue a query like this:
SELECT SUM(first_table_field + second_table_field)
FROM FirstTable ft
INNER JOIN SecondTable st ON ft.STID = st.STID
which retrieves actual values, you will see the benefits of clustering.
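If you want to see the difference in concrete terms rather than subtree cost, compare logical reads for the two layouts:

SET STATISTICS IO ON;

SELECT COUNT(*)
FROM FirstTable ft
INNER JOIN SecondTable st ON ft.STID = st.STID;

SET STATISTICS IO OFF;

With the non-clustered index in place, the "logical reads" figure reported for SecondTable should drop noticeably, mirroring the cost change you observed.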