I mean, when the index size increases dramatically, e.g. from 1 GB to 1 TB, how does the computational cost of opening an IndexReader over that index grow? Is the trend linear?
The trend is linear, but the constant factor should be small. Also, the cost is mostly I/O, not CPU.
IndexReader loads certain data structures up front, such as the terms index and the deleted-documents bit vector; norms, FieldCache, and doc values are loaded on the first query that needs them. Except for FieldCache, loading these structures is mostly an I/O (not CPU) cost, and it should be a quite low constant factor per document.
The heavy/big stuff (postings, stored fields, term vectors) is all left on disk.
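For illustration, here is a minimal sketch (assuming a recent Lucene release and a hypothetical index path of `/path/to/index`) that times `DirectoryReader.open()`. The elapsed time is dominated by the I/O for the up-front structures described above, so it should grow roughly linearly with index size, with a small per-document constant:

```java
import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.FSDirectory;

public class OpenReaderCost {
  public static void main(String[] args) throws Exception {
    // Hypothetical index location; substitute your own.
    try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"))) {
      long start = System.nanoTime();
      // Opening the reader loads the up-front structures (terms index,
      // live-docs bit set); postings, stored fields, and term vectors
      // stay on disk until a query touches them.
      try (DirectoryReader reader = DirectoryReader.open(dir)) {
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Opened reader over " + reader.maxDoc()
            + " docs in " + elapsedMs + " ms");
      }
    }
  }
}
```

Note that the first query against the new reader will pay an additional one-time cost for any lazily loaded structures (norms, doc values), so a warm-up query is a common practice after opening.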