I saw the following in a computer science textbook,
... so I was wondering if someone could explain to me why.
One of the most common ways that logarithms arise in computer science is by repeatedly dividing some array in half, which is exactly what divide-and-conquer algorithms like binary search and mergesort do. In those cases, the number of times you can divide an array of length n in half before you get down to single-element arrays is roughly log2 n (more precisely, ⌈log2 n⌉).
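To make that concrete, here is a minimal Python sketch (the helper name `halvings` is just for illustration) that counts the halvings directly and compares the result against ⌈log2 n⌉:

```python
import math

def halvings(n: int) -> int:
    """Count how many times a length-n array can be split in half
    before every piece is down to a single element."""
    count = 0
    while n > 1:
        n = (n + 1) // 2   # keep the larger half after the split
        count += 1
    return count

for n in [1, 2, 8, 100, 1024]:
    print(n, halvings(n), math.ceil(math.log2(n)))
```

Either way you count it, the answer only grows by one each time n doubles, which is the hallmark of a base-two logarithm.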
Another very common way logarithms arise is by looking at the bits of a number: writing n in binary takes ⌊log2 n⌋ + 1 ≈ log2 n bits. Algorithms like radix sort that sometimes work one bit at a time often give rise to logs like these. Other algorithms like the binary GCD algorithm work by repeatedly dividing out powers of two and therefore also end up with log factors floating around.
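In the same spirit, here is a small sketch of the bit-counting point, written out by hand so the loop is visible (Python's built-in `int.bit_length()` is shown alongside as a check; the function name `bits_needed` is just for illustration):

```python
import math

def bits_needed(n: int) -> int:
    """Number of binary digits needed to write n (assumes n >= 1)."""
    bits = 0
    while n > 0:
        n >>= 1        # strip off the lowest bit
        bits += 1
    return bits

for n in [1, 5, 255, 256, 10**6]:
    print(n, bits_needed(n), math.floor(math.log2(n)) + 1, n.bit_length())
```

A bit-at-a-time radix sort makes one pass per bit, hence the log factor in its running time, and binary GCD similarly strips off one power of two per step.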
Logarithms in physics, math, and other sciences often arise because you're working with continuous processes that grow as a function of time. The natural logarithm comes up in those contexts because the "natural" growth rate of a process over time is modeled by e^x (for some definition of "natural" growth rate). But in computer science, exponential growth usually occurs as a consequence of discrete processes like the divide-and-conquer algorithms described above or the manipulation of binary values. Consequently, we typically reach for log2 n, simply because it arises so frequently.
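As a minimal sketch of the continuous-growth picture being alluded to (k is just a constant growth rate):

```latex
\frac{df}{dt} = k\,f(t)
\;\Longrightarrow\;
f(t) = f(0)\,e^{kt}
\;\Longrightarrow\;
t = \frac{1}{k}\,\ln\frac{f(t)}{f(0)}
```

Solving for the time produces ln, which is why the natural log is the default in those fields, whereas the discrete halving and doubling processes above produce log2.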
This isn't to say that we always use base-two logarithms in CS. For example, the analysis of AVL trees often involves logarithms whose base is the golden ratio φ, due to the presence of Fibonacci numbers. The constant e also shows up in the analysis of many randomized algorithms; the standard analysis of quicksort, for instance, involves harmonic numbers and thus connects back to natural logarithms. Those are cases where the growth rate is modeled by something else - Fibonacci numbers or the exponential function - and so we opt for different log bases there. It's just that it's sufficiently common to work with binary numbers or to divide things in half that base-two logarithms end up being the default.
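For the AVL example, here is a sketch of the standard recurrence behind that claim, with N(h) denoting the minimum number of nodes in an AVL tree of height h:

```latex
N(h) = N(h-1) + N(h-2) + 1, \quad N(0) = 1,\ N(1) = 2
\;\Longrightarrow\; N(h) = \Theta(\varphi^{h})
\;\Longrightarrow\; h \le \log_\varphi n + O(1) \approx 1.44\,\log_2 n
```

The recurrence is Fibonacci-shaped, so the node count grows like φ^h, and inverting it gives a height bound whose natural base is φ rather than 2.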
In many cases, it doesn't even matter what base you choose. For example, in big-O notation, all logarithms are asymptotically equivalent (they only differ by a multiplicative constant factor), so we usually don't even specify the base when writing something like O(n log n) or O(log n).
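The identity behind that last point is just the change-of-base formula:

```latex
\log_a n = \frac{\log_b n}{\log_b a}
= \underbrace{\frac{1}{\log_b a}}_{\text{a constant}} \cdot \log_b n,
\qquad\text{so}\qquad
O(\log_a n) = O(\log_b n).
```

Switching from log2 to ln or log10 only rescales by a fixed constant, which big-O notation absorbs.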