I've seen many implementations use the following to find the midpoint of two indices:
int mid = lo + (hi - lo) / 2;
instead of
int mid = (lo + hi) / 2;
Mathematically, I see no difference, and yet I've never seen anyone use the latter. Is there a difference between the two computationally?
There is a maximum value a 32-bit signed integer can hold (2,147,483,647). To keep the arithmetic simple, suppose that maximum were 100, and that:
int lo = 60;
int hi = 80;
Then lo + hi = 60 + 80 = 140 > 100, so (lo + hi) / 2 is dangerous: the intermediate sum overflows before the division ever happens. The form lo + (hi - lo) / 2 avoids this, because no intermediate result ever exceeds hi (assuming lo <= hi and both are non-negative).
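Here is a minimal sketch of the difference, assuming Java (where signed integer overflow wraps around rather than being undefined) and hypothetical index values chosen near the real 32-bit limit:

```java
public class MidpointOverflow {
    public static void main(String[] args) {
        // Hypothetical indices near Integer.MAX_VALUE (2,147,483,647)
        int lo = 2_000_000_000;
        int hi = 2_100_000_000;

        // (lo + hi) exceeds Integer.MAX_VALUE, so the sum wraps to a negative
        // number and the "midpoint" comes out negative -- an invalid index.
        int unsafeMid = (lo + hi) / 2;

        // lo + (hi - lo) / 2 never produces an intermediate value larger than hi,
        // so it cannot overflow while lo <= hi and both are valid indices.
        int safeMid = lo + (hi - lo) / 2;

        System.out.println("unsafe: " + unsafeMid); // prints a negative value
        System.out.println("safe:   " + safeMid);   // prints 2050000000
    }
}
```

In C or C++ the same unsafe expression is worse still: signed overflow is undefined behavior, so the compiler is free to produce anything, not merely a wrong (negative) midpoint.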