I'm wondering if there's an upper limit to the number of commits that a git repository can handle.
In a solo project I'm working on right now, I've been coding locally, committing/pushing changes in git, then pulling the changes on my development server.
I treat this as an easier alternative to working locally and uploading changes via FTP... Fortunately/Unfortunately it's such an easy workflow that I sometimes go through many edit/commit/push/pull/browser-refresh cycles while coding.
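For reference, a single cycle looks roughly like this (the branch name and commit message are just placeholders):

    # on my local machine
    git add -A
    git commit -m "small tweak"
    git push origin master

    # on the development server
    git pull origin master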
I'm wondering if this is going to turn around and bite me somewhere down the line. If it's likely to be a problem, how can I avoid that trouble? It seems like a rebase (squashing the commits down) might be the way to go, especially since I won't have to worry about conflicting branches, etc.
Well the "upper limit" would likely be the point at which a SHA1 collision occurs, but since the SHAs are 40 hexadecimal digits long (16^40 ~ 1.4x10^48 possibilities), it's so close to zero possibility that it's not even funny. So there's roughly a zero percent chance you'll have any problems for at least the next several millennia.
Hyperbolic example (just for fun): at 1 commit per minute, each changing a single file, you use three new SHAs per commit (blob, tree, commit). That's 3 new SHAs/minute ~ 1.6 million SHAs/year ~ 1.6 billion SHAs/millennium, or about 1x10^-37 % of the space used each millennium. (Even at 1000 files per commit per minute, it's still only about 3.6x10^-35 %.)
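If you want to sanity-check that arithmetic, here's a rough sketch with bc (assuming you have it installed; the numbers are the same ones used above):

    # new SHAs per year at 3 per minute
    echo "3 * 60 * 24 * 365" | bc
    # -> 1576800, i.e. ~1.6 million/year, ~1.6 billion/millennium

    # percentage of the 16^40 SHA-1 space used up per millennium
    echo "scale=60; (1576800 * 1000) / (16^40) * 100" | bc
    # -> ~1x10^-37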
That being said, if you want to clean up your history, squashing the commits down with an interactive rebase is probably your best bet. Just make sure you understand the implications if you've shared the repo publicly at all, since rebasing rewrites history.
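A minimal sketch of what that might look like (the commit count is arbitrary; adjust it to however far back you want to squash):

    # interactively rebase the last 50 commits
    git rebase -i HEAD~50
    # in the editor that opens, leave the first commit as "pick" and change
    # the rest to "squash" (or "fixup" to also discard their messages)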
You might also want to garbage collect after rebasing to free up some space (make sure the rebase worked correctly first, though, and note that you may need to tell it to collect everything, because by default it won't prune anything newer than two weeks old).
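Something along these lines should do it (the reflog-expire step is only needed if you want the old pre-rebase objects gone right away instead of after the default two-week grace period):

    # basic cleanup
    git gc

    # more aggressive: drop unreachable (pre-rebase) objects immediately
    git reflog expire --expire-unreachable=now --all
    git gc --prune=now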