I have studied a lot of algorithms, and it appears that for an algorithm to reach the best possible complexity for an operation, it has to sacrifice the other complexity (time for space, or space for time). I want to understand what prevents time and space complexity from simply being inversely proportional.
To exaggerate: economics says that they are inversely proportional, but mathematics says that they are not. If you plot the time and space complexity of a large number of algorithms, all of the competitive algorithms will lie on a Pareto curve trading time against space, because an algorithm that takes both more time and more space than a competitor is not competitive. There is even a family of techniques explicitly called time-memory tradeoffs, which let you tune how much of one resource you spend to save the other.

But in the very long term the two are theoretically linked. It takes time at least N to use N bits of space, and in the other direction, a deterministic machine with only N bits of state has at most 2^N possible configurations, so it cannot run for more than 2^N steps without repeating a configuration, which means it has gone into an infinite loop.
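To make that tradeoff concrete, here is a toy sketch (my own illustration; the function f, the constants, and the names are all made up, not something from a particular paper): inverting a function over a small domain either by precomputing a full inverse table, or by searching the whole domain on every query.

```python
# Toy sketch of a time-memory tradeoff (illustrative only; f and the names
# below are invented for this example): inverting a function over a small domain.

DOMAIN_BITS = 16
DOMAIN = range(2 ** DOMAIN_BITS)

def f(x: int) -> int:
    # Stand-in for a function that is cheap to compute forwards but has no
    # obvious closed-form inverse. Multiplying by an odd constant mod 2^16 is
    # a bijection on the domain, so every output has exactly one preimage.
    return (x * 2654435761) % (2 ** DOMAIN_BITS)

# Option A: spend memory to save time.
# Precompute a full inverse table: O(2^16) space, O(1) time per query.
INVERSE_TABLE = {f(x): x for x in DOMAIN}

def invert_with_table(y: int) -> int:
    return INVERSE_TABLE[y]

# Option B: spend time to save memory.
# Store nothing and brute-force the domain: O(1) space, O(2^16) time per query.
def invert_by_search(y: int) -> int:
    return next(x for x in DOMAIN if f(x) == y)

assert invert_with_table(f(12345)) == 12345
assert invert_by_search(f(12345)) == 12345
```

The two options are simply two points on the Pareto curve described above; practical time-memory tradeoffs interpolate between such extremes, storing a partial table and doing a partial search.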
There is a very theoretical approach in "A machine-independent theory of the complexity of recursive functions" by Blum (JACM 1967). Theorem 3 proves a link between any two complexity measures, but the bound is, I think, so loose as to be useless for any practical purpose. The paper does say, "This means that Phi(n) and Phi'(n) do not differ too much from each other" (where I have translated the symbols in the table as Phi and Phi'), but to agree with "do not differ too much" you have to accept that n and 2^n are pretty much the same thing.
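For what it's worth, my recollection of the shape of that relatedness result (stated from memory, so treat the exact form as an assumption; I am reusing Phi and Phi' from the quote, with the program index suppressed) is that a single total recursive function g bounds each measure in terms of the other:

```latex
% Rough shape of the relatedness result, written from memory: for any two
% complexity measures \Phi and \Phi' there exists a total recursive function g
% such that, for all but finitely many n,
\Phi'(n) \le g\bigl(n, \Phi(n)\bigr)
\qquad \text{and} \qquad
\Phi(n) \le g\bigl(n, \Phi'(n)\bigr).
```

Since g is only required to be recursive, it can grow as fast as you like (exponentially, say), which is exactly why "do not differ too much" is compatible with an n versus 2^n gap.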