From what I understood of Wikipedia's explanation, quicksort's space complexity comes from its recursive nature. I'm curious whether it's possible to implement quicksort non-recursively and, in doing so, achieve constant space complexity.
It's entirely possible to implement it non-recursively, but you do that by maintaining a stack separate from the normal function call/return stack. It may save some space by storing only the essential information (a pair of indices per postponed segment) instead of a lot of (mostly identical) function return addresses, but its size is still going to be logarithmic, not constant.
Since there's been discussion about whether (for example) the algorithm cited by @Gordon Linoff in his answer is really a Quicksort, I'll refer to C.A.R. Hoare's paper describing Quicksort, which seems to me the most authoritative source available about what does or does not constitute a Quicksort. According to his paper:
Meanwhile, the addresses of the first and last items of the postponed segment must be stored.
While it's not too much of a stretch to store something (more or less) equivalent to an address (e.g., an index) rather than an actual address, it seems to me that when the description of the algorithm directly states that you must store an address, an algorithm that stores neither an address nor anything roughly equivalent to one is no longer an implementation of that same algorithm.