Kinect Fusion requires the client code to specify what is effectively a bounding box and a voxel resolution within that box before initialisation. The details of the reconstruction within the box are held in GPU memory, so one runs into limits rather quickly. Certainly for a space the size of, say, a standard residential house at high resolution, the amount of GPU memory required is far too high.
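To put rough numbers on it (my own back-of-the-envelope figures, not from the SDK documentation: a 10 m × 10 m × 3 m space at 5 mm voxels, and an assumed 4 bytes per voxel for the TSDF value plus integration weight):

```cpp
#include <cstdio>

int main() {
    // Illustrative figures only: a modest house-sized footprint at "high" resolution.
    const double spanX = 10.0, spanY = 10.0, spanZ = 3.0; // metres
    const double voxelSize = 0.005;                       // 5 mm voxels
    const double bytesPerVoxel = 4.0;                     // assumed: 16-bit TSDF + 16-bit weight

    const double voxels = (spanX / voxelSize) * (spanY / voxelSize) * (spanZ / voxelSize);
    const double gib = voxels * bytesPerVoxel / (1024.0 * 1024.0 * 1024.0);
    std::printf("%.2e voxels -> %.1f GiB of GPU memory\n", voxels, gib);
    return 0;
}
```

That works out to roughly 2.4 billion voxels and about 9 GiB, before counting anything else the pipeline keeps on the GPU, which is well beyond the ~512³ volumes Fusion is typically run with.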
The Fusion SDK allows one to copy the volume data to/from GPU memory, and to reset the bounding volume, resolution etc. at any time, so in theory one could synthesise a large volume by stitching together a number of small volumes, each of which a normal GPU can handle (roughly along the lines of the sketch below). It seems to me, though, that this technique has quite a few subtle and difficult problems, such as keeping adjacent blocks aligned in the face of camera-tracking drift and blending the data where blocks overlap.
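For what it's worth, here is a minimal host-side sketch of the stitching step I have in mind: each small sub-volume is exported from the GPU (via the SDK's volume export call) into a plain block of TSDF values and weights, and then merged into one large CPU-side grid by weighted averaging where blocks overlap. Everything here (struct layout, sizes, the dummy data in `main`) is my own assumption for illustration, not the Fusion SDK's actual export format, and it deliberately ignores the hard parts mentioned above (tracking across blocks, drift correction):

```cpp
#include <cstdio>
#include <vector>

struct SubVolume {
    int offsetX, offsetY, offsetZ;   // position of this block in the global grid (voxels)
    int sizeX, sizeY, sizeZ;         // block dimensions (voxels)
    std::vector<float> tsdf;         // truncated signed distance per voxel
    std::vector<float> weight;       // integration weight per voxel
};

struct GlobalGrid {
    int sizeX, sizeY, sizeZ;
    std::vector<float> tsdf, weight;
    GlobalGrid(int x, int y, int z)
        : sizeX(x), sizeY(y), sizeZ(z),
          tsdf(size_t(x) * y * z, 0.0f), weight(size_t(x) * y * z, 0.0f) {}
    size_t index(int x, int y, int z) const {
        return (size_t(z) * sizeY + y) * sizeX + x;
    }
};

// Merge one exported sub-volume into the global grid. Where blocks overlap,
// voxels are combined by weight, so seams blend rather than overwrite.
// Assumes the block lies entirely inside the global grid.
void stitch(GlobalGrid &grid, const SubVolume &sub) {
    for (int z = 0; z < sub.sizeZ; ++z)
        for (int y = 0; y < sub.sizeY; ++y)
            for (int x = 0; x < sub.sizeX; ++x) {
                size_t s = (size_t(z) * sub.sizeY + y) * sub.sizeX + x;
                size_t g = grid.index(sub.offsetX + x, sub.offsetY + y, sub.offsetZ + z);
                float w = grid.weight[g] + sub.weight[s];
                if (w > 0.0f)
                    grid.tsdf[g] = (grid.tsdf[g] * grid.weight[g] + sub.tsdf[s] * sub.weight[s]) / w;
                grid.weight[g] = w;
            }
}

int main() {
    GlobalGrid house(256, 256, 128);                             // illustrative sizes only
    SubVolume block{0, 0, 0, 128, 128, 128,
                    std::vector<float>(128 * 128 * 128, 1.0f),   // dummy TSDF values
                    std::vector<float>(128 * 128 * 128, 1.0f)};  // dummy weights
    stitch(house, block);
    std::printf("merged voxel[0] = %.2f\n", house.tsdf[house.index(0, 0, 0)]);
    return 0;
}
```

The point is only that, once the data is on the CPU, combining overlapping blocks is the same weighted-average rule the TSDF integration itself uses; the genuinely difficult part is deciding the blocks' relative poses in the first place.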
Nevertheless, this seems to be what Kintinuous does. However, Kintinuous does not appear to have any source code (or object code, for that matter) publicly available. This is also mentioned in this SO post.
I was wondering whether this has been implemented in any form with public source code. I've not been able to find anything on this except the above-mentioned Kintinuous.
Kintinuous is open source now; its first commit was on Oct 22, 2015.
Here is another blog covering it under its Kintinuous tag: https://hackaday.com/tag/kintinuous/