If there isn't, how feasible would it be to write one? A filesystem which, for each directory, keeps the size of its contents recursively, and which is kept up to date not by re-calculating the size on each change to the filesystem, but by, for example, updating the directory's size when a file is removed or grows.
From the filesystem's point of view, the size of a directory is the size of the information recording its existence, which has to be stored physically on the medium. Note that the "size" of a directory containing files totalling 10 GB will actually be about the same as the "size" of an empty directory, because the metadata marking its existence takes the same storage space. That's why the total size of the files (and sockets, links and other stuff inside) is not actually the same as the "directory size". Subdirectories can be mounted from various locations, including remote ones, and even recursively. In a sense, directory size is just a human notion, because files are not physically "inside" directories: a directory is just marked as a container, in exactly the same way a special file (e.g. a device file) is marked as special. Recounting and updating a total directory size depends more on the NUMBER of items in it than on the sum of their sizes, and a modern filesystem can keep hundreds of thousands of files (if not more) "in" one directory, even without subdirectories, so keeping those sizes up to date could be quite a heavy task compared with the possible profit from having the information. In short, when you run e.g. "du" (disk usage), or when you compute a directory's size on Windows, having the kernel and the filesystem driver do the same work wouldn't be faster: counting is counting.
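To see why the cost is driven by the number of entries rather than the bytes they hold, here is a minimal sketch of what du-style counting has to do (the function name `tree_size` is just for illustration; real du also accounts for block sizes and hard links):

```python
import os
import stat

def tree_size(path):
    """Sum the apparent sizes of all regular files under *path*,
    roughly what `du` has to compute.  Note the work done: one
    stat() per entry, so the cost scales with the NUMBER of files,
    not with how large they are."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(path):
        for name in filenames:
            try:
                st = os.lstat(os.path.join(dirpath, name))  # don't follow symlinks
            except OSError:
                continue  # entry vanished or is unreadable mid-walk
            if stat.S_ISREG(st.st_mode):  # count regular files only
                total += st.st_size
    return total
```

A directory of a million tiny files costs a million stat() calls here, while one 10 GB file costs a single call; the kernel would face the same per-entry work.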
There are quota systems, which keep and update information about the total size of files owned by a particular user or group. They are, however, limited to monitoring partitions separately, since quota may be enabled or not per partition. Moreover, quota usage gets updated, as you said, when a file grows or is removed, which is why the information can become inaccurate; for this reason the quota records are rebuilt from time to time (e.g. with quotacheck run from a cron job), by scanning all files in all directories "from scratch" on the partition on which quota is enabled.
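That pattern, cheap incremental updates plus an occasional authoritative rescan to correct accumulated drift, is easy to sketch. The class and method names below are made up for illustration, not any real quota API:

```python
import os
import stat

class SizeTracker:
    """Keeps a running total of file sizes under a root directory.
    Incremental updates are cheap but can drift (missed events,
    crashes); rescan() rebuilds the total "from scratch", the way
    quota records are periodically rebuilt."""

    def __init__(self, root):
        self.root = root
        self.total = 0
        self.rescan()

    def file_added(self, size):
        self.total += size      # O(1) update on file creation/growth

    def file_removed(self, size):
        self.total -= size      # O(1) update on removal/truncation

    def rescan(self):
        """Authoritative but expensive: walk every entry."""
        total = 0
        for dirpath, _dirs, files in os.walk(self.root):
            for name in files:
                try:
                    st = os.lstat(os.path.join(dirpath, name))
                except OSError:
                    continue
                if stat.S_ISREG(st.st_mode):
                    total += st.st_size
        self.total = total
```

The design trade-off is exactly the one in the question: the incremental path is fast, but only the full walk is guaranteed correct.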
Also note that the bottleneck of I/O operation speed (including reading information about files) is usually the speed of the medium itself, then the communication bus, and then the CPU, while you seem to be assuming every filesystem is as fast as a RAM FS. A RAM FS is probably the most trivial filesystem, kept entirely in RAM, which makes I/O operations very fast. You could build one as a module and try to add the functionality you've described; you would learn many interesting things :)
FUSE stands for "Filesystem in Userspace", and filesystems implemented with FUSE are usually quite slow. They make sense when, in a particular case, functionality is more important than speed; e.g. you could create a pseudo-filesystem based on temperature readings from the newly bought e-thermometer you connected to your computer via USB. They're no speed daemons, though, you know :)