Tags: windows, winapi, api, legacy

Virtualization of Legacy API and co-existence with more modern API?


I don't mean for this question to be flame bait, but I'll be using Microsoft and their Win32 API as an example of a legacy API.

Now, what I am wondering here is that Microsoft is spending a lot of money and energy maintaining their legacy API, including all of the glitches/bugs/workarounds that are needed to keep the API behaving the same. I'm aware that in Windows 7 they are providing a way for customers to run their applications in a "Windows XP" VM, which would be one way for them to start cleaning up their Win32 API, because they could then push all of those applications into the "Windows XP" VM.

So what I am wondering is: is it possible to virtualize a legacy API in such a way that a customer/program can still access and use it, yet at the same time take advantage of the newer versions/APIs? Because as far as I understand it, if the application is run in the "Windows XP" VM, it won't be able to access any of the newer APIs/features of Windows 7.


Solution

  • The thing that puzzles me about this question when it comes up is that Windows has been doing this since NT came out in 1993. This is how NT runs DOS and Win16 programs, and how it always has: the NTVDM virtualization layer runs 16-bit apps under Win32 with very little special support from the core OS. This is just one example - another is WINE, which as I understand it does a pretty reasonable job of running Windows apps on top of an API set that is very different from that of Windows. So it is definitely possible; a toy sketch of the pattern follows below.
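
    As a minimal sketch of the pattern such a layer uses - and this is a toy, with every name (legacy_move_to, modern_canvas_move_to) invented for illustration, nothing like the real NTVDM or WINE internals - the legacy entry point keeps its old signature and semantics, but its body just translates arguments and forwards to the newer implementation:

        #include <stdio.h>

        /* The "modern" API the platform actually maintains (hypothetical). */
        typedef struct { double x, y; } ModernPoint;

        static void modern_canvas_move_to(ModernPoint p)
        {
            printf("modern: move to (%.1f, %.1f)\n", p.x, p.y);
        }

        /* The "legacy" API, kept alive as a thin shim: old callers used
           integer device coordinates, so the shim converts and forwards,
           and old code keeps working on top of the new implementation. */
        void legacy_move_to(int x, int y)
        {
            ModernPoint p = { (double)x, (double)y };
            modern_canvas_move_to(p);
        }

        int main(void)
        {
            legacy_move_to(10, 20);  /* an "old" caller using the old API */
            return 0;
        }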

    The more pertinent question is why Microsoft would consider it. For it to seem necessary, you have to believe two things: 1) there is something better to replace the Win32 API with, and 2) maintaining the Win32 API is a burden.

    Both of these are questionable. In the case of kernel duties, such as accessing hardware, synchronization, threads, processes, and memory, the Win32 API does a pretty good job, and is ultimately quite close to what the kernel really does. If you think there is a better API, then that must mean there is also a better kernel; I personally don't think that NT needs replacing right now. For graphics and windowing, admittedly GDI32 is a bit long in the tooth, but Microsoft solved that problem by building WPF right alongside it.

    This brings in the burden question. Sure, there are two APIs to maintain, but if you virtualized GDI on top of WPF you would still have to maintain both anyway, so there is no benefit there. The advantage of running both in parallel is that GDI already exists and is already tested. All you have to do is fix the occasional bug, whereas a new virtualization layer would have to be written and tested all over again, which takes time away from making WPF better.
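
    As an aside, to put the kernel point above in concrete terms, here is a minimal, Windows-only sketch (error handling mostly trimmed) using real Win32 calls: CreateThread and WaitForSingleObject are thin wrappers over a kernel thread object and the scheduler, which is part of why this slice of the API has aged so well.

        #include <windows.h>
        #include <stdio.h>

        /* Thread entry point in the form the kernel expects. */
        static DWORD WINAPI worker(LPVOID arg)
        {
            printf("hello from thread, arg=%d\n", *(int *)arg);
            return 0;
        }

        int main(void)
        {
            int value = 42;
            HANDLE h = CreateThread(NULL, 0, worker, &value, 0, NULL);
            if (h == NULL)
                return 1;
            WaitForSingleObject(h, INFINITE);  /* block on the kernel object */
            CloseHandle(h);                    /* drop our handle to it */
            return 0;
        }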

    In terms of maintaining backward compatibility, that isn't as much of a burden as it sounds. It is mainly a testing question - you have to verify that the API behaviour doesn't change, but again, those tests have already been written, so it isn't really any extra work; a sketch of such a test is below.
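
    As a sketch of what one of those behavioural tests looks like, here is a tiny example built around one real, documented quirk: unlike strlen, Win32's lstrlenA returns 0 when passed NULL. A back-compat suite pins behaviour like this down so later changes cannot silently break the old callers that depend on it.

        #include <windows.h>
        #include <assert.h>

        int main(void)
        {
            /* Old callers rely on lstrlenA tolerating NULL; the suite
               asserts the quirk forever, so the ongoing cost is running
               the test, not writing it again. */
            assert(lstrlenA(NULL) == 0);
            assert(lstrlenA("abc") == 3);
            return 0;
        }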

    So, to answer a question with a question, why would they bother?