I distinctly remember, and I don’t doubt you do too, the story about Jim Allchin declaring to the press that Windows Vista was secure enough to be used without an anti-virus installed. While this comment was seen by some as a small communication mistake, judging by how the press and forum users reacted to the declaration, it is actually far from being completely false, and in more ways than one.

The Microsoft operating systems (to speak only of these, as the virus problem is not as significant on other OSes, due to factors such as their traditional designs or their market share) have implemented a set of technologies in recent years that make it possible to raise the security of the OS to a point where most viruses are rendered ineffective. The most obvious, and most effective, one is rights reduction: having people who only perform common “user-level” tasks not run as Administrator. That alone would have rendered our Vundo variant completely ineffective, as this malware tries to write files to system directories and to create entries in areas of the registry that are restricted to Administrators, and to which members of the Users group have no write access. The test is simple: take an XP system (not even Vista) with a user running with user-grade rights but without anti-virus protection, and the same XP system with a common anti-virus product but in the hands of a user running with administrative privileges, and put both in the situation in which they encountered the Vundo variant: the anti-virus-protected system is definitely going to be infected just the way it happened the first time, while the non-admin XP system will remain clean.
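
To make this concrete, here is a minimal sketch, assuming a Win32/C build environment (and obviously not the actual Vundo code, the dropped file name below is made up for the illustration), of the two kinds of writes this sort of malware needs to perform; both fail with ERROR_ACCESS_DENIED when the process runs as a member of the Users group:

    /* Sketch: attempt the writes a Vundo-style installer needs and watch
       them fail for a non-admin user. Build with the Windows SDK and link
       against advapi32.lib. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY hKey;
        HANDLE hFile;
        LONG rc;

        /* HKLM\...\Run is a classic persistence point; by default only
           Administrators have write access to it. */
        rc = RegCreateKeyExA(HKEY_LOCAL_MACHINE,
                             "Software\\Microsoft\\Windows\\CurrentVersion\\Run",
                             0, NULL, 0, KEY_SET_VALUE, NULL, &hKey, NULL);
        if (rc == ERROR_ACCESS_DENIED)
            printf("HKLM Run key: write denied (standard user).\n");
        else if (rc == ERROR_SUCCESS) {
            printf("HKLM Run key: write allowed (admin rights).\n");
            RegCloseKey(hKey);
        }

        /* Same story for %windir%\System32, where the malware drops its DLL. */
        hFile = CreateFileA("C:\\Windows\\System32\\droppedpayload.dll",
                            GENERIC_WRITE, 0, NULL, CREATE_NEW,
                            FILE_ATTRIBUTE_NORMAL, NULL);
        if (hFile == INVALID_HANDLE_VALUE && GetLastError() == ERROR_ACCESS_DENIED)
            printf("System32: write denied (standard user).\n");
        else if (hFile != INVALID_HANDLE_VALUE) {
            printf("System32: write allowed (admin rights).\n");
            CloseHandle(hFile);
            DeleteFileA("C:\\Windows\\System32\\droppedpayload.dll");
        }
        return 0;
    }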

It seems clear that the method used to prevent the infection was different in each case: one was trying to detect the threat in order to stop it, while the other made it ineffective without needing to detect it at all. Admittedly, the reduced-rights approach can only be effectively implemented at the OS level, like many of the other protection technologies, such as DEP (Data Execution Prevention) or PatchGuard, a technology that prevents on-the-fly modification of the Windows kernel and that some anti-virus companies even fear could render some of their own protection mechanisms ineffective. Some of these protection and risk-mitigation technologies also work directly at the compiler level, such as the /GS switch of the Microsoft compilers, which aims to protect the stack against buffer overruns that overwrite the return address the program should jump to after a function call (a classic vulnerability, if not the most classic one).
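
For reference, the class of flaw /GS targets can be as simple as this deliberately broken sketch: compiled with /GS, the compiler places a security cookie between the local buffer and the saved return address and aborts the process when the overrun clobbers it, instead of letting execution jump wherever the attacker’s input says.

    /* The textbook stack buffer overrun that /GS is designed to catch. */
    #include <string.h>

    static void copy_name(const char *input)
    {
        char name[16];
        /* No length check: anything longer than 15 characters spills past
           the buffer and overwrites what sits above it on the stack,
           including the saved return address. */
        strcpy(name, input);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            copy_name(argv[1]);   /* attacker-controlled input */
        return 0;
    }

On a /GS build, feeding this program an overly long argument ends in an immediate process termination rather than attacker-controlled code execution.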

The whole Windows code base and the other Microsoft products are also tested with various internally developed, security-oriented code review tools and fuzzers that aim to detect security bugs in the code (such as the buffer overflow I was talking about earlier). As these tools and fuzzers are constantly updated by Microsoft’s security engineers and tweaked to catch flaws they failed to identify in the past, it is logical to expect the code to get more secure as time goes by, as vulnerabilities are discovered and the tools improve. Granted, these tools are more a process integrated into the development cycle than a security technology built into the OS per se, but had they been applied to the Windows 2000 and early Windows XP source code, disasters like Blaster or Sasser would never have existed, since the security bugs they exploited would have been detected right away, before the products had even shipped. Even though it is not a technology per se, the best protection of them all is the one that is proactive and closes as many doors as possible against unsolicited code getting into the machine in the first place: even if it cannot be “seen” by the end user as a process, a service, or an option, Microsoft’s focus on security, which led to the establishment of its own security guidelines, the SDL (Security Development Lifecycle), is making the OS safer, especially against threats that require no user intervention or software installation, such as the handling of malformed files or network communications, and especially when those are handled by system services or elevated-rights processes, as was the case for both Blaster and Sasser. Obviously, these mechanisms and the improved coding will not eliminate all security bugs, and they won’t do anything for a user who decides to run AnnaKornikovaNude.exe on his computer, but that kind of risk can be mitigated by stripping users of their admin rights, as we have seen earlier.
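
As a caricature of what a fuzzer does (Microsoft’s internal tools are obviously far more sophisticated, and the parse_record routine below is a made-up stand-in with a deliberate bug), the idea is simply to throw large numbers of malformed inputs at parsing code and let the crashes reveal the memory errors:

    /* Toy fuzzer: feed random malformed buffers to a parsing routine and
       let a crash (or a /GS abort) expose the overflow. */
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Stand-in for the code under test, e.g. a file or packet parser.
       It trusts a length byte taken from the input itself. */
    static void parse_record(const unsigned char *data, size_t len)
    {
        unsigned char field[64];
        size_t claimed;

        if (len < 1)
            return;
        claimed = data[0];                 /* attacker-controlled length */
        if (claimed > len - 1)
            claimed = len - 1;
        memcpy(field, data + 1, claimed);  /* overflows when claimed > 64 */
        (void)field;
    }

    int main(void)
    {
        unsigned char buf[4096];
        size_t len, j;
        int i;

        srand((unsigned)time(NULL));
        for (i = 0; i < 100000; i++) {
            len = (size_t)(rand() % sizeof(buf));
            for (j = 0; j < len; j++)
                buf[j] = (unsigned char)(rand() % 256);
            parse_record(buf, len);        /* a crash here means a bug found */
        }
        return 0;
    }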

Microsoft could go even one step further with these security technologies, with a feature that I would love to see in a corporate environment as an IT professional, but much less as a geek and computer enthusiast who likes to fiddle with binaries: I am talking about enforcement of digital signature checks.
Most executables and other binaries such as DLLs now come in signed form. Most of Microsoft’s recent binaries are indeed signed using Authenticode, as are the modules of many third-party companies. This works by purchasing a certificate from a certificate authority such as VeriSign and using it to sign the modules, which makes it possible both to identify their source or provider and to verify that they have not been corrupted or modified with more or less malicious intent. If it were possible to allow the execution of modules from chosen software providers only, ensuring that only code from trusted sources runs on a machine would be rather easy. While it wouldn’t prevent every kind of infection or intrusion (especially buffer overruns and other in-memory modifications of running processes), and would only be effective at the moment a module is loaded and its signature checked, it would mitigate a great many of the attacks known today and make the unauthorized installation of software and malware much more difficult than it is now. It could also be a good way for software companies to slow down piracy, as piracy often relies on binary patching, which would invalidate the digital signature and in turn prevent the pirated product from running. To make such a patched product run, the certificate check enforcement would have to be disabled, which would dramatically decrease the security level of the system, a decision that most security-conscious system administrators would be reluctant to make.
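
The checking half of this already exists; here is a minimal sketch using the Win32 WinVerifyTrust API (error handling trimmed, and the checksig name is just an example). The enforcement half, refusing to load anything whose publisher is not on an approved list, is the part that would have to come from the OS:

    /* Verify the Authenticode signature of a file with WinVerifyTrust.
       Build with the Windows SDK and link against wintrust.lib. */
    #include <windows.h>
    #include <wintrust.h>
    #include <softpub.h>
    #include <string.h>
    #include <stdio.h>

    int wmain(int argc, wchar_t **argv)
    {
        WINTRUST_FILE_INFO fileInfo;
        WINTRUST_DATA trustData;
        GUID action = WINTRUST_ACTION_GENERIC_VERIFY_V2;
        LONG status;

        if (argc < 2) {
            wprintf(L"usage: checksig <file>\n");
            return 1;
        }

        memset(&fileInfo, 0, sizeof(fileInfo));
        fileInfo.cbStruct = sizeof(fileInfo);
        fileInfo.pcwszFilePath = argv[1];

        memset(&trustData, 0, sizeof(trustData));
        trustData.cbStruct = sizeof(trustData);
        trustData.dwUIChoice = WTD_UI_NONE;            /* no dialog boxes */
        trustData.fdwRevocationChecks = WTD_REVOKE_NONE;
        trustData.dwUnionChoice = WTD_CHOICE_FILE;
        trustData.pFile = &fileInfo;

        status = WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &action, &trustData);
        if (status == ERROR_SUCCESS)
            wprintf(L"Valid signature: signed by a trusted publisher and unmodified.\n");
        else if (status == TRUST_E_NOSIGNATURE)
            wprintf(L"No Authenticode signature on this file.\n");
        else
            wprintf(L"Signature invalid or untrusted (0x%08lX).\n", status);
        return (status == ERROR_SUCCESS) ? 0 : 1;
    }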

While operating systems, like any other kind of software, will probably never be fully secure, I believe that anti-virus products are failing to innovate and to work on new methods to deal with today’s dangers. Anti-virus technology has hardly changed since the early 90s, while the malware world itself has changed almost completely. I am afraid that their failure to adapt will cause their demise: even the harsh competition between the different anti-virus vendors wasn’t enough to create any technological evolution in this area, and the real innovation is now coming from the OS, the weak parent they were supposed to protect. At any rate, anti-virus licenses cost a lot of money, especially when you have a large number of workstations and servers to protect, and it turns out that people do not want to buy products they find useless, especially in companies where the IT budget is tight or where the money could be better spent on other projects. I think this will be particularly true in the corporate market, as it is much easier to enforce operating system security on corporate users (for example by preventing them from installing software and removing their local admin rights) than in the consumer market, where the user still has to be the master of his own machine in order to use it properly (to install additional software, for example). That is why, even in Vista, home users are still admins by default, albeit protected by a UAC prompt, which is, in my opinion, still not as good as a real non-admin account, but probably the best way Microsoft has found to slowly change the mentalities and habits of its users. But even regarding home users, I wouldn’t play too much with people’s trust: an anti-virus that lets malware through is simply failing its purpose in their eyes, and they may not renew their subscription if they experience that it made no difference at all and that the only solution was to bring their machine to a repair shop… or to their computer-literate friend.