
Is that thing Secure?

November 14, 2014

A colleague of mine just asked me whether WebView, the control shipped as part of the Windows 8.1 SDK, is Secure. His customer had expressed doubts about it, probably because of the serious issues that affected a similar component built on older technology (see: Microsoft Security Bulletin MS06-057 – Critical).

The interesting point here is not the specific issue: it is the concept of Security itself. A control like WebView builds upon a browser, Internet Explorer, to allow integrating web navigation within an application: this means that an application using the control inherits all the faults and issues of Internet Explorer, plus those of the control itself. On the other hand, the control is part of Products that are maintained over time by a Corporation that is very serious about Security (see: Life in The Digital Crosshairs), and it is used by many developers in many applications; therefore it will necessarily be more secure than anything the average Joe can cook up on his own.

So, is that thing Secure? I hate to say so, but… it depends. It depends on what you are trying to accomplish, on the characteristics of the data you are working with, on the abilities of your Team, on your budget, and on many other factors.

The sad truth is that Security is a rogue concept: it does not allow absolutes, and it wears down quickly. In other words, you have to settle for “Secure enough” and invest continuously in fighting bugs to keep your Application’s Security at an acceptable level.

“I made it! A wonderful Project finished on time and on budget! And I even did it by the book, closely following the dictates of the SDL and asking for validation from a panel of the most renowned Security Experts. It will be unbreakable!”

Who wouldn’t like to say those words? Well, I would for sure.

But wait: they carry a dangerous seed, the feeling of unbreakability. It may be true for a while, but it definitely becomes less and less so as time passes. This is a hateful truth, disliked by most stakeholders, but a truth nonetheless.

If you follow every best practice strictly, you can minimize the number and hopefully the severity of the vulnerabilities in your solution, but you will not be able to eradicate them all. Even if you did a 100% perfect job and removed every vulnerability from your own code (and that is about as close to impossible as it gets), you would still have to rely on third-party components and systems, like the O.S. you use to run your solution, and they will have vulnerabilities as well.

So you can hope to deter most attackers, but not the skilled and highly funded hackers that are so much in the news nowadays. Your attention to security will still delay them from getting at your data, though, hopefully for a long time.

So, what can you do with that knowledge? You could accept that someone will get past your protections; you could decide that your data is not worth all the effort and bow to the might of the attackers, giving up any defense (knowing that they will publicly laugh at you and thank you for your shortsightedness); or you could accept that you are human and plan for the breaches. Even if it may not be apparent, all three options are equally valid and acceptable, depending on the circumstances, because they are thoughtful answers to an important question. The only position I do not consider wise is the one taken without any musing, by instinct or by simple inaction.

By the way, it is only natural that over time attackers will find some crack in even your most perfect protection, no matter what you do: they have all the resources in the world for that, while you had only a limited amount, to be shared among the many tasks needed to complete your Project. For the sake of discussion, let’s consider the third option, the one where a plan is concocted for handling failures and attacks. Such a plan would describe how to detect an attack, how to respond to it in the urgency of the moment, and finally what to do to ensure that the same or similar vulnerabilities cannot be leveraged to violate your solution again. Planning for failure is also important while the Project is still in the Design phase: you will want to design your solution to be flexible enough, for example by applying concepts like Cryptographic Agility, which allows changing your algorithms and keys periodically and with little effort.
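
To make the idea of Cryptographic Agility a bit more concrete, here is a minimal sketch (in Python, rather than the Windows stack discussed above) of a password-hashing routine where the algorithm and iteration count come from a versioned registry and the version tag is stored with every record. The registry, version names and functions are hypothetical, an illustration of the principle rather than a prescription.

```python
import hashlib
import hmac
import os

# Hypothetical registry of approved password-hashing configurations.
# Tagging every stored record with the version used is what makes the
# scheme "agile": when an algorithm or iteration count is deprecated,
# a new entry is added and old records are re-hashed at the next login.
ALGORITHMS = {
    "v1": ("sha256", 100_000),
    "v2": ("sha512", 210_000),
}
CURRENT_VERSION = "v2"


def hash_password(password: str, version: str = CURRENT_VERSION) -> str:
    """Return 'version$salt$digest' so the algorithm can be swapped later."""
    name, iterations = ALGORITHMS[version]
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(name, password.encode(), salt, iterations)
    return f"{version}${salt.hex()}${digest.hex()}"


def verify_password(password: str, stored: str) -> bool:
    """Verify against whatever version the record was created with."""
    version, salt_hex, digest_hex = stored.split("$")
    name, iterations = ALGORITHMS[version]
    recomputed = hashlib.pbkdf2_hmac(
        name, password.encode(), bytes.fromhex(salt_hex), iterations
    )
    return hmac.compare_digest(recomputed, bytes.fromhex(digest_hex))


if __name__ == "__main__":
    record = hash_password("correct horse battery staple")
    print(record.split("$")[0])                                      # v2
    print(verify_password("correct horse battery staple", record))   # True
```

The same idea applies to ciphers, keys and protocols: as long as every stored artifact records which configuration produced it, migrating to a stronger one becomes routine maintenance instead of an emergency.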

You will also want to re-assess your application periodically, in light of the most recent types of attacks in the news. At some point the application will reach its natural end of life: security-wise, that is when the risk of falling victim to an attack becomes too high. As said initially, it is only normal that the security of an application wears out over time, as an effect of the accumulated attacks: the more attacks are performed, successful or not, the better the attackers get to know the application, and the greater the probability that someone finds an important vulnerability. Again, the wise thing is to plan for that: not planning for the replacement and retirement of an application is like accepting that your solution will be taken over in due time.

Right now, many organizations are considering or performing the migration of hundreds of their old applications from operating systems that are out of support or about to go out of support, like Windows XP and Windows Server 2003. Obsolescence also hits Line-of-Business Applications and should be taken into account: re-hosting them may not be the best thing to do, security-wise. In fact, it does not matter whether those applications are exposed to the Internet, nor how sensitive the data they handle is, because even intranet-only applications can be used as an intermediate step toward richer bounties.

So, when is a Project really finished? Well, when it is retired!