Archives For October 2014

Many customers I have worked with in the past are affected by a conflict between Developers and the IT Department.

In most situations, this is caused by the need for the IT Department to impose restrictions on what the Developers can do, to simplify its management activities and to limit the consequences of misbehaving Applications on whatever is already hosted in the same environment. Developers hardly participate in defining those restrictions: they are mainly subject to them. Sometimes this leads to big issues, because those restrictions get in the way of the Developers’ job without being really understood by them.

The net result of this situation is that Developers try to get as much control of their own Applications as possible, against the will of the IT Department. It’s all too common in these scenarios for Developers to try and trick the IT Department, for example to gain direct access to the Application logs. What is more important than acting rapidly when a critical business process fails badly? When this happens, Developers simply cannot afford to ask the IT Department for each single piece of information needed to discover what happened, and therefore, if the IT Department does not allow direct access to the data and the systems, they consider opening doors to get to that data nonetheless. This is only one example of how Developers try to achieve their goals by exploiting the holes in the IT Department’s guard, without thinking about the consequences. In a sense, this is like hacking your own system by adding back doors. Not a very smart thing to do, is it? Particularly if you consider that the bad guys could benefit from those back doors as well.

At the end of the day, each side has its own reasons and considers them more relevant than the other’s, but both do their Company a disservice when they do not team up from the very start of the project to define how the Application should behave to be a good citizen of the hosting environment, while still satisfying all its requirements. It is far better to talk openly from the start than to begin (or continue) a cold war that ultimately benefits only those who attack your systems and applications.

The difference between Compliance and Security can be less clear than one would expect. This is very understandable, because some Compliance Certifications are all about Security. Consider, for example, the Payment Card Industry Data Security Standard (PCI/DSS): this is an industry standard defined by a Consortium led by the most important Credit Card issuers, born to ensure the Security of Credit Card data by defining compliance requirements binding every party involved in managing this data, during and after the processes required to perform Credit Card transactions.

The need is clear: what is a more obvious target for malicious people than money? So, it is only natural that the Companies issuing Credit Cards require a high level of security from anyone who is supposed to handle credit card data. This has been the origin of PCI/DSS, both as a Security Standard and as a Compliance requirement.

So, being compliant with PCI/DSS would mean being Secure, wouldn’t it? Well, unfortunately not.

The fact is that Compliance is only a first step: it helps avoid some obvious mistakes by leveraging the experience gained over time by other people in the field, but it is not a guarantee. In fact, attackers are not limited to the scope of what the standards dictate: they can search for additional mistakes and vulnerabilities. Let’s see some examples.

Target is a major retailer in the U.S.: it has seen its point-of-sale (POS) systems compromised by a group of attackers from Russia. The weakness, for Target, was giving too much access to a supplier (see Fridge vendor pegged as likely source of Target breach): the attackers compromised the supplier first and gained full access to Target’s POS systems as a result. The final outcome was that a huge number of credit card numbers were stolen, and the credibility of the chain collapsed so badly that the CEO had to resign (see: Target CEO Gregg Steinhafel Resigns In Data Breach Fallout).

More recently, a restaurant chain in the U.S., Jimmy John’s Sandwich Shop, detected an attack on its POS systems too, based on vulnerabilities found in its terminals, which are provided by Signature Systems (see: Signature Systems Breach Expands).

Finally, only a few days ago Staples confirmed an attack on some of its POS systems, with a number of credit card numbers stolen (see: Staples is investigating a potential issue involving credit card data).

Surely enough, all the organizations above thought they were safe because of the good security practices they had in effect and because they were Compliant. This self-assurance has ultimately been to no avail, though: the proof is that they have fallen to persistent attackers.

This is such a common situation that attackers are focusing their attention specifically on retail websites: a recent study by Imperva’s Application Defense Center group, on a set of 99 applications protected by their Web Application Firewalls, has shown that 48% of the attacks from August 2013 to April 2014 targeted retail websites, while in the same timeframe 10% of the attacks targeted financial institutions (see: Retail applications hit hardest, Web Application Attack Report indicates).

So, all those regulations should help avoid those threats, but the sad truth is that they can only do so much. On one hand, they cannot be updated very frequently, because large organizations tend to embrace change only at a slow pace; on the other hand, security also depends on the specifics of the given solution. In other words, a regulation imposed by a third party needs to be applicable to many contexts: it therefore tends to cover as much as possible, but not everything, leaving out what is less common.

Speaking of which, I remember a customer I worked for some years ago. His company routinely engaged a Penetration Testing company to check their public-facing applications: in doing so, they were Compliant with an internal regulation. “All greens!”, he proudly told me, was the latest result. Well, after a brief discussion that lasted no more than half an hour, I discovered an important vulnerability in the design of their solution.

The moral is that to be really safe it is better to consider Compliance a starting point, not a goal, and to design and implement Security assuming that violations are a fact of life: we should simply work toward giving attackers the hardest possible time and toward limiting the (bad) effects of successful violations as much as possible.

George Orwell wrote 1984 as a science fiction book disguising a strong criticism of the tendency of the old Warsaw Pact Countries to spy on their own people. He could not have foreseen that it would become a pale description of what happens today.

It seems that there are many groups out there, spying on people selected in very interesting ways: it may be very difficult to prove the allegiance of those hackers to specific Governments, but the suspicion is strong. For example, very recently a malware targeting Hong Kong protesters was discovered (see: Malware program targets Hong Kong protesters who use Apple devices).

It is perhaps even more worrying that Colleges and Schools in the U.S. have started spying on their students. The author of the study highlights how this behavior could lead students to become accustomed to being spied upon.

And it is a known fact that some Governmental Organizations (read: the NSA) spy on and infiltrate foreign Countries, even friendly ones (see: Core Secrets: NSA Saboteurs in China and Germany), and foreign Companies, especially in the telecommunications sector. Their goal is both to collect information and to undermine the ability to protect conversations, by weakening the encryption systems those targets use.

The latest chapter of this story has been written by iSIGHT Partners, which discovered a vulnerability in Windows – patched yesterday – that has been used by a team of hackers from Russia to attack NATO, the Ukrainian Government, some strategic targets in Europe and a U.S. academic organization (see: iSIGHT discovers zero-day vulnerability CVE-2014-4114 used in Russian cyber-espionage campaign). As in other cases, it is very difficult to identify who is behind those attacks, but the targets and the source of the attacks are suspicious enough.

No doubt about it: we live in scary times… or times full of opportunities, depending on how you look at it.

“I made it! A wonderful Project, finished on time and on budget! And I even did it by the book, closely following the dictates of SDL and asking for validation from a panel composed of the best-known Security Experts. It will be unbreakable!”

Who wouldn’t like to say those words? Well, I would for sure.

But wait, those words carry a dangerous seed: the feeling of unbreakability. It may be true for a while, but it definitely becomes less and less so with the passing of time. This is a hateful truth, disliked by most stakeholders, but a truth anyhow.

If you follow every best practice strictly, you can minimize the quantity and hopefully the severity of the vulnerabilities in your solution, but you will not be able to eradicate every one of them. In fact, even if you perform a 100% perfect job, removing all the vulnerabilities from your code – and this is about as close to impossible as it gets – you will still have to rely on third-party components or systems, like the O.S. you are using to run your solution, and they will have vulnerabilities as well.

So, you can hope to deter most attackers, but not the skilled and highly funded hackers that are so much in the news nowadays. Your attention to security will delay them from accessing your data, though, hopefully for a long time.

So, what can you do with that knowledge? You could decide to accept that someone will get past your protections; you could decide that your data is not worth all the effort and bow to the might of the attackers, giving up any defense (knowing that they will publicly laugh at you and thank you for your shortsightedness); or you could accept that you are human and plan for the violations. Even if it may not be so apparent, all three options are equally valuable and acceptable, depending on the circumstances, because they are thoughtful answers to an important question. The only position I do not consider wise is one undertaken without any musing, by instinct or by simple inaction.

By the way, it’s only natural that over time attackers will find some crack in your most perfect protection, no matter what you do: they have all the resources in the world to do so, while you had only a limited amount, to be shared among the many tasks needed to complete your Project. For the sake of discussion, let’s consider the third option, the one where a plan was prepared for handling failures and attacks. It would contain a description of how to detect an attack, how to respond to it in the urgency of the moment, and finally what to do to ensure that the same or similar vulnerabilities will not be leveraged to violate your solution again. Planning for failure is also important when the Project is in the Design phase: you will want to design your solution to be flexible enough, for example by applying concepts like Cryptographic Agility, which allows changing your algorithms and keys periodically and with little effort.
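To make Cryptographic Agility concrete, here is a minimal sketch (in Python, with hypothetical names – the original post contains no code) of one common way to apply it: every signed token carries a version prefix naming the algorithm and key that produced it, so a new algorithm or key can be rolled in without breaking the verification of older tokens.

```python
import hmac
import os

# Hypothetical registry mapping a version tag to the MAC algorithm and key
# in use. Rolling in "v3" later (new hash, new key) requires no change to
# already-issued tokens: the tag tells the verifier how to check each one.
# Real keys would come from secure storage, not be generated at startup.
ALGORITHMS = {
    b"v1": ("sha256", os.urandom(32)),  # old scheme, kept for verification only
    b"v2": ("sha512", os.urandom(32)),  # current scheme, used for new signatures
}
CURRENT = b"v2"

def sign(message: bytes) -> bytes:
    """Sign with the current algorithm, prefixing the version tag."""
    digestmod, key = ALGORITHMS[CURRENT]
    tag = hmac.new(key, message, digestmod).digest()
    return CURRENT + b"." + tag

def verify(message: bytes, token: bytes) -> bool:
    """Verify against whatever algorithm the token's version tag names."""
    version, _, tag = token.partition(b".")
    if version not in ALGORITHMS:
        return False  # retired or unknown version: reject outright
    digestmod, key = ALGORITHMS[version]
    expected = hmac.new(key, message, digestmod).digest()
    return hmac.compare_digest(tag, expected)
```

Retiring a compromised algorithm or key then amounts to deleting its entry from the registry: everything signed with it stops verifying, which is exactly what you want.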

You will also want to re-assess your application periodically, in light of the most recent types of attacks in the news. At some point the application will reach its natural end of life: security-wise, that is when the risk of falling to an attack becomes too high. As said initially, it’s only normal that the security of an application wears out over time, as an effect of the accumulated attacks – the more attacks are performed, successful or not, the better the application becomes known to the attackers and the greater the probability that someone will find an important vulnerability. Again, the wise thing is to plan for that: not planning for the replacement and retirement of an application is like accepting that your solution will be taken over in due time. Right now, there are many organizations that are considering or performing the migration of hundreds of their old applications from operating systems that are out of support or about to go out of support, like Windows XP and Windows Server 2003. Obsolescence hits Line-of-Business Applications too and should be taken into account: re-hosting them may not be the best thing to do, security-wise. In fact, it does not matter whether those applications are exposed to the Internet, nor how sensitive the data they handle is, because even intranet-only applications can be used as an intermediate step toward richer bounties.

So, when is a Project really finished? Well, when it is retired!

Computing means processing Data. Therefore, it is only natural that Data Protection is one of the most important topics when you discuss the Security of an Application, regardless of how it is implemented, where it is hosted and how it is maintained.

Protecting Data is like storing it in a safe: you have to choose the type of protection, and you get a key, that is, a token granting access to that Data only to you and to the users who are allowed to access it. This key could take the form of an Identity, of a cryptographic key or of anything else; this is not really important for the sake of our discussion.

The key is very important, even if not as important as the Data itself. Its importance is not due to anything intrinsic in it: you could safely discard a key under controlled circumstances, and no one would complain. It is important only because it allows access to the Data; sometimes, the key is the only thing that keeps attackers away from your Data.

So, you will want to keep your key safe as well. And this is the really hard part: how can you protect it without involving another key, and then another key, and then yet another key? Sometimes you can leverage services offered by the Operating System, like Microsoft Windows Data Protection API (DPAPI), or store the keys in dedicated hardware, like Hardware Security Modules (HSMs). But really, this is only part of the answer. Even if you find a way to protect your keys effectively, you need a way to provide them to every instance of your Application, if it is installed on multiple servers in multiple locations, and to be ready to replace the keys – and discard the old ones – when (not if!) they are suspected of having been compromised, or for any other reason.

It is all too common not to plan for Key Management tasks like these. In my experience, I have found more than a couple of customers who had not planned for them: typically they placed the keys in code, as strings or as resources, without planning for when they would need to be changed. The net result is that they are never changed, and sometimes they are the very same in the Development and Production environments. The only thing worse than that is not putting in place ways to understand when you are under attack and therefore need to change the keys. It goes without saying that those customers were more than happy with the situation, because they had no clue whether they were actually under attack.
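As a contrast to keys hard-coded as strings, here is a minimal sketch of what planned Key Management could look like. The names are hypothetical (`APP_KEYS` and `APP_CURRENT_KEY` are illustrative environment variables; a real deployment would use DPAPI, an HSM or a vault service rather than the environment): keys live outside the code, each one has an identifier, and old keys can be kept for verification only until they are retired.

```python
import hmac
import json
import os

def load_keys():
    """Load the key ring: a JSON object mapping key id -> hex-encoded key.
    Keys live outside the code, so rotating them needs no redeployment."""
    raw = json.loads(os.environ["APP_KEYS"])
    return {kid: bytes.fromhex(value) for kid, value in raw.items()}

def sign(message: bytes):
    """Sign with the key currently designated for new signatures."""
    kid = os.environ["APP_CURRENT_KEY"]
    tag = hmac.new(load_keys()[kid], message, "sha256").digest()
    return kid, tag

def verify(message: bytes, kid: str, tag: bytes) -> bool:
    """Verify against the named key; a key removed from the ring
    (e.g. after a suspected compromise) makes old signatures invalid."""
    keys = load_keys()
    if kid not in keys:
        return False
    return hmac.compare_digest(tag, hmac.new(keys[kid], message, "sha256").digest())
```

Rotation then becomes a routine operational task: add a new key id to the ring, point the current-key setting at it, and drop the old id once everything signed with it has expired – exactly the plan those customers were missing.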

Much better to Assume Breach and act accordingly.

This is probably the biggest question nowadays: should I jump on the latest hyped technology or wait a little bit longer? A very common question, and probably a difficult one to answer.

Let’s face it: it’s scary out there. When you surrender your own and your Customers’ data to a third party, typically to be hosted in a different Country, it’s only natural to wonder whether it is trustworthy. Even worse, the danger could come from unexpected sources: you have to fear not only the Administrators appointed by the Cloud Provider to manage your data – a common enough nightmare – but also other Customers like you, using the same services. For example, very recently Amazon and Rackspace were compelled to reboot a number of their systems to patch a vulnerability in the Hypervisor technology they have embraced, Xen. The vulnerability would have allowed an application running in a guest Virtual Machine to crash the host or even to read its memory: this would have led to reading the memory of any of the other guests running on the same Server (see: Xen hypervisor found wanting in security).

So, what to do about that? Fear alone does not seem to keep people from publishing data on the Internet. On the contrary, the number of people freely sharing their own data is growing by the day: Facebook, Twitter and LinkedIn come to mind as clear examples of this trend. Awareness of the risks involved in sharing data is also increasing: frauds big and small, phishing and spying are everyone’s concern. Who doesn’t know about Heartbleed? Bing shows 32,400 results right now: that’s quite a common term, considering that it was discovered only in April 2014. Shellshock is even more impressive: in less than one month, it has accumulated references by the millions! Most assuredly, not every reference refers to the actual bug, but those are impressive numbers nevertheless.

But wait, is any of this news at all? Is it unheard of that there are bugs in code? Surely not. The first time an organization published a page on a network, it opened a door for remote attacks. For sure, money attracts the attention of malevolent people, and this is even truer for the Cloud, because it can be at the same time a tool to perform misdeeds and a huge treasure chest, ripe for the picking. But this is also true when you publish your application on the Internet or when you give your data to an Outsourcer.

So, the issue is not the Cloud. Microsoft Azure, Amazon AWS and their cousins are only the most visible targets. Someone could say that they pose an additional risk, because they are so much in the news, but it’s arguable that you are safer not using them. The fact is that there are many reasons why any organization could be a target: hackers searching for gain, by harassing you or your customers – you could be only a step in a greater attack – national agencies (the NSA comes to mind) or even disgruntled employees. The sad truth is that most organizations are targets of attacks and only some are aware of it, because most do not have the right tools to understand the risk and identify attacks in a timely manner. For example, a customer of mine some time ago accidentally discovered a violation of its on-premises Data Center, because one of the servers restarted without any apparent reason: the hackers had long been maintaining the compromised servers, installing software at will. This is not an isolated case: in the literature you can find similar incidents by the thousands, and the list grows by the day. Some of the most recent and famous violations are related to names like Target and Signature Systems.

So, the Cloud is not the issue; in fact, the Cloud can be part of the answer. It is common knowledge that the security of a system is determined by its least secure part. Cloud Providers make a point of managing their systems by the book, and therefore they are (or should be) able to provide the most secure infrastructure (see: Microsoft Datacenter Tour (long version)). They are continuously the target of attacks, but this keeps them vigilant and able to react promptly. They also strive to improve their security, trying to stay a step ahead of the bad guys. Surely, they are the target of more attacks than anyone else, considering that they are attacked not only as infrastructure providers but also because they host their customers’ data and applications. Nevertheless, this should not necessarily be considered a downside of the Cloud Providers, because it requires them to maintain top-notch security over time. Can we, simple mortals, hope to achieve that level of security in our own Data Centers without investing a huge amount of resources?

But securing the infrastructure is hardly enough. With all the investments made in securing Operating Systems and Off-the-Shelf Applications, in the past and continuing today, attacks are focusing on custom Line-of-Business Applications. For example, with the adoption of a Security-oriented SDLC, Microsoft’s own Security Development Lifecycle (SDL), the number of vulnerabilities discovered in the 3 years after RTM dropped by a sound 91% between two adjacent versions of SQL Server (see: Benefits of the SDL). Surely, ensuring that our applications are secure is not something achieved without a cost, but it is something we should consider due in every project. Every Business Critical application should be developed with steps to ensure that it is Secure and that the data it manages is safe. In my experience, it is all too common for Security to be taken as a given: something you want to be there but that you are not willing to pay for – and, as a result, will not get. The typical behavior is to handle incidents after the fact, when unimaginable damage has already been done and rushed damage recovery actions have to be performed.

Building your solution on the Cloud is like basing your next building on strong foundations made by the best experts in the field. If you then adopt sloppy methodologies on your part, the house will inevitably collapse under its own weight and the inclemency of the weather; but if you use sound methodologies like SDL, you will build a construction that is strong and safe from its foundations to the roof.

And naturally, you will want to maintain it to ensure its safety over time… but this is a topic for another post.