Archives For attackers

It is very interesting to understand how attackers work, and sometimes it is also scary to see how unprepared we are. This is an unbalanced war, and we are losing it.

Ransomware is on the rise, and it is more and more dangerous. But it is not the only problem. Many of my customers are totally unprepared, yet they claim they have never been compromised, apart from a couple of well-known incidents. No wonder, considering that their detection controls are in some cases totally ineffective.

Sometimes customers have no clue where their assets are or how they could be exploited. The most absurd thing to see is that many organizations have VIPs who do not tolerate the limitations imposed for Security reasons and who have the power to demand exemptions: as a result, those who have the highest value for the organization are sometimes the least protected!

Attackers already know all this and understand your business better than you do. They are going to find your weakest spots and hit them, hard. Many organizations are unable to see that coming, and even less able to respond properly.

FireEye’s incident response business further reports the mean “dwell time” for breaches in EMEA is 469 days, versus 146 globally.

Source: http://www.theregister.co.uk/2016/06/08/breach_trends_emea_mandiant/.

In other words, the average time an attacker remains undetected in a victim’s systems in EMEA is more than three times the world average (469 days against 146)!

We have to change this, and soon, and it all starts with adopting a more active stance toward Security. It is not a cost: it is a necessity!

David Ferbrache from KPMG describes the situation very well, and SC Magazine has an article about it that is both alarming and illuminating:

http://www.scmagazineuk.com/businesses-should-certainly-work-closer-with-law-enforcement-as-well-as-partners-in-the-cyber-security-marketplace-mark-hughes-bt/article/507418/

Enjoy!

Google Project Zero

February 15, 2015

Zero-Day Vulnerabilities are new Security issues found in software that can be exploited before whoever made the Software even knows about them.

Google has a project called Project Zero, which collects Zero-Day Vulnerabilities and notifies the maker of the Software, to allow it to fix those issues before it is too late. Google’s policy is to publish each vulnerability, with all the details needed to exploit it, 90 days after disclosing it to the owner of the software.

Very recently (see http://www.theregister.co.uk/2015/02/14/google_vulnerability_disclosure_tweaks/), Google modified its policy to grant some more time to fix the issues: it now publishes the vulnerability as soon as the owner publishes the fix, or when a grace period of up to 105 days or so expires.

I certainly welcome this softening of the policy, but is it enough?

It may be me, but I am sincerely puzzled by Google’s policy. In the real world, most organizations tend to delay applying fixes, even security ones; so, if Google publishes the details of a vulnerability as soon as the fix is published, complete with working samples of how to exploit it, it is only natural that an attacker will enjoy a window during which most systems are still unpatched. Who benefits from Google’s disclosure, then?

What should Google do, then? The issue is not whether to disclose vulnerabilities. I fully agree that it is better to disclose them, but why give full details? Would it not be better to give generic information about the issue and point to the fix, omitting the more practical details that could be leveraged even by the average Joe?

A Shared Responsibility

November 22, 2014

Applications are more and more often integrated with other applications. Clear examples are Social Networks like Facebook, Twitter and LinkedIn: it is very common to see links between them, as well as other applications integrating with Social Networks. This interconnection has become so important that it now involves many Enterprise applications as well.

The net result is that the relationships between applications define a network, where each of them plays a role that can be big or small depending on the application’s characteristics, but is an important role nevertheless.

This Network of Applications defines a new Internet, quite different from the one this all started with, and this new Internet is so interconnected and pervasive that it directly or indirectly includes a big part of many (most? all?) Enterprises’ infrastructures as well. We do live in the Cloud Era, don’t we?

This interconnected web reminds me of our brain and, by extension, of our body: not only because it is a clear parallel to the synapses, but also because it is subject to illness as well. The more I think about it, the more the dynamics of most current attacks show clear similarities with the propagation of a virus in an organic body: you start with a localized infection – a system or two are compromised – then it spreads to some adjacent systems and voilà! You have a serious illness that has gained control of the attacked body. This is very much how Advanced Persistent Threats proceed, and how attacks like the infamous Pass-the-Hash work. The idea behind those attacks is to get to the real prize one step at a time, without rushing, consolidating your position within the attacked infrastructure before someone detects you.

The main difference between the organic body and many pieces of this Network of Applications is that the latter have not yet developed the antibodies needed to detect the attacks, and are therefore even less able to vanquish them. This weakness allows compromising an entire Enterprise network starting from a single client and, through a series of patient intermediate steps, gaining access to strategic resources like the Domain Controllers.

A single weakness allows the first step; the others let the castle collapse.

If we extend those concepts to the whole Internet as a Network of Applications, it is clear that nowadays attackers have plenty of choices about how to attack and gain control of a system, even if that means starting from a distant vulnerable point. Target’s attackers started from a supplier, for example.

One of the principles of Security is that a system is only as secure as its “weakest link”. This saying implies that the system can be represented as a chain, where data is processed linearly. But what if you have a multi-dimensional reality, where each node can potentially talk to any other one? You rapidly get a headache… and a big opportunity for any potential attacker.
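
To make the difference with the chain metaphor a little more tangible, here is a minimal sketch (in Python; the application names and links are purely illustrative, not taken from any real environment) that treats a handful of interconnected applications as a graph and lists everything an attacker could reach, one patient hop at a time, after compromising a single node.

```python
from collections import deque

# A toy Network of Applications: each entry lists the systems an application
# can talk to (integrations, trust relationships, shared credentials).
connections = {
    "supplier-portal": ["hr-app", "crm"],
    "hr-app": ["file-server"],
    "crm": ["file-server", "mail-gateway"],
    "file-server": ["domain-controller"],
    "mail-gateway": [],
    "domain-controller": [],
}

def reachable_from(entry_point):
    """Breadth-first walk: every system reachable from one compromised node."""
    seen = {entry_point}
    queue = deque([entry_point])
    while queue:
        current = queue.popleft()
        for neighbour in connections.get(current, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# One compromised, supplier-facing node exposes everything it can reach,
# directly or indirectly, including the most strategic resources.
print(sorted(reachable_from("supplier-portal")))
# ['crm', 'domain-controller', 'file-server', 'hr-app', 'mail-gateway', 'supplier-portal']
```

Even in this toy model there is no “chain” to speak of: what matters is reachability, and a supplier-facing application sits only a few hops away from the crown jewels.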

In this scenario, there is only one feasible answer to the quest for Security: every actor must consider creating secure Applications, and maintaining their security over time, a personal responsibility toward its customers, toward its peers, toward the Internet as a whole and toward itself.

Do you need a server? Perhaps one hosted by an important Corporation? No problem, there is a service for that (no, not an App)… a service provided by Hackers.

Drupal servers have very recently been compromised (see: Attackers Exploit Drupal Vulnerability) and sold to other malicious actors. The hilarious part is that the attackers even patched the compromised systems, in their case to protect them against further attacks, effectively doing a better job than the actual Administrators.

This is not news, I freely admit it: it has happened in the past and will happen again and again. Nevertheless, I find it quite hilarious, because attackers sometimes demonstrate great entrepreneurial spirit and technical ability, sometimes even better than their victims’. Like those hackers who offered a guarantee to replace a compromised server with another one, if the one assigned to you had been cleaned in the meantime or had any other problem (see: Service Sells Access to Fortune 500 Firms).

Many customers I have seen in the past are affected by a conflict between the Developers and the IT Department.

In most situations, this is caused by the IT Department’s need to impose restrictions on what the Developers can do, to simplify its management activities and to limit the consequences that a misbehaving Application could have on what is already hosted in the same environment. Developers hardly ever participate in defining those restrictions: they are mainly subject to them; sometimes this leads to big issues, because those restrictions get in the way of the Developers’ job without being really understood by them.

The net result of this situation is that Developers try to get as much control over their own Applications as possible, against the will of the IT Department. It is all too common in these scenarios for Developers to try to trick the IT Department, for example to gain direct access to the Application logs. What is more important than acting rapidly when a critical business process fails badly? When this happens, Developers simply cannot afford to ask the IT Department for each single piece of information needed to discover what happened; therefore, if the IT Department does not allow direct access to the data and the systems, they consider opening doors to get to that data nonetheless. This is only one example of how Developers try to achieve their goals by exploiting holes in the IT Department’s guard, without thinking of the consequences. In a sense, this is like hacking your own system by adding back doors. Not a very smart thing to do, is it? Particularly if you consider that the bad guys could benefit from those back doors as well.

At the end of the day, each side has its own reasons and considers them more relevant than the other’s, but both do their Company a disservice when they do not team up from the very start of the project to define how the Application should behave to be a good citizen of the hosting environment, while still satisfying all its requirements. It is far better to talk openly from the start than to begin (or continue) a cold war that ultimately benefits only those who attack your systems and applications.

“I made it! A wonderful Project finished on time and on budget! And I even did it by the book, closely following the dictates of the SDL and asking for validation from a panel composed of the best-known Security Experts. It will be unbreakable!”

Who wouldn’t like to say those words? Well, I would for sure.

But wait: they carry a dangerous seed, the feeling of unbreakability. It could be true for a while, but it definitely becomes less and less so with the passing of time. This is a hateful truth, disliked by most stakeholders, but a truth anyhow.

If you follow every best practice strictly, you can minimize the quantity and, hopefully, the severity of the vulnerabilities in your solution, but you will not be able to eradicate all of them: in fact, even if you perform a 100% perfect job and remove every vulnerability in your code – and that is about as close to impossible as it gets – you still have to rely on third-party components and systems, like the O.S. you use to run your solution, and they will have vulnerabilities as well.

So, you can hope to deter most attackers, but not the skilled and highly funded hackers who are so much in the news nowadays. Your attention to security will delay their access to your data, though, hopefully for a long time.

So, what can you do with that knowledge? You could accept that someone will get past your protections; you could decide that your data is not worth all the effort and bow to the might of the attackers, giving up any defense (knowing that they will publicly laugh at you and thank you for your shortsightedness); or you could accept that you are human and plan for the violations. Even if it may not be so apparent, all three options are equally valid and acceptable, depending on the circumstances, because they are thoughtful answers to an important question. The only position I do not consider wise is the one taken without any musing, by instinct or by simple inaction.

By the way, it is only natural that over time those attackers will find some crack in your most perfect protection, no matter what you do: they have all the resources in the world to do that, while you had only a limited amount to share among the many tasks needed to complete your Project. For the sake of discussion, let’s consider the third option, the one where a plan was concocted for handling failures and attacks. It would describe how to detect an attack, how to respond to it in the urgency of the moment, and finally what to do to ensure that the same or similar vulnerabilities cannot be leveraged to violate your solution again. Planning for failure is also important while the Project is in the Design phase: you want to design your solution to be flexible enough, for example by applying concepts like Cryptographic Agility, which allows changing your algorithms and keys periodically and with little effort.
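
Just to make Cryptographic Agility a little more concrete, here is a minimal Python sketch (the scheme registry, its labels and the iteration counts are invented for the example): each stored password hash records which scheme produced it, so algorithms and parameters can be strengthened later without breaking verification of the data already on disk.

```python
import hashlib
import hmac
import os

# Hypothetical registry of hashing schemes; adding a stronger one later is a small, local change.
SCHEMES = {
    "pbkdf2-sha256-100k": {"hash_name": "sha256", "iterations": 100_000},
    "pbkdf2-sha512-300k": {"hash_name": "sha512", "iterations": 300_000},
}
CURRENT_SCHEME = "pbkdf2-sha512-300k"  # the only place to touch when rotating schemes

def hash_password(password, scheme_id=CURRENT_SCHEME):
    """Return 'scheme$salt$digest', so every record says how it was produced."""
    params = SCHEMES[scheme_id]
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        params["hash_name"], password.encode(), salt, params["iterations"]
    )
    return f"{scheme_id}${salt.hex()}${digest.hex()}"

def verify_password(password, stored):
    """Verify against whatever scheme the stored record was created with."""
    scheme_id, salt_hex, digest_hex = stored.split("$")
    params = SCHEMES[scheme_id]
    candidate = hashlib.pbkdf2_hmac(
        params["hash_name"], password.encode(), bytes.fromhex(salt_hex), params["iterations"]
    )
    return hmac.compare_digest(candidate, bytes.fromhex(digest_hex))

# Old records keep verifying, new records use the current scheme, and legacy
# entries can be re-hashed transparently at the next successful login.
legacy = hash_password("s3cret", "pbkdf2-sha256-100k")
assert verify_password("s3cret", legacy)
assert verify_password("s3cret", hash_password("s3cret"))
```

The same idea extends to encryption keys and algorithms: as long as every stored artifact records which scheme produced it, a rotation becomes a data migration rather than a redesign.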

You will also want to re-assess your application periodically, in light of the most recent types of attacks in the news. At some point the application will reach its natural end of life: security-wise, that is when the risk of falling victim to an attack becomes too high. As said initially, it is only normal that the security of an application wears out over time, as an effect of the accumulated attacks – the more attacks are performed, successful or not, the better the application is known to attackers and the greater the probability that someone finds an important vulnerability. Again, the wise thing is to plan for that: not planning for the replacement and retirement of an application is like accepting that your solution will be taken over in due time. Right now, many organizations are considering or performing the migration of hundreds of their old applications off O.S. that are out of support or about to go out of support, like Windows XP and Windows Server 2003. Obsolescence also hits Line-of-Business Applications and should be taken into account: re-hosting them may not be the best thing to do, security-wise. In fact, it does not matter whether those applications are exposed to the Internet, nor how sensitive the data they handle is, because even intranet-only applications can be used as an intermediate step toward richer bounties.

So, when is a Project really finished? Well, when it is retired!