The big surprise I hinted at in my Restarting article is out!

It is a new tool that complements the workflow of Microsoft Threat Modeling Tool 2016 by providing features specifically designed to optimize the Mitigation experience.

The efficiency gains can be really huge, depending on the complexity of the model (the more complex, the better!), on the template and on the maturity of the organization: an estimate done with the standard template suggests a potential improvement of 60% or more!

I have done everything I could to provide you with the best possible solution, given my limited resources: this is a project I have developed in my spare time. So, please, any constructive feedback would be much appreciated.

The details have been collected on a dedicated page, called The Threats Manager Tool, which can also be accessed from the menu at the top of my Blog site.

And the best thing is… that it is entirely free!

Enjoy!

It is very interesting to understand how attackers work, and sometimes it is also scary to see how unprepared we are. This is an unbalanced war, and we are losing it.

Ransomware is on the rise, and it is more and more dangerous. But it is not the only problem. Many of my customers are totally unprepared, yet they say that they have never been compromised, except for a couple of well-known incidents. No wonder, considering that their detection controls are in some cases totally ineffective.

Sometimes customers have no clue where their assets are or how they can be exploited. The most absurd thing is that many organizations have VIPs who do not tolerate the limitations imposed for Security reasons and who have the power to demand exemptions: as a result, those with the highest value to the organization are sometimes the least protected!

Attackers already know all this and understand your business better than you do. They are going to find your weakest spots and hit them, hard. Many organizations cannot see that coming, let alone respond properly.

FireEye’s incident response business further reports the mean “dwell time” for breaches in EMEA is 469 days, versus 146 globally.

Source: http://www.theregister.co.uk/2016/06/08/breach_trends_emea_mandiant/.

In other words, in EMEA the average time an attacker remains undetected in a victim’s systems is more than 3 times the World average!
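A quick sanity check of that claim (my own arithmetic, using the figures quoted above):

\[
\frac{469 \text{ days (EMEA)}}{146 \text{ days (global)}} \approx 3.2
\]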

We have to change this, and soon, and it all starts with adopting a more active stance toward Security. It is not a cost: it is a necessity!

David Ferbrache from KPMG describes the situation very well, and SC Magazine has an article about it that can be both alarming and illuminating: 

http://www.scmagazineuk.com/businesses-should-certainly-work-closer-with-law-enforcement-as-well-as-partners-in-the-cyber-security-marketplace-mark-hughes-bt/article/507418/

Enjoy!

Restarting

June 26, 2016 — Leave a comment

Life prepares many surprises for you. You can plan your life as accurately as humanly possible, but you will eventually need to reconsider your plans.
It is not necessarily a bad thing, though. And in my case, I trust it has been for the better.
After I moved from the Consulting organization within Microsoft to Proactive Support, to focus 100% of my time on Security, I planned to improve all my initiatives, blog included. I soon discovered that I needed to dedicate myself entirely to coping with my deficiencies around Infrastructural Security and to meeting the already challenging goals my new organization defined for me, plus the goals I defined for myself, which mostly amount to doing whatever I can to expand the importance of Application Security within Microsoft.
This has meant much additional work for me, but also some very important successes, like achieving the CSSLP certification (finally!) and recognitions like being made WW Lead of the Security Development Lifecycle Community within Microsoft, side by side with my good friend Kiyoshi Watanabe, an Application Security expert from Japan.
New duties mean more things to do and less time for other things, as I discovered last year. This is why I have written nothing since I switched: I had no juice left to think about anything but the most essential things.
Now it is different: my goals for the new year are more oriented toward the community of Application Security practitioners inside and outside Microsoft, because I feel it is critical for the world I live in and for my own future.
Stay tuned for new content and (possibly) a very big surprise, soon. 🙂

Security is really a matter of Passion: the topic is so huge and evolves so fast that you have to commit yourself fully to it if you want to do it properly. So, it is very important to build a solid foundation on which to grow and expand your knowledge.

Security is also a Community thing: you have to be fully connected if you want to stay up to date and to build the trust around you that you need to do your job. And one of the most important things, if you want to be part of a Community, is to know its “Lingo”.

This is precisely what (ISC)2 is and what it does. First and foremost, it is a Community of Security Professionals, which collects common knowledge around the main Security Topics. It defines some of the most recognized Security Certifications, like CISSP and SSCP, and collaborates with many Security Organizations to provide continuous training to its members.

One of those certifications is particularly relevant to Software Development: CSSLP. I have studied for it and I am in the process of obtaining the certification, so I have developed some strong feelings about it and about the various tools available to achieve it.

CSSLP is currently in its second incarnation and is composed of 8 Domains, as described on the (ISC)2 site.

The first incarnation lacked the last Domain.

All in all, this represents a fully holistic approach to Software Development, based on proven concepts and tools (many from Microsoft’s own SDL!), and it provides a very good overview of the main topics to be considered by Architects and Software Developers. There are also some key concepts that I have seen exposed so clearly here for the first time, like the idea that software behaves like cheese: after some time it stinks and you have to replace it! In other words, you have to plan for its retirement even before it is released! The 7th Domain specifically discusses this concept.

So, it is really key to study for CSSLP even if you are not planning to certify, because it gives you some important tools for understanding what needs to be done.

Speaking of which, the next question is: how do you study for CSSLP?

In my quest for the certification I have come across a few tools, which I am going to discuss briefly here; I will probably expand on some of them in the near future.

First of all, there is the Official (ISC)2 Guide to the CSSLP CBK, Second Edition: this is the official book from (ISC)2 and it is a good starting point. I would say that the quality is average: I have found some inaccuracies, and some parts are oversimplified.

A better reference could be the CSSLP Certification All-in-One Exam Guide. This is an unofficial book covering the original 7 Domains. I have read most of it and its content is really good, but its lack of coverage of the last Domain is a pity. I would recommend buying it as your first book if you do not want to certify for CSSLP; otherwise, it is a good complement to the official book.

There are also additional tools for bigger budgets: the first one I would get, if you can afford it, is the Security Compass CSSLP Training. This is a comprehensive course on every Domain of CSSLP, in CBT form: it is very convenient and its length feels right, at around 10 hours. I have completed it and I can say that it contains good material, well explained and fully understandable; now and then there are some simple exercises to test your knowledge. Even though the course is definitely mature, there are some glitches, but they are fixed regularly and support is quick to help if needed. Still, this Security Compass training cannot be considered a complete solution for certification. First of all, the exercises are not nearly enough to get a feel for the exam: it would be great if Security Compass supplemented them with sample questions simulating the actual certification. Secondly, I would have liked the ability to download the course material to consume it offline: this is not possible.

Speaking of test simulation, fortunately there are a couple of tools provided by (ISC)2 that let you enjoy some actual questions.

The first one is a good solution and provides up to 300 real questions – not questions taken verbatim from the exam, but questions that have been used in the past or that are very similar to actual exam questions – but it comes with a cost. The iOS App is much cheaper but provides a very limited set of questions.

Last but not least, you could attend some classroom or online Training (a recent addition to the (ISC)2 offering), but this comes at a greater cost and takes a toll on your schedule.

Concluding this roundup, I can definitely say that (ISC)2 certifications are a really good opportunity to enter the Security Community through the front door, to gain credibility, and to acquire some very good tools and reasons to keep yourself up to date and committed to Security.

Changes

April 24, 2015 — Leave a comment

When someone offers you the opportunity to follow your passions, there is only one possible question: where do I sign?

This is what happened to me.

The net result is that I have left my old role as an Architect for Microsoft Consulting Services to join Microsoft Premier, and my role now is Senior PFE Security. Security is the keyword, the passphrase that hooked me. Now I will be able to fully exercise and improve my knowledge of Security in general, to help Microsoft Customers use Microsoft Products, Technologies and Processes (like SDL) safely and correctly.

I know that my Blog has been a little too silent in the last couple of months. Expect this to change in the near future.

Google Project Zero

February 15, 2015 — Leave a comment

Zero Day Vulnerabilities are new Security Issues found in software that can be exploited even before whoever made the Software knows about them.

Google has a project called Project Zero, which collects Zero Day Vulnerabilities and notifies the maker of the Software so that it can fix those issues before it is too late. Google’s policy is to publish the vulnerabilities, with all the details needed to exploit them, 90 days after disclosing them to the owner of the software.

Very recently (see http://www.theregister.co.uk/2015/02/14/google_vulnerability_disclosure_tweaks/), Google modified its policies to grant some more time to fix the issues: it now publishes a vulnerability as soon as the owner publishes the fix, or when a grace period of up to 105 days or so expires.

I surely welcome this softening of the policy, but is it enough?

It may be just me, but I am sincerely puzzled by Google’s policy. In the real world, most organizations tend to delay applying fixes, even security ones; so, if Google publishes the details of a vulnerability as soon as the fix is published, even with working samples of how to exploit it, it is only natural that an attacker will enjoy a grace period while most systems are still unpatched. Who benefits from Google’s disclosure, then?

What should Google do, then? The issue is not whether to disclose the vulnerabilities or not. I fully agree that it is better to disclose them, but why do they have to give full details about the vulnerabilities? Would it not be better to give generic information about the issue and point to the fix, omitting the more practical details that could be leveraged even by the average Joe?

A Shared Responsibility

November 22, 2014 — Leave a comment

Applications are more and more often integrated with other applications. Clear examples are Social Networks like Facebook, Twitter and LinkedIn: it’s very common to see links between them, as well as other applications integrating with Social Networks. This interconnection is so important that it now involves many Enterprise applications as well.

The net result is that the relationships between applications are defining a network, where each of them takes a role that can be big or small depending on the application’s characteristics, but an important role nevertheless.

This Network of Applications defines a new Internet, quite different from what it was when all this started, and this new Internet is so interconnected and pervasive that it directly or indirectly includes a big part of many (most? all?) Enterprises’ infrastructures as well. We do live in the Cloud Era, don’t we?

This interconnectedness reminds me of our brain, and of our body by extension, not only because it clearly parallels the synapses, but also because it is subject to illness as well. The more I think about it, the more the dynamics of most current attacks show clear similarities with the propagation of a virus in an organic body: you start with a localized infection – a system or two are compromised – then it spreads to some adjacent systems and voilà! You have a serious illness that has gained control of the attacked body. This is very much how Advanced Persistent Threats work, and attacks like the infamous Pass-The-Hash. The idea behind those attacks is to gain access to the real prize one step at a time, without rushing, trying to consolidate your position within the attacked infrastructure before someone detects you.

The main difference between the organic body and many pieces of this Network of Applications is that the latter have not yet developed the antibodies needed to detect attacks, and are therefore even less able to vanquish them. This weakness allows compromising entire Enterprise Networks starting from a single Client and, as a consequence, gaining access to strategic resources like the Domain Controllers through a series of patient intermediate steps.

A single weakness allows the first step; the others let the castle collapse.

If we extend those concepts to the whole Internet as a Network of Applications, it is clear that nowadays attackers have plenty of choices about how to attack and gain control of a System, if necessary starting from a very distant vulnerable point. Target’s attackers started from a supplier, for example.

One of the principles of Security is that a system is only as secure as its “weakest link”. This sentence implies that the system can be represented as a chain, where data is processed linearly. But what if you have a multi-dimensional reality, where each node could potentially talk with any other one? You rapidly get a headache… and a big opportunity for any potential attacker.

In this scenario, there is only one feasible answer to the quest for Security: every actor must consider creating Secure Applications, and maintaining their security over time, as a personal responsibility toward its customers, toward its peers, toward the Internet as a whole and toward itself.

Is that thing Secure?

November 14, 2014 — Leave a comment

A colleague of mine has just asked me if WebView, the control that is shipped as part of the Windows 8.1 SDK, is Secure. His customer has expressed a doubt about it, probably due to serious issues with a similar component built on older technology (see: Microsoft Security Bulletin MS06-057 – Critical).

The interesting point here is not the specific issue: it is the concept of Security. A control like WebView builds upon a browser, Internet Explorer, to allow integrating web navigation within an application: this means that the application using the control inherits all the faults and issues in Internet Explorer, plus those in the control itself. On the other hand, it is part of Products that are maintained over time by a Corporation that takes Security very seriously (see: Life in The Digital Crosshairs), and it is a control used by many developers in many applications, so it will necessarily be more secure than anything the average Joe can cook up on his own.

So, is that thing Secure? I hate to say it, but… it depends. It depends on what you are trying to accomplish, on the characteristics of the data you are working with, on the abilities of your Team, on your budget and on many other factors.

The sad truth is that Security is a rogue concept: it allows no absolutes and it wears down quickly. In other words, you have to settle for “Secure enough” and invest continuously in fighting bugs to keep your Application’s Security at an acceptable level.

Do you need a server? Perhaps one hosted by an important Corporation? No problem, there is a service for that (no, not an App)… a service provided by Hackers.

Drupal Servers have very recently been compromised (see: Attackers Exploit Drupal Vulnerability) and sold to other malicious people. The hilarious part is that the attackers even patched the compromised systems – in their case, to protect them against further attacks – effectively doing a better job than the actual Administrators.

This is not news, I freely admit it: it has happened in the past and will happen again and again. Nevertheless, I find it quite hilarious, because attackers sometimes demonstrate great entrepreneurial spirit and technical ability, sometimes even better than their victims’. Like those hackers who offered a guarantee to replace a compromised server with another one, if the one assigned to you had been cleaned in the meantime or had any other problem (see: Service Sells Access to Fortune 500 Firms).

Many customers I have seen in the past are affected by a conflict between the Developers and the IT Department.

In most situations, this was caused by the IT Department’s need to impose restrictions on what the Developers could do, to simplify its management activities and to limit the consequences that an Application’s improper behavior could have on whatever else is hosted in the same environment. Developers hardly participate in defining those restrictions: they are mainly subject to them; sometimes this leads to big issues, because those restrictions get in the way of the Developers’ job without being really understood by them.

The net result of this situation is that Developers try to get as much control of their own Applications as possible, against the will of the IT Department. It is all too common in these scenarios for Developers to try and trick the IT Department, for example to gain direct access to the Application logs. What is more important than acting rapidly when a critical business process fails badly? When this happens, Developers simply cannot afford to ask the IT Department for every single piece of information needed to discover what happened, and therefore, if the IT Department does not allow direct access to the data and the systems, they consider opening doors to get to that data nonetheless. This is only one example of how Developers try to find a way to achieve their goal by exploiting holes in the IT Department’s guard, without thinking about the consequences. In a sense, this is like hacking your own system by adding back doors. Not a very smart thing to do, is it? Particularly if you consider that the bad guys could benefit from those back doors as well.

At the end of the day, each side has its own reasons and considers them more relevant than the other’s, but both do their Company a disservice when they do not team up from the very start of the project to define how the Application should behave to be a good citizen of the hosting environment while still satisfying all its requirements. It is far better to talk openly from the start than to start (or continue) a cold war that ultimately benefits only whoever attacks your systems and applications.