Archives For Secure SDLC

The big surprise I hinted at toward the end of my Restarting article is out!

It is a new tool that complements the workflow of Microsoft Threat Modeling Tool 2016 by providing features specifically designed to optimize the Mitigation experience.

The efficiency gains can be really huge, depending on the complexity of the model (the more complex, the better!), on the template and on the maturity of the organization: an estimate made with the standard template suggests the possibility to optimize by 60% or more!

I have done everything I could to provide you with the best possible solution, given my limited resources: this is a project I have developed in my spare time. So, please, any constructive feedback would be much appreciated.

The details have been collected on a dedicated page, called The Threats Manager Tool, which can also be accessed from the menu at the top of my Blog site.

And the best thing is… that it is entirely free!

Enjoy!

Security is really a matter of Passion: the topic is so huge and evolving so fast that you have to commit yourself fully to it if you want to do it properly. So, it is very important to build a solid foundation on which to grow and expand your knowledge.

Security is also a Community thing: you have to be fully connected if you want to stay up to date and to build the trust around you that you need to do your job. And one of the most important things, if you want to be part of a Community, is to know its “Lingo”.

This is precisely what (ISC)2 is and what it does. First and foremost, it is a Community of Security Professionals that gathers common knowledge around the main Security topics. It defines some of the most recognized Security Certifications, like CISSP and SSCP, and collaborates with many Security Organizations to provide continuous training to its members.

One of those certifications is particularly relevant for Software Development: CSSLP. I have studied it and I am in the process of obtaining this certification, so I have developed some strong opinions about it and about the various tools available to achieve it.

CSSLP is currently in its second incarnation, and it is composed of 8 Domains, as described on the (ISC)2 site.

The first incarnation lacked the last Domain.

All in all, this represents a fully holistic approach to Software Development, based on proven concepts and tools (many from Microsoft’s own SDL!), and it provides a very good overview of the main topics to be considered by Architects and Software Developers. There are also some key concepts that I have seen exposed so clearly here for the first time, like the reason why you have to keep in mind that your software behaves like cheese: after some time it stinks and you have to replace it! In other words, you have to plan for its retirement even before it is released! The 7th Domain discusses specifically this concept.

So, it is really key to study for CSSLP even if you are not planning to certify, because it gives you some important tools for understanding what needs to be done.

Speaking of which, the next question is: how do you study for CSSLP?

In my quest for the certification I have come across some tools, which I am going to discuss briefly here: I will probably expand on some of them in the near future.

First of all, there is the Official (ISC)2 Guide to the CSSLP CBK, Second Edition: this is the official book from (ISC)2 and it is a good starting point. I would say that the quality is average: I have found some inaccuracies, and some parts are oversimplified.

A better reference could be the CSSLP Certification All-in-One Exam Guide. This is an unofficial book covering the original 7 Domains. I have read most of it and its content is really good, but its lack of coverage of the last Domain is a pity. I would recommend buying it as your first book if you do not want to certify for CSSLP; otherwise, it would be a good complement to the official book.

There are also additional tools, for greater budgets: the first one I would get, if you can afford it, would be the Security Compass CSSLP Training. This is a comprehensive course on every Domain of CSSLP, in CBT form: it is very convenient and its length feels right, at around 10 hours. I have completed it and I can say that it contains good material, well explained and fully understandable; now and then, there are some simple exercises to test your knowledge. Even if the course is definitely mature, there are some glitches, but they are fixed regularly and the support is fast in helping if there is any need. Even though it is full of good content, the Security Compass training cannot be considered a complete solution for attempting the certification. First of all, the exercises are not nearly enough to get a feeling for the certification exam: it would be great if Security Compass supplemented the course with some sample questions simulating the actual certification. Secondly, I would have liked the ability to download the course material to consume it offline: this is not possible.

Speaking of test simulation, fortunately there are a couple of tools provided by (ISC)2 that offer some realistic questions.

The first one is a good solution and provides up to 300 real questions – not questions taken from the actual exam, but questions that have been used in the past or that are really similar to the actual ones – but it comes with a cost. The iOS App is much cheaper but provides a very limited set of questions.

Last but not least, you could use some Training, in class or online (a recent addition to the (ISC)2 offering), but this comes with greater costs and takes some toll on your schedule.

Concluding this roundup, I can definitely say that (ISC)2 certifications are a really good opportunity to enter the Security Community through the front door, to gain credibility, and to acquire some very good tools and reasons to keep yourself up to date and committed to Security.

Many customers I have worked with in the past are affected by a conflict between the Developers and the IT Department.

In most situations, this was caused by the need for the IT Department to impose restrictions on what the Developers could do, to simplify its management activities and to limit the consequences that improper behavior of the Applications could have on what is already hosted in the same environment. Developers hardly participate in defining those restrictions: they are mainly subject to them; sometimes this leads to big issues, because those restrictions get in the way of the Developers’ job without being really understood by them.

The net result of this situation is that Developers try to get as much control as possible over their own Applications, against any will of the IT Department. It’s all too common in these scenarios for Developers to try and trick the IT Department, for example to gain direct access to the Application logs. What is more important than acting rapidly when a critical business process fails badly? When this happens, Developers simply cannot afford to ask the IT Department for each single piece of information needed to discover what happened; therefore, if the IT Department does not allow direct access to the data and the systems, they consider opening doors to get to that data nonetheless. This is only an example of how Developers try to find a way to achieve their goal by exploiting the holes in the IT Department’s guard, without thinking about the consequences. In a sense, this is like hacking your own system by adding back doors. Not a very smart thing to do, is it? Particularly if you consider that the bad guys could benefit from those back doors as well.

At the end of the day, each side has its own reasons and considers them more relevant than the other’s, but both do their Company a disservice when they do not team up from the very start of the project to define how the Application should behave to be a good citizen of the hosting environment, while still being able to satisfy all its requirements. It is far better to talk openly from the start, instead of starting (or continuing) a cold war that ultimately benefits only those who attack your systems and applications.

“I made it! A wonderful Project finished on time and on budget! And I even did it by the book, closely following the dictates of SDL and asking for validation from a panel composed of the best-known Security Experts. It will be unbreakable!”

Who wouldn’t like to say those words? Well, I would for sure.

But wait, those words carry a dangerous seed: the feeling of unbreakability. It could be true, for a while, but it definitely becomes less and less so with the passing of time. This is a hateful truth, disliked by most stakeholders, but a truth nonetheless.

If you follow every best practice strictly, you can minimize the quantity and hopefully the severity of the vulnerabilities in your solution, but you will not be able to eradicate every one of them. In fact, even if you perform a 100% perfect job, removing all the vulnerabilities in your code – and this is about as close to impossible as it gets – you will still have to rely on third-party components or systems, like the O.S. you are using to run your solution, and they will have vulnerabilities as well.

So, you can hope to deter most attackers, but not the skilled and highly funded hackers that are so much in the news nowadays. Your attention to security will delay them from accessing your data, though, hopefully for a long time.

So, what could you do with that knowledge? You could decide to accept that someone will get past your protections; you could decide that your data is not worth all the effort and bow to the might of the attackers, giving up any defense (knowing that they will publicly laugh at you and thank you for your shortsightedness); or you could accept that you are human and plan for the violations. Even if it may not be so apparent, all three options are equally valuable and acceptable, depending on the circumstances, because they are thoughtful answers to an important question. Personally, the only position I do not consider wise is the one taken without any musing, by instinct or by simple inaction.

By the way, it’s only natural that over time those attackers will find some crack in your most perfect protection, no matter what you do: they have all the resources in the world to do that, while you had only a limited amount, to be shared among the many tasks needed to complete your Project. For the sake of discussion, let’s consider the third option, the one where a plan was concocted for handling failures and attacks. It would contain a description of how to detect an attack, how to respond to it in the urgency of the moment, and finally what to do to ensure that the same or similar vulnerabilities will not be leveraged to violate your solution again. Planning for failure is also important when the Project is in the Design phase: this is because you would want to design your solution to be flexible enough, for example by applying concepts like Cryptographic Agility, which allows changing your algorithms and keys periodically and with little effort, as the sketch below illustrates.
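To make the idea of Cryptographic Agility a little more concrete, here is a minimal sketch in Python. It is entirely illustrative: the configuration names, key identifiers and values are hypothetical, and it is not taken from any of the tools mentioned in this post. The point is simply that the algorithm and the signing key are looked up from configuration, so both can be rotated with little effort and without touching the code that produces or verifies the integrity checks.

import hashlib
import hmac

# Hypothetical configuration (illustrative names and values): in a real solution
# this would come from a protected settings store, not from source code.
CRYPTO_CONFIG = {
    "mac_algorithm": "sha256",     # could be switched to "sha512", "sha3_256", ...
    "active_key_id": "2014-10",
    "keys": {
        "2014-05": b"old-secret-key",
        "2014-10": b"current-secret-key",
    },
}

def sign(message, config=CRYPTO_CONFIG):
    """Return (key id, hex MAC) computed with the currently configured algorithm and key."""
    key_id = config["active_key_id"]
    digest_ctor = getattr(hashlib, config["mac_algorithm"])
    mac = hmac.new(config["keys"][key_id], message, digest_ctor).hexdigest()
    return key_id, mac

def verify(message, key_id, mac, config=CRYPTO_CONFIG):
    """Recompute the MAC with the key identified by key_id and compare in constant time."""
    digest_ctor = getattr(hashlib, config["mac_algorithm"])
    expected = hmac.new(config["keys"][key_id], message, digest_ctor).hexdigest()
    return hmac.compare_digest(expected, mac)

if __name__ == "__main__":
    key_id, mac = sign(b"critical business record")
    print(verify(b"critical business record", key_id, mac))   # prints: True

In a fuller design you would also record, next to each key identifier, the algorithm it was used with, so that data protected before a rotation can still be verified during the migration period.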

You will also want to re-assess your application periodically, in light of the most recent types of attacks in the news. At some point the application will reach its natural end of life: security-wise, that is when the risk of falling to an attack becomes too high. As said initially, it is only normal that the security of an application wears out over time, as an effect of the accumulated attacks: the more attacks are performed, successful or not, the better the application is known by the attackers and the greater the probability that someone finds an important vulnerability. Again, the wise thing is to plan for that: not planning for the replacement and retirement of an application is like accepting that your solution will be taken over in due time. Right now, there are many organizations that are considering or performing the migration of hundreds of their old applications from operating systems that are out of support or about to go out of support, like Windows XP and Windows Server 2003. Obsolescence also hits Line-of-Business Applications and should be taken into account: re-hosting them might not be the best thing to do, security-wise. In fact, it does not matter whether those applications are exposed to the Internet, nor how sensitive the data they handle is, because even intranet-only applications can be used as an intermediate step toward richer bounties.

So, when is a Project really finished? Well, when it is retired!

This is probably the biggest question nowadays: should I jump on the latest technology hype or wait a little bit longer? A very common question, and probably a difficult one to answer.

Let’s face it: it’s scary out there. When you surrender your own and your Customers’ data to a third party, typically to be hosted in a different Country, it’s only natural to wonder whether it is trustworthy or not. Even worse, the danger could come from unexpected sources: you have to fear not only the Administrators appointed by the Cloud Provider to manage your data – a common enough nightmare – but also other Customers like you, using the same services. For example, very recently Amazon and Rackspace were compelled to restart a number of their systems to patch a vulnerability in the Hypervisor technology they have embraced, Xen. The vulnerability would have allowed an application running in a guest Virtual Machine to crash the host or even to read its memory: this would have meant reading the memory of any of the other guests running on the same Server (see: Xen hypervisor found wanting in security).

So, what to do with that? Fear alone does not seem to keep people from publishing data on the Internet. On the contrary, the number of people freely sharing their own data is growing by the day: Facebook, Twitter and LinkedIn come to mind as clear examples of this trend. Awareness of the risks involved in sharing data is also increasing: frauds big and small, phishing and also spying are everyone’s concern. Who doesn’t know about Heartbleed? Bing shows 32400 results right now: that’s quite a common term, considering that it was discovered only in April 2014. Shellshock is even more impressive: in less than one month, it has accumulated references by the millions! Most assuredly, not every reference refers to the actual bug, but those are impressive numbers nevertheless.

But wait, is any of this actually news? Is it unheard of for code to have bugs? Surely not. The first time an organization published its very first page on a network was also the first time it opened a door for remote attacks. For sure, money attracts the attention of malevolent people, and this is even truer for the Cloud, because it can be at the same time a tool to perform misdeeds and a huge treasure chest, ripe for the picking. But this is also true when you publish your application on the Internet or when you give your data to an Outsourcer.

So, the issue is not the Cloud. Microsoft Azure, Amazon AWS and their cousins are only the most visible targets. Someone could say that they pose an additional risk, because they are so much in the news, but it is arguable whether you would be any safer by not using them. The fact is that there are many reasons why any organization could be a target of someone else: hackers seeking a gain by harassing you or your customers – you could be only a step in a greater attack – national agencies (the NSA comes to mind) or even disgruntled employees. The sad truth is that most organizations are targets of attacks and that only some are aware of it, because most do not have the right tools to understand the risk and identify attacks in a timely manner. For example, a customer of mine some time ago accidentally discovered a violation of its On-Premises Data Center, because one of the servers restarted without any apparent reason: the hackers had long been maintaining the compromised servers, installing software at will. This is not an isolated case: in the literature you can find similar incidents by the thousands, and the list grows by the day. Some of the most recent and famous violations are related to names like Target and Signature Systems.

So, the Cloud is not the issue, but the Cloud can be part of the answer. It is common knowledge that the security of a system is determined by its least secure part. Cloud Providers make a point of managing their systems by the book; therefore, they are (or should be) able to provide the most secure infrastructure (see: Microsoft Datacenter Tour (long version)). They are continuously the target of attacks, but this keeps them vigilant and able to react promptly. They also strive to improve their security, trying to stay a step ahead of the bad guys. Surely, they are the target of more attacks than anyone else, considering that they are attacked not only as infrastructure providers but also because they host their customers’ data and applications. Nevertheless, this should not necessarily be considered a downside of the Cloud Providers, because it requires them to maintain top-notch security over time. Can we, simple mortals, hope to achieve that level of security in our own Data Centers without investing a huge amount of resources?

But securing the infrastructure is hardly enough. With all the investments made in the past, and continuing today, in securing Operating Systems and Off-the-Shelf Applications, attacks are focusing on custom Line-of-Business Applications. For example, with the adoption of a Security-oriented SDLC, Microsoft’s own Security Development Lifecycle (SDL), the vulnerabilities discovered in the 3 years after RTM dropped by a sound 91% between two adjoining versions of SQL Server (see: Benefits of the SDL). Surely, ensuring that our applications are secure is not something achieved without a cost, but it is something we should consider as due in every project. Every Business Critical application should be developed with steps to ensure that it is Secure and that the data it manages is safe. In my experience, it is all too common for Security to be taken as a given: something you want to be there but are not willing to pay for, and as a result you do not get. The typical behavior is to handle incidents after the fact, when unimaginable damage has already been done and rushed damage recovery actions have to be performed.

Building your solution on the Cloud is like basing your next construction on strong foundations made by the best experts in the field. If you adopt sloppy methodologies on your part, the house will inevitably collapse under its own weight and the inclemency of the weather; but if you use sound methodologies like SDL, you will build a construction that is strong and safe from its foundations to the roof.

And naturally, you will want to maintain it to ensure its safety over time… but this is a topic for another post.