In this fourth article in the Threat Modeling vNext series, we continue our examination of the quality criteria for Threat Models by analyzing in more depth their ultimate goal: minimizing the Residual Risk.
By Residual Risk we mean the risk posed by a solution after every mitigation identified by the Threat Model and deemed necessary has been implemented in Production. It is important to note that this definition excludes the mitigations identified by the Threat Model but not deemed necessary: convincing your Stakeholders of the need for a mitigation is therefore an important part of the Threat Modeling experience. As anticipated in the previous article, this does not mean that you need to lie to convince your stakeholders to implement every identified mitigation; instead, you need to maintain trust by demonstrating integrity, so that your recommendations are valued and followed. A big part of this is adopting a balanced approach, steering away from bias as much as possible. (ISC)2 has a good code of ethics you may want to refer to, to achieve this goal (see https://www.isc2.org/Ethics).
Sometimes the problem may be related to what you are trying to optimize. In fact, it may be incorrect to try to minimize the Residual Risk without taking into account the cost of the mitigations: perhaps it would be better to talk about optimizing the cost of the Residual Risk in relation to the investment required for the mitigations. The picture below shows that it is typically possible to identify a Sweet Spot that represents the best balance between the cost of the potential losses and the cost of the mitigations.

To be more precise, we should consider that while the cost of the mitigations is known, the cost of the Risk is not; therefore we do not have a curve like in the previous image, but a range, like in the picture below. In this case the optimal value is clearly much blurrier, but it can still be determined with reasonable precision.

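To make the Sweet Spot idea concrete, here is a small sketch that searches for the mitigation investment minimizing the total cost (investment plus expected residual loss). All figures are invented for illustration, and the uncertainty about the risk is modeled, as a simplifying assumption, by two hypothetical decay rates bounding how quickly the loss shrinks as investment grows.

```python
# Hypothetical illustration of the "Sweet Spot": total cost as a
# function of mitigation investment, where the residual-loss curve is
# known only as a range. All numbers below are invented.

def residual_loss(investment: float, decay: float) -> float:
    """Expected loss shrinks as mitigation investment grows; 'decay'
    captures our uncertainty about how fast it shrinks (halving scale)."""
    return 1_000_000 * (0.5 ** (investment / decay))

def total_cost(investment: float, decay: float) -> float:
    """Cost of the mitigations plus the expected residual loss."""
    return investment + residual_loss(investment, decay)

def sweet_spot(decay: float) -> int:
    """Brute-force search for the investment minimizing total cost."""
    candidates = range(0, 1_000_001, 10_000)
    return min(candidates, key=lambda i: total_cost(i, decay))

# Optimistic and pessimistic decay assumptions bound the Sweet Spot,
# mirroring the range (rather than a single curve) in the picture above.
low, high = sweet_spot(50_000), sweet_spot(150_000)
print(f"Optimal investment lies roughly between {low:,} and {high:,}")
```

Even with this uncertainty, the two bounds still delimit a usable interval for the investment decision, which is the point made above.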
In any case, one thing must be clear: it is impossible to reduce the actual risk to zero. Strictly speaking, that is not quite true: it is possible to void a risk by removing the features causing it. The extreme consequence of this approach is that to maximize security you must avoid doing anything useful, which is obvious nonsense.
So, we have to accept some risk. Any solution implies some risk and, as we have discussed in previous articles, threat modeling provides a way to identify actions that can be taken to control that risk.
At this point, the questions we have to ask ourselves are: how can we determine whether the security risks posed by the solution are acceptable and, if not, which mitigations should we select among the possible alternatives to make them acceptable?
But wait, there is more: even selecting mitigations is not without risk. It is common knowledge that mitigations like antivirus software have been used in the past as entry points (see https://searchsecurity.techtarget.com/answer/DoubleAgent-malware-could-turn-antivirus-tools-into-attack-vector), and that detective controls have been targeted by malicious actors to retrieve information about the infrastructure and identify its weaknesses, by leveraging vulnerabilities found in the SIEMs themselves (see for example https://www.zdnet.com/article/fortinet-removes-ssh-and-database-backdoors-from-its-siem-product/), for which readily available scripts exist that can be executed even by unskilled attackers (see for example https://www.exploit-db.com/exploits/45005).
The bottom line is that picking the right mitigations may be tricky. At the same time, this is how a threat model makes an impact: without selecting and implementing mitigations, the value of the Threat Model would be limited to creating awareness about the risks represented – or, perhaps more precisely, caused – by the solution to the Organization.
Of course, you may have a solution that is intrinsically so secure, or that deals with information so unimportant, that the Threat Model cannot identify any major risk for it; but that is a rare occurrence, and one I have yet to see.
Based on these considerations, we can introduce some concepts. First of all, the solution you are analyzing with your Threat Model has a number of security risks, which we typically call Threat Events. With that term we refer to attacks by a malicious actor which may cause a loss to the Organization (typically identified as ‘direct risks’) or to external entities like its customers, in which case the Organization itself may face a loss because it is required to compensate those external entities for the damage suffered (and in this case we talk about ‘indirect risks’).
Not all threats are equally important, and the same threat may have different importance for two solutions with different characteristics. An example I often use involves software controlling a nuclear plant: if we focus on the feeds coming from the plant’s sensors, for example temperature readings, it is easy to see how an attacker who changes the content of those feeds may cause more damage than someone who simply reads them. On the other hand, applications dealing with secret data have a different profile, and as a result their authors may be more concerned about undue disclosure than about the risk of tampering. The bottom line is that you need to calibrate your evaluation on the organization’s actual concerns for that solution.
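The calibration idea above can be sketched in code: rank the same Threat Events under different weightings of the organization's concerns. The threat names, impact scores, and weights below are all invented for illustration; in practice they would come from stakeholder input.

```python
# A minimal sketch of calibrating threat importance to the solution's
# actual concerns. All scores and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class ThreatEvent:
    name: str
    impact: dict  # raw impact per security property, on a 0-10 scale

def prioritize(threats, weights):
    """Rank threats by impact weighted by the organization's concerns."""
    def score(t):
        return sum(weights.get(prop, 0) * val for prop, val in t.impact.items())
    return sorted(threats, key=score, reverse=True)

threats = [
    ThreatEvent("Tamper with sensor feed", {"integrity": 9, "confidentiality": 2}),
    ThreatEvent("Read sensor feed", {"integrity": 0, "confidentiality": 6}),
]

# A plant-control solution cares far more about integrity...
plant = prioritize(threats, {"integrity": 1.0, "confidentiality": 0.2})
# ...while a secrets-handling solution weighs confidentiality heavily.
secrets = prioritize(threats, {"integrity": 0.2, "confidentiality": 1.0})

print(plant[0].name)    # tampering tops the plant ranking
print(secrets[0].name)  # disclosure tops the secrets ranking
```

The same two Threat Events swap places in the ranking purely because of the weights, which is exactly the point: importance is a property of the threat *and* the solution, not of the threat alone.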
There are a couple of approaches for doing that. A first one is discussed in the wonderful book “How to Measure Anything in Cybersecurity Risk” by Douglas W. Hubbard and Richard Seiersen, and is based on quantifying the risk using a quantitative risk estimation methodology like FAIR, and then on calibration sessions with the stakeholders to understand the amount of acceptable loss.
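To give a flavor of what such quantification looks like, here is a toy Monte Carlo simulation in the spirit of quantitative risk analysis: sample how often a loss event occurs per year and how much each event costs, and derive an annualized loss distribution. The ranges below are invented; in a real exercise they would come from calibration sessions with stakeholders, and a methodology like FAIR would decompose them much further.

```python
# Toy Monte Carlo sketch of quantitative risk estimation.
# All ranges are hypothetical placeholders for calibrated estimates.
import random
import statistics

def simulate_annual_loss(freq_low, freq_high, loss_low, loss_high, trials=10_000):
    """Simulate annual losses from ranges for event frequency and
    per-event loss magnitude (triangular draws for simplicity)."""
    results = []
    for _ in range(trials):
        events = round(random.triangular(freq_low, freq_high))
        annual = sum(random.triangular(loss_low, loss_high) for _ in range(events))
        results.append(annual)
    return results

random.seed(1)  # reproducible illustration
losses = simulate_annual_loss(0, 4, 50_000, 400_000)
print(f"median annual loss ≈ {statistics.median(losses):,.0f}")
print(f"95th percentile    ≈ {statistics.quantiles(losses, n=20)[-1]:,.0f}")
```

Outputs like the median and the 95th percentile can then be compared against the amount of loss the stakeholders declared acceptable, turning the prioritization discussion into a comparison of numbers rather than opinions.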
While this approach is great, and something I can definitely see as the future of Threat Modeling, I think we are not yet ready to adopt Quantitative Risk Analysis as the main tool for evaluating risk and prioritizing mitigations in Threat Models. Next week we will continue the discussion by introducing an alternative approach, which is sub-optimal but still provides enough value to be a sensible choice when you cannot invest much time and effort in prioritization, or when you simply do not have the skills required to adopt more rigorous approaches.