Now the reason why we do risk analysis is to be able to quantify in some way the impact of an individual potential threat, or threat agent. And there are two types of risk analysis. There is quantitative, which basically uses a mathematical model where you can assign some numerical or financial value to the risk. And then there's qualitative, which uses a scenario-based model that is often relative – on a relative scale of four or five different levels of risk. The benefit of doing risk analysis is that it gives us a clear cost-to-value ratio for our security mechanisms and the implementation of our administrative, technical, and physical controls. It justifies hardware and software costs to our department and to our superiors – to the CFO, for example, or to the stakeholders. It helps us focus our costs where they can get the most out of them, where they can do the most good. It'll also help us make security decisions in the future as well.
Quantitative risk analysis
Okay, let's look at some quantitative analysis or quantitative risk analysis, where we have mathematical formulas involved. Again this is quantitative, so it's based on some math, some calculations. And we're going to look at some different variables here.
So we want to calculate the Asset Value (which is AV), the Exposure Factor (which is EF). So we're going to multiply those, take the product of those. That's going to be our SLE – our Single Loss Expectancy. Then we have the ARO; we're going to multiply the SLE by the Annualized Rate of Occurrence, and that will give us our ALE – the Annualized Loss Expectancy.
SLE = AV * EF
ALE = SLE * ARO = (AV * EF) * ARO
So AV is the Asset Value; it's the purchase price – how much it cost to deploy and how much it costs to maintain that particular asset. It could be a web server, it could be a database server. And this will include the cost of development as well, so it could be hardware plus software. It's not a very easy number to calculate, but quantitative is actually easier than qualitative, because at least you can go to the comptroller or the accounting department and get some real numbers, even though you're going to estimate the degree of loss or the degree of depreciation. So the risk assessment team has to make a determination that evaluates every available threat scenario, and they make a judgment call. If you had a flood with a 60% destruction factor, then you want to add to that some amount of depreciation. You could also consider data entry errors, so the EF of a data entry error may be something like 0.001%. Now the SLE is the expected loss from a single occurrence of the threat, and it's defined as AV times EF – Asset Value times Exposure Factor.
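The calculations above can be sketched in a few lines of Python. The 60% flood destruction factor comes from the text; the asset value and the annualized rate of occurrence are hypothetical figures for illustration only:

```python
# Quantitative risk analysis: SLE = AV * EF, ALE = SLE * ARO.
# All input values below are hypothetical illustrations.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """Expected loss from a single occurrence of the threat: AV * EF."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """Expected loss per year: SLE * Annualized Rate of Occurrence."""
    return sle * aro

# Example: a web server valued at $200,000 (purchase + deployment + maintenance),
# threatened by a flood with a 60% destruction factor that is expected
# roughly once every 20 years (ARO = 1/20 = 0.05).
av = 200_000.0
ef = 0.60
aro = 0.05

sle = single_loss_expectancy(av, ef)        # $120,000 per flood event
ale = annualized_loss_expectancy(sle, aro)  # $6,000 per year

print(f"SLE = ${sle:,.2f}, ALE = ${ale:,.2f}")
```

An ALE of $6,000 per year would then be the ceiling for what you'd reasonably spend annually on countermeasures against that specific threat.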
Risk components and scores
Now the definition of risk can vary depending upon your business sector – whether you're in the public, private, military, governmental, or corporate space. But for our sakes here, we define our risk as the organizational impact of threat vectors exploiting vulnerabilities in the assets that you're protecting. So in the early phases of the lifecycle we're doing information gathering; we want to know the assets and their values as close as we can – discrete numerical values for the EF, based on the impact of losing that asset and how much it will cost to replace it. You can also base these on data classification methods, like confidential, secret, top secret, and so on, where top secret has a higher value than confidential. That's more of a qualitative analysis though. Take a look at the following table:
|Low Value||Limited effect||Limited effect||Limited effect|
|Moderate Value||Serious effect||Serious effect||Serious effect|
|High Value||Severe effect||Severe effect||Severe effect|
You also want to include the impact on your organization, like replacement cost, liability, and issues like that. There are also vulnerabilities to consider. There are several scanners and commercial tools out there, like Nessus, that you can use. There are databases – like the CVE and the NVD – and organizations – like NIST – that can help you stay on top of the most recent vulnerabilities out there.
And know about threats and their impact, or the probability of occurrence based on your organization. So for example, if you're using Apache web servers, then the probability of occurrence of an IIS attack is going to be a lot less. You can get this information from readily available databases, like the MITRE Common Attack Pattern Enumeration and Classification (CAPEC).
|Definition||Inability to achieve minimum requirements||Major cost and schedule increases||Moderate cost and schedule increases||Small cost and schedule increases||No effect|
Once you've gone through the process of classifying your assets, your vulnerabilities, your threat agents, you're going to get some risk scores by applying formulas of quantitative risk analysis.
The goal here is to calculate a risk matrix, which has risk scores for assets and groups of assets – and in a best-case scenario, an organization-wide risk score that can be used in your security monitoring process, in your incident response, your written security policy, and policy reviews. These risk scores will offer up an idea of the overall scope of assets, threats, vulnerabilities, and countermeasures, which are all the components of risk at a particular point in time – maybe on a monthly basis, maybe on a quarterly basis, maybe on a semi-annual basis.
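As a sketch of what rolling up those scores might look like, the snippet below computes a per-asset ALE for each threat and sums everything into an organization-wide risk score. The asset names, values, exposure factors, and rates are all invented illustrations, not figures from the text:

```python
# Hypothetical risk matrix: per-asset ALE scores rolled up into an
# organization-wide score. Every figure below is an invented example.
# Each threat maps to a tuple of (Exposure Factor, Annualized Rate of Occurrence).

assets = {
    "web server": {
        "av": 200_000.0,
        "threats": {"flood": (0.60, 0.05), "defacement": (0.10, 2.0)},
    },
    "database server": {
        "av": 500_000.0,
        "threats": {"data entry error": (0.00001, 300.0)},
    },
}

risk_matrix = {}
for name, asset in assets.items():
    scores = {}
    for threat, (ef, aro) in asset["threats"].items():
        sle = asset["av"] * ef       # Single Loss Expectancy for this threat
        scores[threat] = sle * aro   # Annualized Loss Expectancy (risk score)
    risk_matrix[name] = scores

org_score = sum(ale for scores in risk_matrix.values() for ale in scores.values())

for name, scores in risk_matrix.items():
    print(name, scores)
print(f"Organization risk score: ${org_score:,.2f}/year")
```

Recomputing this matrix on whatever review cadence you choose (monthly, quarterly, semi-annually) gives you the point-in-time snapshot described above.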
Life-cycle approach to risk management
Okay, managing risk is very complex, so one thing that can help you is a lifecycle approach, and here is one from NIST, published in 2011.
Notice that at the center of this diagram we have the frame, or framing – that's our scope. We're establishing the context for our risk-based decisions: assets, vulnerabilities, threats – are you under compliance or governance, what is the overall business governance landscape. So you want to frame it. Then you want to make sure that this process involves all the stakeholders – senior leaders and executives, mid-level leaders who plan, execute, and manage projects, and of course the individuals who are going to support this initiative. Once you frame the risk for all these stakeholders, you need to assess the risk. You'll respond to risk once you've determined that it exists, and then you'll monitor it on an ongoing basis using organizational communications and a feedback loop, along with possibly workflow processes and other documentation, to provide continuous improvement. That's always the goal here, CI (Continuous Improvement) – whether it's an ITIL implementation or a risk management lifecycle approach, we're always shooting for continual improvement. And risk management is going to be performed as a holistic, organization-wide activity that goes from the strategic level all the way down to the tactical level. This'll make sure that risk-based decision-making is integrated into every functional component of our organization – either public or private.
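The frame/assess/respond/monitor cycle above can be sketched as a simple feedback loop. The four phase names come from the text; the data structures, thresholds, and control flow are purely illustrative assumptions, not anything NIST specifies:

```python
# Illustrative sketch of the frame -> assess -> respond -> monitor cycle.
# Phase names follow the NIST lifecycle described above; everything else
# (the context dict, the $1,000 threshold, the risk figures) is invented.

def frame(context):
    # Establish scope: what we protect and what constrains us.
    context["scope"] = ["assets", "vulnerabilities", "threats", "compliance"]
    return context

def assess(context):
    # Identify risks and score them (e.g. as ALE figures).
    context["risks"] = [{"name": "flood", "score": 6_000.0}]
    return context

def respond(context):
    # Choose a treatment for each risk: mitigate, transfer, accept, or avoid.
    for risk in context["risks"]:
        risk["treatment"] = "mitigate" if risk["score"] > 1_000.0 else "accept"
    return context

def monitor(context):
    # Feedback loop: monitoring results feed the next assessment cycle.
    context["reviews"] = context.get("reviews", 0) + 1
    return context

context = frame({})
for _ in range(2):  # e.g. two quarterly review cycles
    context = monitor(respond(assess(context)))

print(context["reviews"])  # 2
```

The point of the sketch is the shape, not the numbers: framing happens once up front, while assess, respond, and monitor repeat continuously as part of the improvement loop.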
There are definitely some trends out there in regulatory compliance. For the last couple of decades, a lot of jurisdictions worldwide have been pretty weak in enforcing their existing regulations. But in the United States, for example, we're seeing kind of a carrot-and-stick approach, where we're offering some incentives – in healthcare privacy and security, for example. But we're also seeing tightening enforcement, expanded powers for the US government and the executive branch, higher penalties, and harsher enforcement actions.
Obviously because of technology, it'll be much more difficult to hide information security failings in the future. There will be more and more regulations, so there is a trend towards increasingly prescriptive rules – in other words, going beyond the state to the federal level. Regulators are also making it clear that organizations are responsible for ensuring the protection of their data when it's being processed by a vendor, a partner, and even cloud service providers. This could help focus management's attention more on security, but we need to move away from just a checklist approach. We need to look at our ability to manage risk and improve security, not just check off a checklist. The new compliance landscape will increase costs; it will also increase risks – especially for service providers – and introduce more third-party risk. With more transparency, there are now greater consequences for data breaches. Coming down the pike is probably more litigation, as customers and business partners seek compensation when their data is compromised, specifically in the cloud.
Here is a nice table that shows a lot of the compliance regulations. You can see the regulation on the left, including some of the ones that may pop up: Sarbanes-Oxley, which is for publicly traded companies in the US; PCI DSS, which is global credit card processing; and HIPAA, which covers US healthcare providers and their partners.
|Regulation||Geographic Scope||Applies To:|
|EU Directive 95/46/EC||European Union||All organisations operating in the 27 member countries|
|Sarbanes-Oxley||US||All publicly traded companies in the US (exemption for smaller reporting companies)|
|PIPEDA||Canada||All organisations in Canada|
|PCI DSS||Global||All organisations processing credit card data|
|HIPAA||US||All healthcare organisations in the US|
|FISMA||US||Federal agencies and service organisations|
|Basel II||Global||All internationally-active banks with assets of US$250 billion or more|
|DMCA||US||Individuals and organisations|
|NERC||North America||Users, owners, and operators of the bulk electric power system|
|GLBA||US||All financial institutions in the US|
|Safe Harbor Act||European Union||US companies doing business in the E.U.|
It's globalization, it's happening, and everything is coming under regulatory compliance. And regulators are not just looking for ways to strengthen the existing laws; they want to introduce new laws that force more transparency and affect global organizations. Basically, the landmark legislation was SB-1386 in California, which kind of started this whole thing back in 2003.