It Is Too Hard Or Impossible…

July 15, 2014

Admitting that you don’t know how to make the sausage will always cast doubt on the quality of the sausage you do produce.

One of my personal risk management aggravations relates to risk management professionals who claim it is too hard or impossible to quantify the frequency or severity of loss. First, there is the irony that we operate in a problem space of uncertainty and then make absolute statements that something cannot be done. When I witness this type of utterance, I will often challenge the person on the spot – keeping the audience in mind – in an effort to pull that person back from the edge of mental failure. And make no mistake, I struggle with quantification as well – but to what degree I share that with stakeholders or peers is an aspect of professional perception that I intentionally manage. Reflecting on my own experiences and interactions with others, I want to share some quick litmus tests I use when addressing the “it is too hard or impossible” challenges.

1. Problem Scoping. Have I scoped the problem or challenge too broadly? Sometimes we take these super-big, gnarly problem spaces and become fascinated with them without trying to deconstruct the problem into more manageable chunks. Often, once you begin reducing your scope, the variables that drive frequency or severity will emerge.

2. Subject Matter Experts. This is one litmus test that I have to attribute to Jack Jones and the FAIR methodology. Often, we are not the best person to be making subject matter estimates for the variables that factor into the overall risk. The closer you get to the true experts and extract their knowledge for your analysis, the more robust and meaningful your analysis will become. In addition, leveraging subject matter experts fosters collaboration and in some cases innovation where leaders of unrelated value chains realize there is opportunity to reduce risk across one or more chains.

3. Extremes and Calibration. Once again, I have Jack Jones to thank for this litmus test, and Doug Hubbard as well. Recently, a co-worker declared something impossible to measure (a workforce-related increase in expense). After his “too hard” declaration, I simply asked: “Will it cost us more than $1BN?” The question stunned my co-worker, which resulted in an “Of course not!” – to which I replied, “It looks like it is greater than zero and less than $1BN; we are making progress!” Here is the point: we can tease out extremes and leverage calibration techniques to narrow down our uncertainty and define variable ranges versus anchoring on a single, discrete value.
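As a concrete illustration of where calibration can take you, here is a minimal R sketch – the dollar figures are assumed for illustration, not taken from the anecdote above – that turns a calibrated 90% confidence interval into a lognormal distribution you can simulate from:

# Minimal sketch with assumed values: convert a calibrated 90% interval
# (5th and 95th percentile estimates) into a lognormal for simulation.
lower <- 50e3                                  # assumed 5th percentile estimate
upper <- 500e3                                 # assumed 95th percentile estimate
mu    <- (log(lower) + log(upper)) / 2
sigma <- (log(upper) - log(lower)) / (2 * qnorm(0.95))
cost  <- rlnorm(10000, meanlog = mu, sdlog = sigma)
quantile(cost, c(0.05, 0.50, 0.95))            # should roughly recover the stated interval

The point is not the particular distribution – it is that even a very wide, honestly calibrated range is something we can work with, while “it is impossible to measure” is not.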

4. Am I Trying Hard Enough? This is a no-brainer, but unfortunately I feel too many of us do not try hard enough. A simple phone call, email or even a well-crafted Google query can quickly provide useful information that in turn reduces our uncertainty.

These are just a few “litmus tests” you can use to evaluate whether an estimation or scenario is too hard to quantify. But here is the deal: as risk professionals, it is expected that we deal with tough things so our decision makers don’t have to.


Knowing Your Exposure Limitations

April 1, 2013

For those familiar with risk analysis – specifically the measurement [or estimate] of frequency and severity of loss – there are many scenarios where the severity of loss or resultant expected loss can have a long tail. In FAIR terms, we may have a scenario where the minimum loss is $1K, the most likely loss is $10K and the maximum loss is $2M. In this type of scenario – using Monte Carlo simulation and depending on the kurtosis of the estimated distribution – it is very easy to derive a severity distribution or aggregate distribution (one that takes into account both frequency and severity) whose descriptive statistics do not accurately reflect the exposure the organization would actually face should the adverse event occur. While understanding the full range of severity or overall expected loss may be useful, a prudent risk practitioner should understand and account for the details of the organization’s business insurance policies to better understand when insurance controls will be invoked to limit financial loss for significant adverse events.

Using the example values above, an organization may be willing to pay out of pocket for all adverse events – similar to the scenario above – up to $1M and then rely upon insurance to cover the rest. This in turn changes the maximum amount of loss the company is directly exposed to (per event) from $2M to $1M. In addition, this understanding could be a significant information point for decision makers as they ponder how to treat an issue. Given this information, consider the following questions (a brief simulation sketch follows the list):

1. How familiar are you with your organization’s business or corporate insurance program?

2. Does your business insurance program cover the exposures you are responsible or accountable for managing?

3. Are your risk analysis models flexible enough to incorporate limits of loss relative to potential loss?

4. When you talk with decision makers, are you even referencing the existence of business insurance policies or other risk financing / transfer controls that limit your organization’s exposure when significant adverse events occur?
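On question 3, here is a minimal R sketch of what incorporating a limit of loss can look like, using the example values from the scenario above. The triangular distribution and the $1M retention are assumptions for illustration only, not a statement of how any particular tool or policy models severity:

# Minimal sketch with assumed values: simulate per-event severity from a
# triangular distribution (min $1K, most likely $10K, max $2M), then cap
# the retained loss at an assumed $1M insurance attachment point.
rtri <- function(n, a, m, b) {                 # simple triangular sampler
  u <- runif(n)
  f <- (m - a) / (b - a)
  ifelse(u < f,
         a + sqrt(u * (b - a) * (m - a)),
         b - sqrt((1 - u) * (b - a) * (b - m)))
}
severity <- rtri(10000, 1e3, 1e4, 2e6)
retained <- pmin(severity, 1e6)                # insurance absorbs loss above $1M
quantile(retained, c(0.50, 0.90, 0.99))        # per-event retained exposure

Comparing the quantiles of severity and retained makes the effect of the insurance limit on the tail immediately visible to decision makers.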

The more we can leverage other risk-related controls in the organization and paint a more accurate picture of exposure, the more we become a trusted advisor to our decision makers and other stakeholders in the larger risk management life-cycle.

Want to learn more?

AICPCU – Associate in Risk Management (ARM) – http://www.aicpcu.org/comet/programs/arm/arm.htm

SIRA – Society of Information Risk Analysts – http://www.societyinforisk.org


Wonder Twin Powers Activate…

March 7, 2013

…form of risk professional. I really miss blogging. The last year or so has been a complete gaggle from a relocation and time-management perspective. So naturally, discretionary activities – like blogging – take a back seat. I want to share a few quick thoughts around the topic of transitioning from a pure information technology / information security mindset to a risk management professional mindset.

1. Embrace the Gray Space. Information technology is all about bits, bytes, ones and zeros. Things either work or don’t work; it is either black or white, good or bad – you get the point. In the discipline of risk management, we are interested in everything between the two extremes. It is within this space that there is information to allow decision makers to make better-informed decisions.

2. Embrace Uncertainty. Intuitively, the concept of uncertainty is contrary to a lot of information technology concepts. Foundational risk concepts revolve around understanding and managing uncertainty and infusing it into our analysis and conversations with decision makers. There is no reason why this cannot be done within information risk management programs as well. At first, it may feel awkward as an IT professional to admit to a leader that there is uncertainty inherent in some of the variables included in your analysis. However, what you will find – assuming you can clearly articulate your analysis – is that infusing the topic of uncertainty into your conversations and analysis has indirect benefits. Such an approach implies rigor and maturity, and it builds confidence with the decision maker.

3. Find New Friends. Notice I did not type find different friends. There is an old adage that goes something to the effect of “you are who you surround yourself with”. Let me change this up: “you are who you are learning from”. You want to learn risk management? Immerse yourself in non-IT risk management knowledge sources, learn centuries-old principles of risk management and then begin applying what you have learned to the information technology / information security problem space. Here are just a few places to begin:

a. https://www.societyinforisk.org/
b. Risk Management Magazine
c. The Risk Management Society
d. Property & Casualty – Enterprise Risk Management

4. Change Your Thinking. This is going to sound heretical but bear with me. Stop thinking like an IT professional and begin thinking like a business and risk management professional. Identify and follow the money trails for the various risk management problem spaces you are dealing with. Think like a commercial insurer. An entire industry exists to reduce the uncertainty associated with technology-related operational risk – when bad things happen. Thus, learn how commercial insurers think so you can manage risk more effectively without having to overspend on third-party risk financing products – and so you can manage risk in a way that ties back to the financials rather than feelings and emotions. This is why I am so on board with the AICPCU’s Associate in Risk Management (ARM) professional designation. You can also check out the FAIR risk measurement methodology, which is very useful for associating loss forms with adverse events and can help tell the story around financial consequences.

5. Don’t Die On That Hill. I have to thank my new boss for this advice. Choose your risk management battles wisely, and in the heat of the conversation ask yourself if you need to die on this hill. Not all of our conversations with decision makers, leaders or even between ourselves – as dear colleagues – are easy. It is way too easy for passion to get in the way of progress and influence. Often, if you find yourself “on the hill” asking if you need to die, something has gone terribly wrong. Instead of dying and ruining a long-term relationship, take a few steps back, get more information that will help the situation, regroup and attack again. This is an example of being a quiet professional.

That is all for now. Take care.


The AICPCU ‘Associate in Risk Management’ (ARM)

September 14, 2012

A year or so ago I stumbled upon the ARM designation that is administered through the AICPCU, or ‘the Institutes’ for short. What attracted me then to the designation was that it appeared to be a comprehensive approach to performing a risk assessment for scenarios that result in some form of business liability. Unfortunately, I did not start pursuing the designation until July 2012. The base designation consists of passing three tests on the topics of ‘risk assessment’, ‘risk control’ and ‘risk financing’. In addition, there are a few other tests which allow one to extend the designation to include disciplines such as ‘risk management for public entities’ and ‘enterprise risk management’.

I am about two months into my ARM journey and just passed the ARM-54 ‘Risk Assessment’ test. I wanted to share some perspective on the curriculum itself and some differentiators when compared to some other ‘risk assessment’ and ‘risk analysis / risk measurement’ frameworks.

1. Proven Approach. Insurance and risk management practices have been around for centuries. Insurance carriers, especially those who write commercial insurance products, are very skilled at identifying and understanding the various loss exposures businesses face. Within the information risk management and operational risk management space, many of the loss exposures we care about and look for are the same ones that insurance carriers may look for when they assess a business for business risk and hazard risk so they can create a business insurance policy. In other words, the ‘so what’ associated with the bad things we and insurance carriers care about is essentially a business liability that we want to manage. Our problem space, skills and risk treatment options may be slightly different, but the goal of our efforts is the same: risk management.

2. Comprehensive. The ARM-54 course alone covers an enormous amount of information. The material easily encompasses the high level learning objectives of six college undergraduate courses I have taken in the last few years:

– Insurance and Risk Management
– Commercial Insurance
– Statistics
– Business Law
– Calculus (Business / Finance Problem Analysis / Calculations)
– Business Finance

The test for ARM-54 was no walk in the park. Even though I passed on the first attempt, I short-changed myself on some of the objectives, which caused a little bit of panic on my part. The questions were well written, and quite a few of them forced you to understand the problem context so you could choose the best answer.

3. ‘Risk Management Value Chain’. Some of the following thoughts are the largest selling points of this designation compared to other IT risk assessment frameworks, IT risk analysis frameworks and IT risk certifications / designations. The ARM curriculum connects the dots between risk assessment activities, risk management decisions and the financial implications of those decisions at various levels of abstraction. This is where existing IT-centric risk assessment / analysis frameworks fall short – they are either too narrow in focus, do not incorporate business context, are not practical to execute or, in some cases, are not useful at all in helping someone or a business manage risk.

4. Cost Effective. For between $300 and $500 per ARM course, one can get some amazing reference material and pay for the test. Compare that to the cost of six university courses (between $6K and $9K) or the cost of one formal risk measurement course (~$1K). I am convinced that any risk management professional can begin applying concepts from the ARM-54 text within hours of having been introduced to it. So the cost of the textbooks alone (~$100, give or take) is justified even if you do not take the test(s).

5. Learn How To Fish. Finally, I think it is worth noting that there is nothing proprietary about the objectives and concepts presented in the ARM-54 ‘Risk Assessment’ curriculum. Any statistical probability calculations or mathematical finance problems are exactly that – good ole math and probability calculations. In addition, there is nothing proprietary about the methods or definitions presented as they relate to risk assessments or risk management proper. This is an important selling point to me because there are many information risk management practitioners who are begging for curricula or training such as ARM, where they can begin applying what they are learning without being dependent on proprietary tools, proprietary calculations or a paid license to use a proprietary framework.

In closing, the ARM-54 curriculum is a very comprehensive risk management curriculum that establishes business context, introduces proven risk assessment methods, and reinforces sound risk management principles. In my opinion, it is very practical for the information / operational risk management professional – especially those who are new to risk management or looking for a non-IT- or non-security-biased approach to risk management – regardless of the industry you work in.

So there you have it. I am really psyched about this designation and the benefits I am already realizing in my job as a Sr. Risk Advisor for a Fortune 200 financial services firm. I wish I had pursued this designation two years ago, but I am optimistic that I will make up for lost time and deliver tangible business value very quickly.


Assurance vs. Risk Management

August 29, 2012

One of my current hot buttons is the over-emphasis of assurance with regard to risk management. I was recently given visibility into a risk management framework where ‘management assurance’ was listed as the goal of the framework. However, the framework did not allow for management to actually manage risk.

Recently at BSidesLA I attempted to reduce the definitions of risk and ‘risk management’ down to fundamental attributes because there are so many different – and in a lot of cases contextually valid – definitions of risk.

Risk: Something that can happen that can result in loss. It is about the frequency of events that can have an adverse impact on our time, resources and, of course, our money.

Risk Management: Activities that allow us to reduce our uncertainty about risk(s) so we can make good trade-off decisions.

So how does this tie into assurance? The shortcoming with an assurance-centric approach to risk management is that assurance IMPLIES 100% certainty that all risks are known and that all identified controls are comprehensive and effective. An assurance-centric approach also implies that a control gap, control failure or some other issue HAS to be mitigated so management can have FULL assurance regarding their risk management posture.

Where risk management comes into play is when management does not require 100% assurance because there may not be adequate benefit to their span of control or to the organization proper. Thus, robust risk management frameworks need to have a management response process – i.e. risk treatment decisions – for when issues or gaps are identified. A management response and risk treatment decision process has a few benefits:

1. It promotes transparency and accountability of management’s decisions regarding their risk management mindset (tolerance, appetite, etc.).

2. It empowers management to make the best business decision (think trade-off) given the information (containing elements of uncertainty) provided to them.

3. It potentially allows organizations to better understand the ‘total cost of risk’ (TCoR) relative to other operational costs associated with the business.

So here are the take-aways:

1. Assurance does not always equate to effective risk management.

2. Effective risk management can facilitate levels of assurance and confidence, as well as one’s understanding of uncertainty regarding the loss exposures one faces.

3. Empowering and enabling management to make effective risk treatment decisions can provide management a level of assurance that they are running their business the way they deem fit.


Heat Map Love – R Style

January 20, 2012

Over the last several years, not a month has gone by where I have not heard someone mention R – with regard to risk analysis or risk modeling – either in discussion or on a mailing list. If you do not know what R is, take a few minutes to read about it at the project’s main site. Simply put, R is a free software environment for statistical computing and graphics. Most of my quantitative modeling and analysis has been strictly Excel-based, which to date has been more than sufficient for my needs. However, Excel is not the ‘end-all-be-all’ tool. Excel does not contain every statistical distribution that risk practitioners may need to work with, there is no native Monte Carlo engine and it has graphing limitations short of purchasing third-party add-ons (advanced charts, granular configuration of graphs, etc.).

Thanks to some industry peer prodding (Jay Jacobs of Verizon’s Risk Intelligence team, and Alex Hutton suggesting that ‘Numbers’ is a superior tool for visualizations), I finally bit the bullet, downloaded and then installed R. For those completely new to R, you have to realize that R is a platform to build amazing things upon. It is very command-line-like in nature: you type in instructions and it executes them. I like this approach because you are forced to learn the R language and syntax, and thus, in the end, you will probably understand your data and resulting analysis much better.

One of the first graphics I wanted to explore with R was heat maps. At first, I was thinking of a standard risk management heat map: a 5×5 matrix with issues plotted on the matrix relative to frequency and magnitude. However, when I started searching Google for ‘R heat map’, a similar yet different style of heat map – referred to as a cluster heat map – was first returned in the search results. A cluster heat map is useful for comparing data elements in a matrix against each other, depending on how your data is laid out. It is very visual in nature and allows the reader to quickly zero in on data elements or visual information of importance. From an information risk management perspective, if we have quantitative risk information and some metadata, we can begin a discussion with management by leveraging a heat map visualization. If additional information is needed as to why there are dark areas, then we can have the discussion about the underlying quantitative data. Thus, I decided to build a cluster heat map in R.

I referenced three blogs to guide my efforts – they can be found here, here and here. What I am writing here is in no way a complete copy-and-paste of their content, because I provide additional details on some steps that generated errors for me and in some cases took hours to figure out. This is not unexpected given the differences in data sets.

Let’s do it.

1.    Download and install R. After installation, start an R session. The version of R used for this post is 2.14.0. You can check your version by typing version at the command prompt and pressing ENTER.

2.    You will need to download and install the ggplot2 package / library. Do this through the R GUI by referencing an online CRAN repository (Packages -> Install packages…). This method seems to be cleaner than downloading a package to your hard disk and then telling R to install it. In addition, if you reference an online repository, it will also grab any dependent packages at the same time. You can learn more about ggplot2 here.

3.    Once you have installed the ggplot2 package, we have to load it into our current R workspace.

> library(ggplot2)
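Note: depending on your ggplot2 version (an assumption about your environment, not a step from my original workflow), the helper functions used later in this post – melt(), ddply() and rescale() – may not be attached automatically when ggplot2 loads. If R reports that it cannot find them, loading the packages that provide them should resolve it:

> library(reshape2)  # provides melt()
> library(plyr)      # provides ddply()
> library(scales)    # provides rescale()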

4.    Next, we are going to import data to work with in R. Download ‘risktical_csv1.csv’ to your hard disk and execute the following command. Change the file path to match the file path for where you saved the file to.

risk <- read.csv("C:/temph/risktical_csv1.csv", sep=",", check.names=FALSE)

a.    We are telling R to import a Comma Separated Value file and assign it to a variable called ‘risk’.
b.    Read.csv is the method or function type of import.
c.    Notice that the slashes in the file name are opposite of what they normally would be when working with other common Windows-based applications.
d.    ‘sep=”,”’ tells R what character is used to separate values within the data set.
e.    ‘check.names=FALSE’ tells R not to check the column headers for syntactic validity. By default, if a header starts with a number rather than a letter, R will prepend an X to it – we don’t want that, based on the data set we are using.
f.    Once you hit enter, you can type ‘risk’ and hit enter again. The data from the file will be displayed on the screen.

5.    Now we need to ‘shape’ the data. The ggplot graphing function we want to use cannot consume the data as it currently is, so we are going to reformat the data first. The ‘melt’ function helps us accomplish this.

risk.m <- melt(risk)

a.    We are telling R to use the melt function against the ‘risk’ variable. Then we are going to take the output from melt and create a new variable called risk.m.
b.    Melt rearranges the data elements. Type ‘help(melt)’ for more information.
c.    After you hit enter, you can type ‘risk.m’ and hit enter again. Notice the way the data is displayed compared to the data prior to ‘melting’ (variable ‘risk’).

6.    Next, we have to rescale our numerical values so we can know how to shade any given section of our heat map. The higher the numerical value within a series of data, the darker the color or shade that tile of the heat map should be. The ‘ddply’ function helps us accomplish the rescaling; type ‘help(ddply)’ for more information.

risk.m <- ddply(risk.m, .(variable), transform, rescale = rescale(value), reorder=FALSE)

a.    We are telling R to execute the ‘ddply’ function against the risk.m variable.
b.    We are also passing some arguments to ‘ddply’ telling it to transform and reshape the numerical values. The result of this command produces a new column of values between 0 and 1.
c.    Finally, we pass an argument to ‘ddply’ not to reorder any rows.
d.    After you hit enter, you can type ‘risk.m’ and hit enter again and observe changes to the data elements; there should be two new columns of data.

7.    We are now ready to plot our heat map.

(p <- ggplot(risk.m, aes(variable, BU.Name)) + geom_tile(aes(fill = rescale), colour = "grey20") + scale_fill_gradient(low = "white", high = "red"))

a.    This command will produce a very crude looking heat map plot.
b.    The plot itself is assigned to a variable called p
c.    ‘scale_fill_gradient’ is the argument that associates color shading to the numerical values we rescaled in step 6. The higher the rescaling value – the darker the shading.
d.    The ‘aes’ function of ggplot is related to aesthetics. You can type in ‘help(aes)’ to learn about the various ‘aes’ arguments.

8.    Before we tidy up the plot, let’s set a variable that we will use in formatting axis values in step 9.

base_size <- 9

9.    Now we are going to tidy up the plot. There is a lot going on here.

p + theme_grey(base_size = base_size) + labs(x = "", y = "") + scale_x_discrete(expand = c(0, 0)) + scale_y_discrete(expand = c(0, 0)) + opts(legend.position = "none", axis.ticks = theme_blank(), axis.text.x = theme_text(size = base_size * 0.8, angle = -90, hjust = 0, colour = "black"), axis.text.y = theme_text(size = base_size * 0.8, angle = 0, hjust = 0, colour = "black"))

a.    ‘labs(x = "", y = "")’ removes the axis labels.
b.    ‘legend.position = "none"’ gets rid of the scaling legend.
c.    ‘axis.text.x = theme_text(size = base_size * 0.8, angle = -90)’ sets the X axis text size as well as its orientation.
d.    The heat map should look like the image below.
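A hedged note for readers on newer ggplot2 releases (an assumption about your environment, not part of the original workflow): opts(), theme_text() and theme_blank() have since been replaced by theme(), element_text() and element_blank(), so the equivalent of step 9 looks roughly like this:

# Same tidy-up as step 9, expressed with the newer theme()/element_*() API.
p + theme_grey(base_size = base_size) + labs(x = "", y = "") +
  scale_x_discrete(expand = c(0, 0)) + scale_y_discrete(expand = c(0, 0)) +
  theme(legend.position = "none",
        axis.ticks = element_blank(),
        axis.text.x = element_text(size = base_size * 0.8, angle = -90, hjust = 0, colour = "black"),
        axis.text.y = element_text(size = base_size * 0.8, angle = 0, hjust = 0, colour = "black"))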

A few final notes:

1.    The color shading is performed within each series of data, vertically. Thus, in the heat map we have generated, the color for any given tile is relative to the tiles above and below it – IN THE SAME COLUMN – or in our case, for a given ISO 2700X policy section.

2.    If we transpose our original data set – risktical_csv2 – and apply the same commands, with the exception of replacing BU.Name with Policy in our initial ggplot command (step 7), we should get a heat map that looks like the one below.

3.    In this heat map, we can quickly determine key areas of exposure for all 36 of our fictional business units relative to ISO 2700X. For example, most of BU3’s exposure is related to Compliance, followed by Organizational Security Policy and Access Control. If the executive in that business unit wanted more granular information in terms of dollar value exposure, we could share that information with them.

So there you have it! A quick R tutorial on developing a cluster heat map for information risk management purposes. I look forward to learning more about R and leveraging it to analyze and visualize data in unique and thought-provoking ways. As always, feel free to leave comments!


Personal Risk Management

November 4, 2011

Somewhere between self-improvement, the feedback process, perception management and total quality management (TQM) is a lesson to be learned and an opportunity for introspection. I want [need] to document a few thoughts about the intersection of these concepts based on recent personal and professional experiences.

Self-Improvement. At some point while serving in the Marine Corps, it became very obvious that there were three performance paths: be a bad performer and let the system make your life a living heck; be an average performer and let the system carry you along; or be a stellar performer and push the system to its limits and possibly change it. I have always chosen to chase after stellar, and it has worked pretty well for me over the years. However, in some professions, to maintain stellar status you have to constantly be seeking self-improvement.

Feedback. The term feedback means different things depending on the context in which it is being used. I find the act of feedback to be challenging both on the giving end and on the receiving end – especially when the feedback is not complimentary. I have had both great and absolutely horrendous experiences as an actor in both roles. The reality is that having feedback mechanisms in place, whether formal or informal, is critical – regardless of the merit of the feedback or how the feedback was communicated. More on this later when I attempt to tie all of this together.

Perception Management. Perception is reality to most people regardless of the facts. Anyone that is actively managing their career or personal life probably cares about perception. Furthermore, we probably want to be in control of how people perceive our actions, thoughts, attitudes and even mannerisms – lest it be established by others.

Total Quality Management. My current school studies revolve around operations management – specifically quality improvement, TQM, Six Sigma, etc. There are concepts within TQM that can be applied to various dimensions of our lives: personal, professional, ethical, moral, giving, etc. Without going down a rabbit hole, I am convinced that quality improvement concepts allow us to construct guard rails (control limits) for the aforementioned dimensions.

So how does all of this tie together?

If you are serious about self-improvement and managing perception, you have to embrace feedback and consider whether you are approaching a quality limit when a feedback opportunity presents itself (with me as the recipient). You may not agree with the merit of the feedback or with the delivery mechanism, but you have to listen to – not just hear – what is being communicated. This is really hard to do sometimes, and how we react to the feedback experience can destroy relationships and further erode trust. When it comes to constructive criticism, if someone is taking the time to give it – regardless of its validity – could this possibly be an indicator that we are approaching some of our quality limits, whether we have defined them or not?

For example, here are two commonly used rules for determining if a process is out of control (a minimal sketch of the first rule follows the list):
1.    A single point outside the control limits.
2.    Obvious consistent or persistent patterns that suggest that there is something unusual about the data.
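As a minimal R sketch of rule 1 – using made-up monthly counts of critical feedback, purely for illustration – a simple c-chart flags any point that falls outside the control limits:

# Minimal sketch with assumed data: pieces of critical feedback per month,
# checked against c-chart control limits (c-bar +/- 3 * sqrt(c-bar)).
signals <- c(1, 0, 2, 1, 1, 3, 0, 1, 9, 1)   # hypothetical monthly counts
cbar    <- mean(signals)
ucl     <- cbar + 3 * sqrt(cbar)             # upper control limit
lcl     <- max(0, cbar - 3 * sqrt(cbar))     # lower control limit (floored at 0)
which(signals > ucl | signals < lcl)         # rule 1: month 9 falls outside the limits

A single flagged point like that does not prove anything on its own, but it is exactly the kind of signal that should prompt the introspection described next.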

Keeping these two rules in mind, we can go through this exercise of introspection. Such an exercise requires one to put pride on the shelf, set aside emotions, and really try to flush out the opportunity for self-improvement. And if all of this can be done with a redemptive mindset, better yet. At the end of such an exercise, there should always be one or more questions we strive to answer:

1.    Is there something minor I can improve on? Is a slight adjustment needed to pull me back from the guard rails or better manage perception?
2.    Is there something major going on that calls for a massive adjustment? Is there really a fire that is producing all this feedback smoke?
3.    Was I a good partner in the feedback process? Did I listen? Did I have a redemptive mindset?

Hear me, folks – this topic and what I have outlined is not something I consider myself to be a stellar example of. However, I do care about self-improvement, managing my perception and adhering to quality in the execution of my responsibilities, and I will strive to keep in mind what I have outlined moving forward.

That’s it.

