Most projects clearly define their objectives, work scope, budget, and schedule but, all too often, the environment and context in which the project exists are neither fully understood nor clearly defined. This is a major source of risk in project management and execution. While a clearly defined set of objectives, work scope, budget, and schedule is essential to planning, implementing, and controlling a project, if the project management team do not fully understand their project environment and context, the project will, in all likelihood, be doomed to failure. This is because project environment and context drive performance as much as, if not more than, a clearly defined work scope, budget, or schedule.
No two projects are the same, even if their objectives and
work scope are. This is because even projects with identical objectives and
work scopes will inevitably be executed in different environments. As such, the
environmental factors are often the things that determine the success or
failure of a project.
When evaluating the environmental and contextual shaping
factors that differentiate one project from the next, it is important to consider
the following:
What is the geographical location of the project? – This will help identify potential project execution constraints and risk sources, such as:
Local weather/climate extremes
Geo-technical and topographical issues
Site access constraints
Utilities and local service availability
Environmental sensitivities
Human and material resource availability
What is the political environment in which the project exists? – This will dictate how the project management team may need to engage with their stakeholders, such as:
Dealing with bribery and corruption issues
Managing differences between local and national policies
Adapting to sudden changes in political power or influence
Resolving conflicts between differing political factions
How will local regulatory and legal requirements affect the project? – This may impact project execution performance by placing conditions on certain parameters, such as:
Prioritisation of standards
Local content requirements
Adherence to site-specific and local environmental regulations
Adherence to corporate Codes of Conduct
Restrictions on human and material resource availability
What is the cultural and religious environment in which the project exists? – This will dictate how the project management team may need to adjust the project execution plan, by taking into account:
Personnel accommodation and work facilities
Local holidays and acceptable working hours
Restricted or protected areas
Security considerations and requirements
What technological tools, skills, and experience are available to the project? – Availability of technological know-how can affect both the objectives and values of a project through its impact on:
Design complexity
Human resource availability
Speed and efficiency in project execution
Adherence to scope and standards
Reliability and operability of the end-product
Safety in project execution and operation
What are the market conditions in which the project exists? – Market conditions are always dynamic and, depending on the overriding economic environment, can be either beneficial or detrimental to the performance of a project. Varying market conditions may affect a project in the following ways:
Ability to finance the project
Human and material resource availability
Changes to project scope and/or standards
Who are the project stakeholders, what influence do they have, and who is controlling the project? – A project is inevitably affected by the influences exerted on it by its controlling organisation and other stakeholders. The extent of this influence is generally determined by the following stakeholder factors:
Experience
Culture
Style
Structure
Maturity
Risk Attitude
Interests and Priorities
These are just a few of the environmental
and contextual shaping factors that need to be considered when developing a
project management or execution plan. The importance of fully understanding
project environment and context should never be underestimated, as this can help
prevent even the most technically well-defined projects from falling into
disarray.
SMART risk response planning is an essential aspect of project risk management. Most of us will have come across the S.M.A.R.T acronym at some point during our working careers. More often than not, it is used for setting goals and objectives in career development plans and other personal performance measurement tasks.
S.M.A.R.T stands for:
Specific
Measurable
Achievable
Realistic
Time-Bound
I have also seen the “R” in S.M.A.R.T stand for “Relevant”, but I prefer to use it as meaning “Realistic”, as I consider the “S” in “Specific” to include relevance.
When it comes to managing risks, the S.M.A.R.T principle is an especially useful technique for ensuring the effectiveness of proposed risk response plans. It is also a sound basis for developing effective project risk management software: an application that applies the S.M.A.R.T rule can be a genuine risk management tool (see our previous blog post: Project Risk Management Software – Does it actually help?) rather than just a risk identification and status reporting tool.
Here, the five components that make up the S.M.A.R.T rule may be applied in risk response planning as follows:
Specific: Ensure the response plan is specific and relevant to the nature and severity of the risk.
Measurable: Ensure the effectiveness of the response plan can be measured in such a way as to be able to accurately revise the ranking and status of the risk after the response plan has been implemented.
Achievable: Ensure the response plan is achievable in the sense that it can be successfully implemented and is not beyond the means of the project budget or resources.
Realistic: Ensure the response plan is realistic in the sense that its results can either be guaranteed or, at the very least, it stands a good chance of succeeding.
Time-Bound: Ensure there is a definitive date identified by when the response plan needs to be implemented in order for it to be successful.
To put the SMART risk response planning principle into context, let us consider the following example:
Due to an upswing in the economy and project related activities, there is a risk that the project team may lose some of its key personnel to other projects or organisations. In an attempt to mitigate the probability of occurrence of this risk event, the project manager proposes to hold a corporate golf weekend away in the Bahamas, midway through the project, to boost team spirit and encourage continuity in project personnel.
Is this risk response plan S.M.A.R.T?
Firstly, is it specific? No, as it does not directly address either the source of the risk or its potential impacts, beyond trying to create general harmony among the project team members, irrespective of whether they are at risk of leaving the project or not.
Is it measurable? Not at all. Even if the key personnel in question were all golf addicts yearning for a weekend away, it is not possible to measure the impact this may have on their loyalties to remain with the project.
Is it achievable? Well, that really depends on the project budget, location, and schedule. If the project is being run in, or very near to, the Bahamas and there is a dedicated recreation officer assigned to the project, who can arrange such events at a time that is neither disruptive to the project nor over-booked at the golf resort, then perhaps. But, in any other circumstances, this does not seem like an achievable plan.
Is it realistic? Not unless the project manager happens to know that every key member of the project team is a golf addict, who would value a weekend away in the Bahamas more than an offer from a competing project or organisation.
Is it time-bound? No, it is not. Even though the project manager proposed a specific time and duration for the event, this does not align with the timing of the risk event. If market conditions are favourable, the project could lose team members at any time before, or after, this response plan has been implemented.
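The checklist logic in the walkthrough above can be sketched in code. This is a minimal illustration only; the class, field names, and the pass/fail flags for the golf-weekend plan are simply a restatement of the assessment above:

```python
from dataclasses import dataclass

@dataclass
class ResponsePlan:
    # Hypothetical structure: one flag per S.M.A.R.T criterion.
    description: str
    specific: bool      # directly addresses the risk source and/or impact
    measurable: bool    # effect on risk ranking/status can be measured
    achievable: bool    # within the project budget and resources
    realistic: bool     # stands a good chance of succeeding
    time_bound: bool    # implementation date aligns with the risk event

def smart_gaps(plan: ResponsePlan) -> list[str]:
    """Return the S.M.A.R.T criteria the plan fails, for revision."""
    criteria = {
        "Specific": plan.specific,
        "Measurable": plan.measurable,
        "Achievable": plan.achievable,
        "Realistic": plan.realistic,
        "Time-Bound": plan.time_bound,
    }
    return [name for name, ok in criteria.items() if not ok]

golf_weekend = ResponsePlan("Corporate golf weekend in the Bahamas",
                            specific=False, measurable=False,
                            achievable=False, realistic=False,
                            time_bound=False)
print(smart_gaps(golf_weekend))  # the plan fails on all five criteria
```

A plan that fails any criterion goes back for revision of exactly those components, rather than being discarded wholesale.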
Consider instead a response plan that proposes:
A tailored incentive scheme to be implemented at the start of the project, to encourage all key personnel to remain on the project until their scope is complete.
Assigning competent deputies to all key personnel within four weeks of project commencement, who are able to take over their duties with minimal disruption if the situation requires.
This is a specific two-pronged response plan that directly addresses both the risk source and impact. It is measurable in that it involves direct engagement with the personnel-at-risk, to establish a quantifiable mitigation cost and level of risk assurance. It is achievable on condition that the project has set aside sufficient contingency, and/or can justify the additional cost against project benefit. It is realistic in that the proposed response plan addresses the risk in a manner that effectively minimises the threat and reduces the impact. Finally, it is time-bound in that the timing for implementation of both mitigation measures is such that the threat will be mitigated before the risk event occurs, and the impact will be mitigated before any personnel who have resigned work off their 4-week notice period.
By ensuring all risk response plans are S.M.A.R.T, the effectiveness of these plans can be continuously analysed and measured. If any particular mitigation measure or response plan does not achieve its objective, each of its S.M.A.R.T components can be assessed, revised, removed or replaced, depending on where the problem lies.
For more information about our project risk management services and software, or if you just want to express your own views on the subject, please feel free to get in touch via our “Contact Us” page.
Having been involved in Project Management (and hence Project Risk Management by association) for more than 25 years, I have encountered a fair amount of scepticism when it comes to considering the benefits of Project Risk Management software. The underlying question being, “Does it actually help?”.
The short answer is, “That depends”.
The longer answer is that it depends on several factors, including:
Whether the software you are using is designed for the type of risk management approach you are taking.
Whether you are looking for the software to provide you with Qualitative Risk Analysis, Quantitative Risk Analysis, or both.
Whether the software you are using is an Enterprise Risk Management (ERM) application with a built-in Project Risk Management module, or is a dedicated Project Risk Management application.
Whether the software actually does what it purports to do. That is, if it is marketed as a “Project Risk Management” application, does it actually address the “Management of Project Risks”, or is it just designed to be a risk identification and status reporting tool?
Whether your project risk management team are fully trained in the use of, and committed to applying, the software and all its capabilities.
The bottom line is, a Project Risk Management application will only help manage project risks if it actually does what you want it to do, and the people using the application are both committed and knowledgeable in the use of the software.
I have worked with numerous companies that have spent hundreds of thousands of dollars on Enterprise Risk Management systems, expecting these systems to be able to manage every conceivable risk that the company is exposed to. They subsequently discover that the system either provides limited effective “Risk Management” capabilities, or that the users find the application to be too unwieldy to be applied practically or efficiently to the risks they are dealing with. ERM systems can be very useful and powerful tools for experienced risk analysts and corporate risk management teams but, all too often, they fall short of the mark when it comes to the practical management of the day-to-day risks which every project is exposed to.
Many people have told me that a Project Risk Management application is only helpful if it can quantitatively analyse risks with a reliable degree of accuracy to enable effective decisions to be made. While this may certainly be a critical attribute in the management of risks where decision making relies on the ability of a system to accurately predict both the probability of risk event occurrence and the impacts of the risk, it is not a critical requirement in the management of most of the day-to-day risks encountered when managing a project.
Project risks vary widely, and the first step in any risk analysis process is to analyse the risk qualitatively (see our previous blog post titled: “Qualitative vs. Quantitative Risk Analysis: What’s the difference?”). This qualitative risk analysis will determine whether there is enough information known about the risk to effectively manage it without undergoing any further quantitative analysis. In most projects, the types of risks that require quantitative risk analysis are well known in advance. These will include risks associated with developing cost & schedule estimates, as well as hazardous operation related risks. As such, a project team will normally apply specialist resources, using specialist tools, to quantitatively analyse these types of risks as a mandatory process in the execution phase of a project.

However, the type of quantitative risk analysis tool used still needs to be appropriate to the type of risk being analysed. Applying Monte Carlo analysis (see our previous blog post titled: “Monte Carlo Simulation: How does it work?”) is common practice when analysing risks associated with cost and schedule estimates, but the accuracy of any Monte Carlo analysis is heavily dependent on both the accuracy of the uncertainty ranges entered into the simulation and the number of cycles that the simulation is run for.

Monte Carlo simulation has a notoriously slow convergence rate, meaning you need to run the simulation over hundreds of thousands of cycles to get the accuracy of the outcome to within 0.5%. This is not really a problem for most applications with the computing power available today but, for hazardous operation risk analysis, where the predictive accuracy for Potential Loss of Life (PLL) is normally required to be within 0.01%, using Monte Carlo analysis is not a recommended approach.
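The slow convergence described above can be illustrated with a short simulation. The triangular duration estimate here is an assumed example, not project data; the point is simply that the estimation error shrinks roughly with the square root of the number of cycles, so each extra digit of accuracy costs around 100 times more cycles:

```python
import random
import statistics

# Assumed task duration estimate: best 4, most likely 5, worst 8 weeks.
# The true mean of a triangular distribution is (a + b + c) / 3.
true_mean = (4 + 5 + 8) / 3

random.seed(1)
for n in (100, 10_000, 1_000_000):
    samples = [random.triangular(4, 8, 5) for _ in range(n)]
    est = statistics.fmean(samples)
    print(f"{n:>9} cycles: estimate {est:.4f}, error {abs(est - true_mean):.4f}")
```

The error column shrinks by roughly a factor of ten for every hundred-fold increase in cycles, which is the 1/√N behaviour that makes Monte Carlo a poor fit where very tight predictive accuracy is demanded.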
On the other hand, dealing with the day-to-day risks experienced on any project can more often than not be managed quite effectively by only applying qualitative risk analysis methods. This does, however, require a clear and consistent set of qualitative analysis rules and definitions to be established and applied throughout the project. If a project does not have clearly defined risk acceptability threshold levels for each risk category, or is not able to apply a consistent approach in defining risk severity and manageability, then it will certainly not be able to effectively manage and control its risks. And this is where much of the scepticism about the usefulness of Project Risk Management software stems from. Many Risk Management software applications (be it Project, Business, Health & Safety, Financial or any other risk management application) are nothing more than risk identification and status reporting tools. Some may provide users with an “Expected Monetary Value” feature, which calculates the expected monetary impact value of a risk based on the likelihood of the risk event occurring. But how useful is this when the likelihood of risk occurrence is not empirically derived, or the impact cannot be accurately quantified in monetary terms? Our opinion is, "Not very" (see our previous blog post titled: “Expected Monetary Value – Where’s the Value?”).
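As a rough sketch of the distinction drawn above, the following contrasts a consistent qualitative scoring rule with a bare Expected Monetary Value calculation. The rating scales, categories, threshold values, and the example risk are all hypothetical:

```python
# Assumed acceptability thresholds per risk category (score = L x I).
ACCEPT_THRESHOLD = {"Cost": 9, "Schedule": 9, "Safety": 4}

def qualitative_rank(likelihood: int, impact: int) -> int:
    """Score a risk with likelihood and impact each rated 1 (low) to 5 (high)."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def is_acceptable(category: str, likelihood: int, impact: int) -> bool:
    """Apply the same threshold rule consistently across the project."""
    return qualitative_rank(likelihood, impact) <= ACCEPT_THRESHOLD[category]

def emv(probability: float, monetary_impact: float) -> float:
    """Expected Monetary Value: only as good as its two inputs."""
    return probability * monetary_impact

# A safety risk rated likelihood 2, impact 3 scores 6, above the
# Safety threshold of 4, so it is not acceptable and needs a response.
print(is_acceptable("Safety", likelihood=2, impact=3))  # False
print(emv(0.25, 200_000))  # 50000.0
```

The qualitative rule forces a decision (acceptable or not, per category), whereas the EMV figure on its own says nothing unless the probability is empirically derived and the impact genuinely reducible to money.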
Project Risk Management software can, and should, save projects time and money. But, most importantly, it needs to help preserve a project’s values and objectives by being an effective project risk management tool.
In pursuit of coming up with such a tool, we have recently updated our own in-house Project Risk Management application, which is now available as either a FREE or PRO version. Our FREE version is free-for-life, and allows subscribers to assign up to 5 active users with access rights at any level. Our PRO version allows subscribers to assign as many active users as they need, for as long as they need. This version also provides users with full access to all the other features and functionality of the software, which is partially restricted on the FREE version.
Now, we are by no means trying to claim that our software is the answer to every project’s risk management challenges, but we have strived to make the application as practical and useful as possible. It is a qualitative project risk management tool, so relies on applying consistent risk acceptability thresholds and definitions (for which default values have been pre-set, but can be changed to suit individual projects). The application can be applied to any type of project and, in the PRO version, data can be exported and integrated with universal ERM systems.
Should you wish to make use of this application, please feel free to download it from our software page. Any comments on how useful you find it, or any additional features you think it may require, would be most welcome and can either be posted on this page, or sent to us directly via our “Contact Us” page.
Delivering any project successfully is always a challenge but, in today’s competitive business environment, there is a growing demand for fast-track project delivery. That is, to deliver projects faster and cheaper than ever before, without compromising on quality or safety.
Most of us who have been involved in the delivery of projects will be familiar with what is commonly referred to as the “Project Triple Constraint”. This refers to the interlocking constraints of Time, Quality and Cost.
Generally, a project can achieve any two of these three constraints, but usually at the expense of the third. In other words, a project may be:
Cheap and Good (at the expense of Time),
Fast and Cheap (at the expense of Quality) or,
Fast and Good (at the expense of Cost)
Project Triple Constraint
But can a project ever be Fast, Cheap and Good?
In a world where the pace of business is getting faster and faster, and everything seems to be needed yesterday, there is growing pressure on Project Management Teams to deliver projects faster than ever before, and without compromising on safety or quality. Most stakeholders will agree that this cannot be done without an increase in cost, but there are some ways of achieving fast-track project delivery and still keeping the cost impact to a minimum.
The unmovable constraint on most projects is Quality. Of course, quality standards, which are determined primarily by the required levels of Safety, Reliability, and Performance, may vary from one project to another. However, once these standards have been established, they are generally cast in stone and not negotiable. Cost and Schedule, on the other hand, are constantly being haggled over. The client will always want their project to be delivered faster and cheaper, while the contractor will always want more time and money.
Some things have inevitably become faster over the years, none more so than computers and communications. Twenty years ago, having a dial-up modem capable of transmitting data at 56 kilobits/second was considered pretty fast, but in 2018 we complain if we don’t have a broadband connection capable of transmitting data at 50 – 100 megabits/second. But, just because computers and communication systems have got a lot faster, that doesn’t mean everything else can be done at the same breakneck speed. Improved speed of communication and computing aside, materials still need to be procured, parts need to be fabricated, modules assembled, systems tested and, most importantly, quality needs to be assured to meet the regulatory, corporate and project standards.
There are, however, ways to achieve fast-track project delivery without necessarily paying a premium for the faster delivery rate. Some of the best ways to trim a project schedule, without incurring additional cost, are as follows:
Spend more time planning
Making sure your project plan is as robust and detailed as possible before you start the project will help minimise the risk of schedule delays, and allow you to redeploy resources to critical path activities if, and when, other activities are completed ahead of schedule.
Prioritise long lead item identification
The earlier you can identify items that have long delivery lead times, the sooner you can commence the procurement process. In some circumstances, the lead times may still be unacceptably long, and you will need to look for alternative solutions. The sooner you know which items are potentially going to delay the project, the sooner you can start looking for alternatives.
Consider re-engineering the design
You may find that some materials or components are not readily available, and their delivery lead times cannot be guaranteed. If any of these items are on the schedule critical path, it may be worth considering what alternative materials or components are available, and re-engineering the design around these instead. Of course, in such cases, the re-design needs to ensure that the specified quality standards are maintained.
Form an integrated project delivery team
Many projects suffer delays because of the large number of organisational interfaces involved in the project, comprising: The Client, Project Management Team, Regulators, Consultants, Contractors, Sub-Contractors, and Vendors, to name but a few. By including key members of each of these organisations within an Integrated Project Management Team, many issues that would otherwise take months to resolve can potentially be resolved in days, and other issues may be resolved before they even occur.
Consider module builds over stick builds
Some systems and structures are constrained in how, when and where they can be built, and in such cases, projects are forced to build them in-situ, at specific times, using specialist equipment and often with limited access to the installation location. Wherever possible, projects should consider building modular plants, systems and structures, that can be constructed and assembled off-site, in controlled environments, and with full access to all parts, tools and equipment needed to complete the module as quickly and efficiently as possible.
Prioritise schedule by system criticality
One of the simplest ways to develop a project schedule is to prioritise its activities against material availability and constructability. That is, the materials and equipment that are available soonest get designed, built, and installed first. However, that is seldom the fastest way to deliver a project to operational start-up. The fastest route to operational start-up would, more often than not, be to prioritise the schedule by system criticality. That means, prioritising the completion of critical system activities, even if the materials and labour for the non-critical activities are available earlier. This is especially important when on-going work on non-critical system activities can delay the completion of work on critical system activities.
These are just a few of the methods that can be employed to help fast-track project delivery, without compromising on quality, or increasing cost.
This year we will be holding a 5-day training seminar titled “Delivering Fast-Track Projects Successfully”, hosted by PetroKnowledge in Dubai. This training program will address all aspects of fast-track project delivery, from strategizing and planning through to change control and hand-over. To find out more about this training program, please contact us here.
Following on from one of our earlier posts, where we looked at the difference between Qualitative and Quantitative Risk Analysis, this time we will look at another quantitative risk analysis method: Monte Carlo Risk Analysis, also known as Monte Carlo Simulation.
Monte Carlo Simulation is a technique used to provide a better degree of certainty on the probability of outcomes in financial, project management, cost, and other forecasting models.
The first step in quantifying any risk is to make certain assumptions about both the likelihood of risk event occurrence and the impacts of the risk, should it occur. Most of the time, these assumptions will be based on historical data, expert knowledge in the field, or past experience. At other times, they will be pure guess-work. Monte Carlo Simulation takes the guess-work out of predicting both the likelihood of risk event occurrence and the risk outcomes by randomly selecting values from within each range of uncertainty, repeating the calculation many times (hundreds of thousands to millions of cycles), and building a distribution of the results from which the probability of each outcome can be read.
As an example, let’s look at a typical project schedule. Project schedules are made up of multiple activities, many of which are interdependent on each other. If we consider three interdependent activities, each with its own estimated duration, the schedule may look like this:
Here, each task cannot start before the preceding one is complete, so the estimated duration for completion of all three tasks is 17 weeks. However, if we now consider a range of uncertainty in the duration of each task, the schedule will look like this:

Running a Monte Carlo simulation 100,000 times across each activity’s range of duration uncertainty, we end up with a distribution of outcomes which shows us that the most likely duration for completing all three tasks is 18 weeks, and not 17 as was initially estimated.
Monte Carlo Simulation - Most Likely Completion Time
Another way of looking at this, would be to consider the cumulative probability distribution curve. This gives us an indication of the probability of completing the activity within a certain time-frame.
Monte Carlo Cumulative Probability Distribution
From the cumulative probability curve shown above, we can establish that there is a 50% chance of completing the activity within 17 weeks, and a 90% chance of completing the activity within 20 weeks.
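A simulation of this kind can be sketched as follows. The three task duration ranges (minimum, most likely, maximum, in weeks) are assumed for illustration, chosen so the deterministic estimates sum to 17 weeks; the percentile figures it prints will therefore not exactly match those quoted above:

```python
import random
from statistics import quantiles

# Assumed (min, most likely, max) duration ranges for three sequential
# tasks; the "most likely" values sum to the deterministic 17 weeks.
TASKS = [(4, 5, 8), (5, 6, 9), (4, 6, 8)]
N = 100_000

random.seed(42)
totals = [sum(random.triangular(lo, hi, ml) for lo, ml, hi in TASKS)
          for _ in range(N)]

def p_within(weeks: float) -> float:
    """Cumulative probability of finishing within the given time."""
    return sum(t <= weeks for t in totals) / N

deciles = quantiles(totals, n=10)       # 9 cut points: P10 .. P90
p50, p90 = deciles[4], deciles[8]
print(f"P50 completion: {p50:.1f} weeks, P90: {p90:.1f} weeks")
print(f"Chance of finishing within 17 weeks: {p_within(17):.0%}")
```

Because each task’s uncertainty is skewed towards overrun, the simulated P50 lands above the 17-week deterministic estimate, which is exactly the effect the distribution plot above illustrates.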
We can therefore use Monte Carlo simulations in situations where experience or expert knowledge is lacking to give us a statistically based value on the probability of a certain event occurring, or the probability of the outcome of that event. However, this value will only be as accurate as the information provided to run the simulation. In other words, if your range of duration estimates for each activity is either too broad or inaccurate to start with, the value of the simulation output will, likewise, not be very accurate.
In our next post, as requested by one of our blog followers, we will look at the difference between Risk and Uncertainty.
For more information about our project risk management services and software, or if you just want to express your own views on the subject, please feel free to get in touch via our “Contact Us” page.
The process of risk prioritisation is affected by several factors, including: risk attitude, risk sensitivity, resource availability, risk severity, and risk manageability.
Risk Prioritisation by Attitude
An organisation’s risk attitude is made up of a combination of its risk appetite, risk tolerance and risk threshold. These three attributes are defined as:
Risk Appetite – The degree of uncertainty an entity is prepared to accept in pursuit of its objectives.
Risk Tolerance – The degree, amount, or volume of risk impact that an organisation or individual will withstand.
Risk Threshold – The level of uncertainty or impact at which a stakeholder will have a specific interest. Below the risk threshold, the stakeholder will accept the risk. Above the risk threshold, the stakeholder will not accept the risk.
If an organisation has a high risk appetite but low risk tolerance, it will tend to prioritise its risk responses around the anticipated level of the risk impacts, rather than the level of uncertainty in risk event occurrence. This may be due to the fact that the organisation’s business strategy is to operate in unstable, or high threat environments, where they are constantly exposed to the occurrence of risk events. In this case, the organisation will develop its risk response plan to prioritise the neutralisation (or optimisation, in the case of opportunity risks) of risk impacts rather than focussing on controlling the occurrence of risk events.
Conversely, an organisation with a low risk appetite, but high risk tolerance (a very unusual case!) will prioritise their risk responses by focussing on minimising the probability of risk event occurrence, and put less effort into controlling the risk impacts.
In both cases, the organisations’ risk thresholds will be defined by their respective risk appetite and risk tolerance levels. Risk attitude is also largely determined by the industry sector in which an organisation operates.
In the Oil & Gas and Mining industries, where personnel safety is a major factor, and human fatalities are known to occur on a relatively regular basis, the industry-accepted threshold level for loss of life in this sector is 1 × 10⁻³ per year, or 1 fatality every 1,000 years. Anything over this is considered to be unacceptably high. However, in the Security Services and Defence industries, human fatalities, although not desired, are accepted as part of the job and can be a relatively high-frequency occurrence. In these industries, the acceptable threshold for loss of life could be as high as 1 × 10⁻¹ per year, or 1 fatality every 10 years.
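Applied as code, the threshold figures quoted above become a simple acceptance test. The sector labels and the example frequency are illustrative only:

```python
# Annual fatality-frequency thresholds from the discussion above.
THRESHOLDS_PER_YEAR = {
    "Oil & Gas / Mining": 1e-3,   # 1 fatality every 1,000 years
    "Security / Defence": 1e-1,   # 1 fatality every 10 years
}

def within_threshold(sector: str, fatalities_per_year: float) -> bool:
    """Accept the risk only if it falls at or below the sector threshold."""
    return fatalities_per_year <= THRESHOLDS_PER_YEAR[sector]

# A predicted loss-of-life frequency of 1 per 500 years (2e-3 per year)
# exceeds the Oil & Gas threshold but sits well inside the Defence one.
print(within_threshold("Oil & Gas / Mining", 2e-3))   # False
print(within_threshold("Security / Defence", 2e-3))   # True
```

The same risk is therefore unacceptable in one sector and tolerable in another, which is the essence of prioritisation by attitude.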
In other industry sectors, personnel safety may not be a major driving factor in risk management, and risk thresholds will be defined instead by the types of risks and impacts prevalent in these sectors. Some of the main risk areas around which organisational risk attitudes and thresholds are defined include:
Finance
Health & Safety
Quality
Production/Performance
Environment
Social
Legal
Risk Prioritisation by Sensitivity
Sensitivity analysis is a method of determining which risks will have the most potential impact on a project. This is typically done by interrogating the uncertainty levels in each risk, and comparing them to the uncertainty levels of all other risks.
In doing so, one can determine the extent to which the uncertainty of a risk may affect the outcome of a project in relation to the uncertainty of all other risks.
Another way of looking at this is to consider sensitivity as a function of change in risk outcome with respect to change in risk input. This applies equally to the range of uncertainty in risk occurrence as it does to the range of uncertainty in risk impact.
In other words, the occurrence of a risk event may be highly sensitive to a set of conditions in one case, while the impacts of a risk may be highly sensitive to a different set of conditions in another case.
This leads to a number of options when prioritising risk by sensitivity.
In the case of risk event sensitivity, risks of this type will require further assessment to develop a better understanding of which conditions or variables have the greatest influence on the probability of risk event occurrence.
In the case of risk impact sensitivity, risks of this type will require the development and implementation of multiple response plans to control the conditions that have the greatest influence on the risk impacts.
Therefore, prioritisation of the types of action required (be it further assessment or implementation of response plans) depends on the type of sensitivity that the risk is subject to.
Where uncertainty ranges from negative to positive values, risks may be plotted on a Tornado diagram, where risks with the greatest uncertainty equate to being the least stable, while risks with the smallest uncertainty equate to being the most stable.
Risk Uncertainty Tornado Diagram
Where the variances in risk uncertainty reflect one type of risk outcome only (positive or negative, but not both) the risks can be plotted on a Pareto diagram by arranging the risks in descending order, from most sensitive to least sensitive.
Risk Sensitivity Pareto Diagram
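As a minimal sketch of the ordering step behind such a diagram, risks can be sorted by the width of their impact uncertainty range, widest first. The risk names and ranges below are purely illustrative assumptions:

```python
# Sketch of the ordering step behind a sensitivity Pareto diagram.
# Risk names and uncertainty ranges are illustrative, not from a real project.

def rank_by_sensitivity(risks):
    """Sort risks by the width of their impact uncertainty range, widest first."""
    return sorted(risks, key=lambda r: r["high"] - r["low"], reverse=True)

risks = [
    {"name": "Ground conditions", "low": 0.2, "high": 1.5},   # swing 1.3
    {"name": "Steel price",       "low": 0.5, "high": 0.9},   # swing 0.4
    {"name": "Permit delay",      "low": 0.1, "high": 2.0},   # swing 1.9
]

for r in rank_by_sensitivity(risks):
    print(r["name"], round(r["high"] - r["low"], 1))
```

The same sorted output feeds a Tornado diagram directly: each risk becomes one horizontal bar, with bar length equal to its swing.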
By way of illustration, consider a project where we need to determine which work packages have the greatest effect on the uncertainty in the total cost of the project.
Firstly, we need to estimate the uncertainty in the cost of each individual work package. Secondly, we determine the associations, or dependencies, between each pair of work packages.
The sensitivity of the uncertainty in the total project cost with respect to each work package is proportional to the combination of the activity uncertainties and the associations between activities. That is, the uncertainty in the total cost is affected not only by the uncertainty in each work package, but also by how much each work package affects, and is affected by, the others.
As an elementary example, the uncertainty in the cost of a construction project may be more sensitive to outdoor activities than to indoor activities, because bad weather can cause a number of outdoor activities to run over budget and over schedule simultaneously. Indoor activities, by contrast, are typically not tied so tightly to the weather.
By quantifying the relative sensitivities for all work packages, and sorting them from largest to smallest, we can identify those work packages with the largest sensitivities, which are those to which the project manager should give the highest priority. Note that the absolute values of the sensitivities have no importance here, as our only concern is with the relative values.
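The approach described above can be sketched as a simplified Monte Carlo model. This is an illustration only, not a full quantitative analysis: the work packages, cost spreads, and the shared "weather" factor that couples the outdoor packages are all assumptions. Each package's relative sensitivity is approximated by the correlation of its sampled cost with the sampled total cost:

```python
import random

random.seed(42)

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical work packages: (name, base cost, cost spread, weather-linked?)
packages = [
    ("Excavation", 100, 30, True),
    ("Roofing",     80, 25, True),
    ("Wiring",      60, 10, False),
    ("Painting",    40,  8, False),
]

N = 5000
samples = {name: [] for name, _, _, _ in packages}
totals = []
for _ in range(N):
    weather = random.gauss(0, 1)   # shared factor couples the outdoor packages
    total = 0.0
    for name, base, spread, outdoor in packages:
        noise = random.gauss(0, 1)
        shock = 0.8 * weather + 0.6 * noise if outdoor else noise
        cost = base + spread * shock
        samples[name].append(cost)
        total += cost
    totals.append(total)

# Sensitivity of the total cost to each package ~ correlation with the total
sens = {name: pearson(samples[name], totals) for name in samples}
for name, s in sorted(sens.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {s:.2f}")
```

With these assumed figures, the two weather-linked packages dominate the ranking even though their independent spreads are similar to the indoor packages, because the shared weather factor makes them move together. Only the relative ordering matters, as noted above.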
Risk Prioritisation by Resource Availability
This is not something that Risk Managers should be doing by choice but, sometimes, it is unavoidable and risks need to be prioritised in this way.
Prioritisation by resource availability should normally only occur in the event of assessment and/or control needing to be carried out by specialist resources, which are not readily available to the project.
This may include the use of human resources with specialist skills in assessing or controlling risks of a certain nature, or it may require the use of specialist materials or equipment needed to assess or control the risk.
In such cases, the affected risks will need to be placed on a monitoring list until such time that the required resources become available. If any changes in severity or manageability of the risk occur, the response plan may need to be revised to deal with these changes.
Risk Prioritisation by Severity
All things being equal (in terms of risk attitude and resource availability), risks are most often prioritised by their severity. That is, the higher the probability of the risk event occurring, and the higher its impact, the higher the risk response priority.
Determining the severity of a risk is initially done qualitatively. In most cases, this involves using a Probability/Impact matrix to define the severity ranking of a risk by multiplying its probability rank by its impact rank.
As discussed in our blog post “Risk Matrix Sizing: Does size really matter?”, the size and format of the risk matrix makes little difference (unless extremely small, or extremely large), as long as the risk ranking ranges and definitions are consistent.
Irrespective of whether you use a 4x4 or 5x5 matrix size, or whether the risk ranking values are formatted from bottom to top and left to right, or the other way around, the product of Probability and Impact will always tell you how severe a risk is in relation to other risks measured the same way.
Most of the time, this is sufficient information to establish the priority of the required risk response but, depending on the nature of the risk, it may be necessary to carry out further quantitative risk analysis in order to determine a more precise risk response priority level and appropriate response actions.
4x4 Risk Matrix
5x5 Risk Matrix
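The severity scoring behind such a matrix can be sketched as follows. The band boundaries below are illustrative assumptions; real projects define their own:

```python
# Severity = probability rank x impact rank on a hypothetical 5x5 matrix.
# The band boundaries below are illustrative; real projects define their own.

def severity(prob_rank, impact_rank):
    """Return the severity score and a coarse priority band for a 5x5 matrix."""
    score = prob_rank * impact_rank          # both ranks run 1 (lowest) to 5 (highest)
    if score >= 15:
        band = "High"
    elif score >= 6:
        band = "Medium"
    else:
        band = "Low"
    return score, band

print(severity(4, 5))   # -> (20, 'High')
print(severity(2, 2))   # -> (4, 'Low')
```

As the text notes, the absolute scores only mean something relative to other risks scored on the same matrix.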
Risk Prioritisation by Manageability
Risk manageability is a function of expected risk occurrence date and the number of response actions available to control the risk. This relationship can be depicted graphically by defining a range of risk occurrence dates and a range of available response actions which will, in turn, define the overall manageability of a risk.
For example, if a known risk event is expected to occur within the next eight weeks, most projects would consider this “Imminent”. And if there were only a limited number of response options available to control either the probability of the risk event occurring, or the impacts of the risk should it still occur (but not both), then the manageability of this risk would be considered “Very Low”. In this case, the priority of the risk needs to be raised so that whatever response options are available get implemented as a matter of urgency.
The lower the manageability of a risk, the higher its priority should be in terms of management urgency.
Risk Manageability Chart
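A minimal sketch of this kind of manageability classification is shown below. The eight-week “Imminent” horizon comes from the example above; the remaining bands and thresholds are assumptions for illustration:

```python
# Manageability as a function of time to risk occurrence and response options.
# The eight-week "Imminent" horizon comes from the text; other bands are assumptions.

def manageability(weeks_to_event, response_options):
    imminent = weeks_to_event <= 8
    if imminent and response_options <= 1:
        return "Very Low"          # urgent: escalate priority immediately
    if imminent or response_options <= 1:
        return "Low"
    return "Moderate" if response_options <= 3 else "High"

print(manageability(6, 1))    # -> Very Low
print(manageability(26, 5))   # -> High
```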
For more information about our project risk management services and software, or if you just want to express your own views on the subject, please feel free to get in touch via our “Contact Us” page.
Expected Monetary Value (EMV) is often used in risk analysis to provide an indication of the financial impact of a risk. But, in practical terms, how valuable is this technique?
The answer depends entirely on how the EMV calculation is applied in a risk scenario.
Expected Monetary Value is defined mathematically as: EMV = ∑ (Pᵢ × Iᵢ)
Where:
Pᵢ = Probability of occurrence of risk i (as a percentage)
Iᵢ = Impact of risk i in monetary terms
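Applied to a small, purely hypothetical risk register, the calculation looks like this:

```python
# EMV = sum of (probability x monetary impact) over a set of risks.
# The register below is entirely illustrative.

risks = [
    {"name": "Late steel delivery", "p": 0.30, "impact": 200_000},
    {"name": "Design rework",       "p": 0.10, "impact": 500_000},
    {"name": "Permit delay",        "p": 0.05, "impact": 1_000_000},
]

emv = sum(r["p"] * r["impact"] for r in risks)
print(f"Total EMV: ${emv:,.0f}")   # -> Total EMV: $160,000
```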
When applied to risks that have been qualitatively analysed, and used in isolation, EMV has little real value. The reason being that the probability of risk occurrence, and the impact value of qualitatively analysed risks, are both likely to contain relatively high degrees of uncertainty.
As EMV is calculated as the product of Probability and Impact, the uncertainties in the two components compound, so the uncertainty of the result is generally higher than that of either component on its own.
Risks that have been quantitatively analysed generally produce more accurate EMV results, but this depends predominantly on the type and accuracy of the quantitative analysis carried out, and whether it has been applied to the probability of risk occurrence, the risk impacts, or both.
In a nutshell, the more uncertainty there is in a risk's probability of occurrence and its impacts, the less accurate the EMV result.
EMV can be used as a relatively simple "first-pass" method to calculate the Contingency Reserve required for a project, where Contingency Reserve is an amount of money included within the overall project budget for use by the Project Manager in response to the occurrence of known risks.
However, in most high value projects, one cannot practically set the project contingency reserve at the total project risk EMV, as this would most likely drain the sponsoring organisation of its financial reserves.
On any one project, there may be several risks that have a very high impact value (upwards of 80% of the project CAPEX budget), albeit with a very low probability (less than a 1% chance of occurring). If you identified between ten and fifteen risks in this category, the EMV of these risks alone could equate to as much as 10% of the total project budget. If you then add the EMV of all other risks on the project, there is a good chance that the total EMV could approach, or even exceed, the project CAPEX budget.
Using risk EMV may be a good starting point in calculating contingency reserve, but it should by no means be the only defining method.
Expected Monetary Value and Decision Tree Analysis
Applying the Expected Monetary Value formula is probably most useful when assessing risks in conjunction with Decision Tree Analysis.
When used on its own, Decision Tree Analysis is essentially a qualitative means of deciding the best course of action whenever there are multiple options available, and a level of uncertainty surrounding each option.
However, using “best judgement” in deciding a course of action, without having any empirical data to back up your decision, is generally regarded as a last resort in project decision making. This is especially true where the outcomes of that decision can significantly affect the values and objectives of the project.
Applying the EMV technique to decision trees provides each “chance” (or uncertainty) node with the expected monetary impact of that uncertainty. This, in turn, helps to make a more informed overall decision once the EMVs of each “chance” node along a decision tree branch have been added up and compared against the EMVs of the other decision tree branches.
By way of example, let us consider a decision that needs to be taken by a commercial property owner who wants to increase their revenue in an existing commercial block. In this particular case, they need to decide whether to:
Maintain the block
Renovate the block
Re-build the block
Each of these options carries both a cost and a level of uncertainty around the impact of each option. Through market research, the property owner has established that there is a potential to increase the revenue of their block by up to $60 million over the 20-year land lease period that they hold. However, this potential is largely dependent on the quality of the outlets and volume of customers this will generate.
The cheapest option will be to just maintain the block and hope to attract more customers by keeping the block as clean and well maintained as possible. This option would cost $3 million over the 20-year lease period. However, the best result they could hope for in this case would be an overall increase in revenue of $20 million and their lowest expectation would be no increase in revenue.
The next option would be to renovate the block to improve its layout, access, and services. This option would cost $8 million in construction, $1 million in trade disruption and the same $3 million in maintenance, totalling $12 million. In this case the maximum expected increase in revenue would be $45 million and the minimum expected increase would be $25 million.
Their final option would be to rebuild the entire block to provide more space, better facilities and an overall improvement in the architecture and appeal of the block. This option would cost $17 million in construction, $6 million in trade disruption and would reduce their overall maintenance costs to $2 million, totalling $25 million.
In this case the maximum expected increase in revenue would be $60 million and the minimum expected increase would be $30 million.
Obviously, the owner would like to maximise the increase in their revenue, and doing a complete rebuild of the block would potentially give them this. But what are the chances that they will realise this maximum return?
At this point we have to consider the probability of each outcome. In this example, let us assume the cheapest option of just maintaining the block has a 90% chance of success due to the demographics of the area, leaving a 10% chance that this strategy will fail.
For the renovate and rebuild options, let us assume that each has an equal 70% chance of achieving their respective maximum targets, and a 30% chance of achieving their minimum expectations.
In order to determine the best option for the property owner to take, we now need to map out their decision tree, along with the associated costs, expected returns and probability of achieving these returns. This is shown in the diagram below.
Decision Tree using Expected Monetary Value
From this decision tree, we can establish that the largest total EMV for the three options (after cost deductions) is $27 Million, which is our expected average return between the best and worst case scenarios for renovating. This predicts a slightly better outcome than if we chose to rebuild, and choosing to maintain the block gives us the worst predicted return.
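The figures behind this conclusion can be reproduced with a short calculation. All values are taken from the worked example above, in $ millions:

```python
# Net EMV for each option: sum(probability x return) minus option cost.
# Figures are taken from the worked example above ($ millions).

options = {
    "Maintain": {"cost": 3,  "outcomes": [(0.9, 20), (0.1, 0)]},
    "Renovate": {"cost": 12, "outcomes": [(0.7, 45), (0.3, 25)]},
    "Rebuild":  {"cost": 25, "outcomes": [(0.7, 60), (0.3, 30)]},
}

net = {name: sum(p * v for p, v in o["outcomes"]) - o["cost"]
       for name, o in options.items()}

for name, value in sorted(net.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ${value:.0f}M")
# -> Renovate $27M, Rebuild $26M, Maintain $15M
```

Note how close renovating ($27M) and rebuilding ($26M) are: a small shift in the assumed probabilities or costs would flip the decision, which is exactly where sensitivity analysis earns its keep.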
Two of the most commonly used tools in project schedule planning are the “Project (or Program) Evaluation and Review Technique” (PERT) and the “Critical Path Method” (CPM).
But what are the differences between them?
PERT was developed as a project schedule planning technique in the 1950s for the U.S. Navy Special Projects Office, while CPM was developed at roughly the same time by Morgan R. Walker of DuPont and James E. Kelley of Remington Rand. Both methods are used to identify the minimum time needed to complete a project by considering all inter-dependent project activities that form the longest path, or duration.
There are really only two fundamental differences between PERT and CPM, and these are:
PERT applies an “Activity-on-Arrow” network diagram, whereas CPM applies an “Activity-on-Node” network diagram. Activity-on-Arrow means the network diagram depicts each milestone event as a node, and shows the activity information on the arrows joining each milestone event. Activity-on-Node shows the activity information as a node, and links one activity to the next, rather than linking one milestone to the next. The differences between the two schematic models are shown below.
PERT Activity on Arrow Network Diagram
CPM Activity on Node Network Diagram
The second, and more important, difference is that traditional CPM applies a single duration and cost estimate for each activity, whereas traditional PERT applies a 3-point weighted average duration estimate (optimistic, most likely, and pessimistic) for each activity, and does not consider cost. The PERT weighted average duration is calculated as follows:
Te = (To + 4×Tm + Tp) ÷ 6 where:
Te = Expected Duration
To = Optimistic Duration
Tm = Most Likely Duration
Tp = Pessimistic Duration
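As a quick sketch of the calculation, using hypothetical durations in days:

```python
# PERT three-point weighted average: Te = (To + 4*Tm + Tp) / 6
# Example durations are hypothetical, in days.

def pert_duration(optimistic, most_likely, pessimistic):
    """Expected duration from a three-point estimate."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

print(pert_duration(4, 6, 14))   # -> 7.0
```

The weighting pulls the estimate towards the most likely value while still letting a long pessimistic tail (14 days here) drag the expected duration above the most likely 6 days.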
These days, CPM and PERT have been largely absorbed into a single common technique which applies the preferred CPM “Activity-on-Node” diagrammatic model, and uses the PERT 3-point weighted average duration calculation.
Both PERT and CPM rely upon the analysis of four primary schedule components, being:
A list of all activities required to complete the project
The expected time that each activity will take to complete
The dependencies between each activity
Logical start and end points for each set of activities
Critical Path Analysis:
The critical path in any project is the longest path of inter-dependent activities required to achieve a logical end point. Critical Path Analysis takes into account the earliest and latest times that each activity can start and finish without making the project longer. In doing so, the analysis determines which activities are “critical” to the project schedule, in that all critical activities reside on the longest path from project start to project finish, and thereby define the minimum overall project duration.
Activities that can be delayed, or extended beyond their planned duration, without extending the overall duration of a project are considered to be non-critical activities that have “float” (also known as “slack”). An activity that can be delayed or extended without causing a delay in any subsequent activities is said to have “free float”, and an activity that can be delayed or extended without causing a delay to the overall project is said to have “total float”.
The amount of total float available to any activity is calculated in one of two ways: Either by subtracting the earliest start date from the latest start date, or by subtracting the earliest finish date from the latest finish date of the activity. Both methods will yield the same result, which will be the amount of time that the activity can be delayed without affecting the latest finish date of any subsequent activities.
The amount of free float available to any activity is calculated by subtracting the earliest finish date of the activity from the earliest start date of its nearest direct successor activity. This is the amount of time that the activity can be delayed without affecting the earliest start date of any subsequent activities. When an activity has zero total float, it will also have zero free float.
Critical Path Analysis is also used to calculate the “drag” on a project. In other words, the amount by which a project's duration is extended by each critical path activity. Drag is calculated by comparing critical path activity durations with each amount of total float in all other parallel activities. If a critical path activity has no other activities in parallel, its drag is equal to its duration. If a critical path activity has other activities in parallel, its drag is equal to whichever is less: its duration, or the total float of the parallel activity with the least amount of total float.
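The float calculations described above can be sketched with a forward and backward pass over a small activity-on-node network. The activities, durations, and dependencies below are invented for illustration, and the nodes are listed in topological order:

```python
# Forward/backward pass over a small activity-on-node network.
# Activities, durations, and dependencies are illustrative; listed in topological order.

activities = {           # name: (duration, predecessors)
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
    "E": (1, ["C"]),
    "F": (2, ["D", "E"]),
}

# Forward pass: earliest start/finish
es, ef = {}, {}
for name, (dur, preds) in activities.items():
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

project_end = max(ef.values())

# Backward pass: latest start/finish
ls, lf = {}, {}
for name in reversed(list(activities)):
    succs = [s for s, (_, ps) in activities.items() if name in ps]
    lf[name] = min((ls[s] for s in succs), default=project_end)
    ls[name] = lf[name] - activities[name][0]

# Total float = LS - ES; free float = min(ES of successors) - EF
total_float = {n: ls[n] - es[n] for n in activities}
free_float = {
    n: min((es[s] for s, (_, ps) in activities.items() if n in ps),
           default=project_end) - ef[n]
    for n in activities
}
critical = [n for n in activities if total_float[n] == 0]

print("Project duration:", project_end)   # -> 14
print("Critical path:", critical)         # -> ['A', 'B', 'D', 'F']
print("Total float:", total_float)
print("Free float:", free_float)
```

In this network, activity C has 2 units of total float but zero free float (delaying it eats into E's start), while E's float is entirely free, illustrating the distinction drawn above.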
Schedule Risks:
A schedule is one of the project drivers that is most susceptible to risk. This is because schedules comprise multiple inter-dependent activities, each of which usually contains multiple uncertainties.
Consider, for example, one of the very first activities involved in the construction of a building, which is site excavation and preparation. This is a relatively straightforward process, but it is reliant on a number of factors in order for it to be completed in line with the planned project schedule. These include:
Ensuring the required machinery, materials, utilities and other resources are all available for use on the planned start date, and remain so throughout the activity duration.
Ensuring the required machinery, materials, utilities and other resources remain operational throughout the activity duration.
Ensuring all site permits are in place and valid throughout the activity duration.
Ensuring the machinery, tools and equipment can cope with site conditions.
Ensuring unexpected weather does not affect progress.
Ensuring unexpected labour issues do not affect progress.
Ensuring unexpected health & safety issues do not affect progress.
Ensuring unexpected security issues do not affect progress.
Ensuring unexpected regulatory issues do not affect progress.
Most construction companies are well versed in the management of these types of risks, and will plan for them as a matter of course. The point is merely to highlight the number of risk factors, and related uncertainties, associated with even the most basic of project activities. Add to that the number of different activities required to complete a typical project, along with all their inter-dependencies, and the level of uncertainty in a project schedule can grow exponentially.
One possible solution to maximise schedule robustness is to include a safety buffer in the baseline schedule to absorb any anticipated disruptions. This is called proactive scheduling. However, pure proactive scheduling is not a realistic option. Incorporating sufficient safety in a baseline schedule, which allows for every possible disruption, would undoubtedly lead to an unacceptably long schedule. A second approach, termed reactive scheduling, consists of defining a procedure to react to a range of disruptions that cannot be absorbed by the baseline schedule.
At the heart of schedule risk is the critical path, as this is the longest activity path which defines the minimum project duration and contains the least amount of total float. Any delay to activities on the critical path which have zero total float will delay the overall project schedule. It is therefore crucial to protect the critical path as much as possible, and the most effective way to do this is to ensure that the planned duration of each activity is as accurate and robust as possible.
When it comes to project risk management, it is not only important to understand the definition of risk, but it is equally important to know how best to describe risk. A badly described risk can, at best, result in false assumptions being made about the risk and, at worst, result in the wrong actions being taken to control the risk which turn out to be completely ineffective.
Risk is essentially made up of three components, these being:
Threats or Opportunities
Risk Events
Risk Impacts
In order to put the importance of describing risk accurately into context, I would like to recount a personal event which took place one day, back in 1987, while I was competing in a national hang gliding event.
It was a warm, sunny afternoon in July 1987, with a steady 15-knot north-easterly wind blowing. Around forty fellow hang-glider pilots were gathered at the top of One Tree Hill, all having entered the annual spot-landing classic competition that year. As luck would have it, I was lying in joint second position. The objective of the competition was to make a controlled landing within a 10-metre diameter target, marked on a field about 1 kilometre away and some 2,000 feet below us.
At around 4pm, conditions changed and the wind swung from a steady north-easterly to a variable north-westerly. Most pilots packed up their gliders and headed back down the hill to go and enjoy a cold drink at the local hotel, but a few of us remained behind, assessing conditions. All I needed was a controlled landing in the outer-most circle of the target, and that would be enough to make me the outright winner of the competition. After a brief consultation with my instructor and mentor, I decided to go for it.
The launch ramp was in a fixed position, facing north-east, and couldn’t be moved, so I needed to adjust my take-off run to compensate for the change in wind direction and strength. I did this by putting most of my weight against the left down-tube and angling my run as much as I could into the wind as I charged down the ramp. The fact that I am still here to tell this story today proves that my strategy worked and, even better, I managed to achieve my goal by making a somewhat tenuous landing within the target ring, which was judged to have been “controlled”, but only just!
After landing, a few pilots who had stayed behind to watch, came up to offer their congratulations. However, I remember one of them saying, “Mike, you took quite a risk taking off in those conditions!” I had to agree that it certainly had been a risky decision, but I was also interested to know what his view of the risk was, so I asked him. His response, which was also something that I could not disagree with at the time, was, “Well, there was a big risk that you could have been killed.”
Now, being only twenty years old at the time, and not all that well versed in the nuances of risk management, this statement made perfect sense to me and I left it at that. Some thirty years later, however, with a few more miles under my belt and now spending most of my days reviewing risks and risk descriptions, I’m a bit more judgemental when I hear statements along the lines of, “There is a risk that someone could get killed”. Primarily because the person who has uttered that statement has not actually described the risk at all. They have only described one of the potential impacts of the risk.
And that, finally, gets us to the importance of describing risk accurately. As long as all three components that make up a risk are captured in the description of your risk then, I would say, you have accurately described the risk. That would be to:
Describe the threat (or opportunity) which is the source of the risk,
Describe the event that could result from the identified threat or opportunity,
Describe the consequences (or impacts) of that event.
But how does accurately describing a risk help you manage it? Let’s go back to my risky hang-gliding experience for a minute and have a look at how that risk may be better described and, hence, how we can manage the risk based on its description.
Firstly, what was the threat? For the purposes of this analysis, we need to temporarily set aside all the other threats associated with hang-gliding in general, and focus only on the immediate threat which increased the “effect of uncertainty on my objective”. And that was a change in weather conditions.
Secondly, what was the event? The event was a possible failure to take off correctly.
Finally, what were the potential consequences? Well, these are numerous and could have ranged from simply not making it to the target landing zone, to damaging the glider, to personal injury or even death. Now, in terms of Threat Risk Management, we generally look at the worst possible outcome so, in this case, we would consider that to be death.
Therefore, an accurate description of the risk I took back in July 1987 could be stated as: “Failure to take-off correctly (EVENT) because of adverse weather conditions (THREAT), resulting in potential death (IMPACT)”.
However, when it comes to controlling risks, specific plans and actions need to be implemented which separately manage the probability of risk occurrence and the subsequent impacts of the risk. It is therefore often more practical to describe the risk in terms of the threat (or opportunity) and event only, and then list out the potential consequences separately. This then enables one to home in on specific probability response plans and separate impact response plans. So, taking the hang-gliding risk description one step further, we could revise this to say the risk is: “Failure to take-off correctly (EVENT) because of adverse weather conditions (THREAT)” and the potential impacts of this risk are: “A long walk home, property damage, personal injury or death”.
With that revised description, we are now better placed to identify suitable response plans separately for probability and consequence, which may be tabulated as follows:
PROBABILITY MITIGATIONS:
Align hang-glider into prevailing wind as much as possible.
Lean weight into windward side of the hang-glider.
Make your take-off run as fast as possible.
IMPACT MITIGATIONS:
Identify alternative safe landing zone, closer to take-off.
Wear suitable protective clothing and equipment.
Ensure you have adequate life, medical & property damage insurance.
You will notice that all the probability mitigations taken in this example dealt with mitigating the event only, and not the threat. In most risk situations, we would look to mitigate the threat as well as the event, to reduce the probability of risk occurrence. In this instance, however, I had accepted the threat of adverse weather conditions and was therefore committed to the event, leaving me no option but to mitigate the event only and, of course, the potential impacts!
Project Risk Management is a continuous and collaborative process, which includes the application of both Quantitative and Qualitative Risk Analysis techniques (See our previous article on this subject: "Qualitative vs. Quantitative Risk Analysis: What’s the difference?"). Most projects will include several mandatory Quantitative Risk Analysis studies in their scope; however, managing the day-to-day risks inherent in every project is often overlooked in terms of formal Qualitative Risk Analysis requirements. Managing these types of risks typically requires ongoing collaboration between project team members, and regular risk review workshops to be held. The methods used in Qualitative Risk Analysis can vary significantly, depending on the type of project being run and the risk management resources available to the project. In this article, we consider five of the most useful Qualitative Risk Analysis techniques applied in project management, which are as follows:
Delphi Technique
This is a form of risk brainstorming, but the essential difference between traditional risk brainstorming and the Delphi Technique is that the latter makes use of expert opinion to identify, analyse and evaluate risks on an individual and anonymous basis. Each expert then reviews every other expert's risks, and a risk register is produced through continuous review and consensus between the experts.
SWIFT Analysis
Standing for “Structured What-If Technique”, this is a simplified version of a HAZOP. SWIFT applies a systematic, team-based approach in a workshop environment, where the team investigates how changes from an approved design, or plan, may affect a project through a series of “What if” considerations. This technique is particularly useful in evaluating the viability of Opportunity Risks.
Decision Tree Analysis
Similar to Event Tree Analysis, but without providing a fully quantitative output, Decision Tree Analysis is most often used to help determine the best course of action wherever there is uncertainty in the outcome of possible events or proposed plans. This is done by starting with the initial proposed decision and mapping the different pathways and outcomes as a result of events occurring from the initial decision. Once all pathways and outcomes have been established, and their respective probabilities evaluated, a course of action may be selected based on a combination of the most desirable outcomes, associated events and probability of success.
Bow-tie Analysis
This is one of the most practical techniques available in helping identify risk mitigations. Bow-tie Analysis starts by looking at a risk event and then projects this in two directions. To the left, all the potential causes of the event are listed and, to the right, all the potential consequences of the event are listed. It is then possible to identify and apply mitigations (or barriers) to each of the causes and consequences separately, effectively mitigating both the probability of risk occurrence and the subsequent impacts, should the risk still occur.
Probability/Consequence Matrix
This has become the standard method of establishing risk severity in Qualitative Risk Analysis. Risk matrices vary in size, but they all essentially do the same thing: provide a practical means of ranking the overall severity of a risk by multiplying the likelihood of risk occurrence by the impact of the risk, should it occur. By ranking risk probability against risk consequence, one can determine not only the overall severity of the risk, but also the main driver of that severity, be it probability or consequence. This information is then useful in helping identify suitable mitigations to manage the risk, based on its prominent drivers.