ODDSCO's Project Risk Assessment Method Tutorial


An introductory tutorial about Project Risk Assessment from the practitioner's perspective.


In essence, this tutorial is a paper entitled EVALUATING PROJECT RISK WITH LINGUISTIC VARIABLES that was peer reviewed, presented, and published in the 1994 National Council on Systems Engineering (NCOSE) Symposium Proceedings. (In 1995, NCOSE became INCOSE, for International.)

An Integrated Product/Process Team (IPPT) Supporting Risk Assessment Process

Abstract. A simple, reliable method for assessing the project technical risks, schedule risks, and cost risks, usable by virtually anyone with moderate arithmetic skill, is set forth. It provides an aggregate risk figure of merit range for system risk by tailoring the project work breakdown structure hierarchy and incorporating relative importance weighting in the risk model. It extends the linguistic variable quantification based on one fuzzy logic hedge operator with a utility graph. It takes an individual's or corporate psychological attitude toward risk aspects into account. It can assess uncertainty as well as event likelihood and severity. Uncertainty is retained, to provide a qualitative expression for the overall risk range as well as the expected value. NOTE: The technical risks aspect described herein is closely related to patient and user safety centered Risk Analysis required by the FDA Quality System Regulation Design Controls for medical devices development.



EVALUATING PROJECT RISK WITH LINGUISTIC VARIABLES


Introduction

In the general business context, risk is the potential for occurrence of an event with defined detrimental effect on an enterprise. Decision problems of interest to businesses of any size involve substantial uncertainty regarding future accomplishment of project goals. Project tasks are defined to fulfill technical objectives within planned time and budget limitations, so their successful accomplishment involves managing the associated risks. The risk analysis becomes complex because the technical, schedule, and cost risks must be considered as interrelated aspects in one decision model. The priority ranking for directed management attention traditionally is based on expected value for the risks, which is the combined likelihood of occurrence and severity of consequences for each potential event.

Estimating and quantifying risk combines assessment of the severity of a risk event with its likelihood of occurrence, which implies a probability distribution derivable from discrete data. But, risk necessarily must be evaluated from subjective, qualitative, or "fuzzy" information, so analysis results remain correspondingly uncertain. Worse, that uncertainty is not statistical. It is not related to accuracy limits, normal distributions, or to artifacts of large-sample historical data. Instead, it relates to human judgments regarding future events, so probability theory is insufficient in practice for risk assessment.

The Department of Defense (DoD) considers the project risk aspects to be acquisition risk elements. Two further elements of decision risk for the major DoD programs are operational risk and support risk. Events considered to be risks to DoD programs are called threats. Threats must be placed under a formal Risk Management plan.

Complex system risks relate to personnel effectiveness or to sufficient budgets for supporting project objectives. Both time and funding often are arbitrarily reduced into fantasies when top managers perceive initial estimates as excessive. A typical risk is failing to adopt the system engineering viewpoint; inadequately identifying potential problems so decisions have little chance of success. That is, by not recognizing the need for active risk management, the default situation holds the very large risk of not knowing the risks. Project decisions become wishful thinking.

The significance of each project risk aspect is different to each involved person. Perceptions will override the realities. Large risks to engineers (who actually make products work) may be considered minor to company executives. At the program level, risk may become political, affecting initial or ongoing funding approval.

Project risk assessment involves evaluating a system's design maturity, complexity, and dependence on existing systems, as well as consequences of proposed changes. Risk assessment uses natural language. Any technique for quantification must translate qualified or "hedged" risk assessment expressions into either numerical values or figures of merit for further use in risk models.

The realm of risk assessment is full of uncontrollable factors, best guesses, and semi-intuitive estimates by persons with limited or no training in relevant areas. Therefore, we are dependent on both expert and inexpert forecasting to avoid project failure. Because it involves determining both the likelihood of an event and impact of the consequences, risk assessment brings up personal attitudes toward potential losses. Such psychological biases also should be taken into account [Saaty].

Uncertainty expands when we consider the assumptions leading to estimates as well as to assigned probability for an event occurrence. Judgments improve with added knowledge, but the conditions of actual use and other such product information required for risk assessment often are scarce or unavailable.

Even when data are plentiful, their numeric expression implies no uncertainty or, at best, precision unsupported by the primary sources. That implication is false and it never gets better [Schmucker]. High mathematical rigor applied thereafter may keep uncertainty of final results from becoming worse, but does nothing to improve it.

Another difficulty is in actually reducing risk. We must increase one or more of information, control, or time to do so. Adding people to a project in schedule trouble helps only in a few circumstances. The reorganization and familiarization of newcomers usually adds delay.

Project-related discussions mentioning Risk Management do so without explaining how you may actually quantify the risk assessment. The reason is not a big mystery: Risk quantification is essentially an unsolved problem, in the theoretical sense [Schmucker].

A practical approach to quantifying risk is necessary due to the overwhelming impact some system risks can have upon human lives and livelihoods.

Risk assessment should continuously reflect imprecision of input. This helps avoid presumptions based on ignorance due to missing or inadequate knowledge. The necessity for accepting the reality of imprecision is where Fuzzy Set Theory (to determine the plausibility of an item's membership in a set) and its associated fuzzy logic apply [Schmucker].



Project Risk Aspects

Some definitions:

Technical Risk is a measure of the likelihood that a product's performance requirements will be fulfilled within both schedule and budget limitations. Potential loss due to failure to fulfill performance requirements can be to society as well as to a business. It can be from increased hazard to human life and limb, increased environmental damage, inability to accomplish a defined mission, and so forth.

Schedule Risk is a measure of the likelihood that a supplier will fulfill contracted delivery schedules with a technically satisfactory product while not exceeding cost bounds.

Cost Risk is a measure of the likelihood that the final cost for an on-time delivered and technically satisfactory system or its subsidiary part(s) will not exceed the expenditure budgeted for that item.

Abatement is reduction of the likelihood of occurrence for a defined risk event, while Mitigation is reduction of the severity of consequences that accompany occurrence of the defined risk event. Insurance is an example of mitigation.

On most projects, risk can be reduced to "avoidance," or abated, in at most two of the three aspects at once.



Risk Analysis

Risk analyses typically will occupy three overlapping tasks. First, identifying everything that possibly can go wrong and how each defined event impacts the objective. Second, estimating the likelihood of event occurrence and the level of uncertainty for the estimates. (Traditionally, each event is assigned a probability for expressed likelihood.) Third, ranking the high likelihood risks in accordance with severity of their potential damage to project goals and determining means for their avoidance or mitigation, if any.

Some analyses emphasize only one risk aspect, if that is the overriding project political concern. Project health is truly good, however, only when controlling technical milestones are being met on schedule while costs remain within bounds.

Probabilities are not highly intuitive even for many persons with mathematical training. Everyone knows to be very concerned about high probability risks of high severity with low uncertainty. Many people, however, will underestimate the potential danger in a low-probability risk with significant consequences and high uncertainty. The low assessed probability convinces them that no real problem exists [Lewis]. Most people have extreme difficulty in accurately evaluating low-probability but high-consequence events (such as nuclear reactor meltdown), so they ask their lawmaking representatives to provide the impossible: zero risk. Through the Congress, they allow extra-legal agencies to regulate clearly beneficial products into disuse [Lewis].

Balancing the risks against potential gain, including developing strategies that will work with most expected outcomes and minimize damage from the rest, is the essence of decision making. If the probability of success for part of a project is low, you may apply additional resources of people and budget. If risk appears very low, you may reduce applied resources to balance risks with another task or project.

To recap, we want to quantify risk in proportion to its expected value for each identified event. That is, to assess risk by combining likelihood of event occurrence (probability) with a value for its severity of effect.

We also want to quantify uncertainty of estimates and to weight risk elements so they can be added to obtain a system level evaluation. Risk is dynamic for each aspect throughout a system life cycle. Information, time, and control continuously differ from previous assessments for many project elements. We want to easily change estimates and quickly obtain the results.



Risk Analysis Model Design

One flaw in traditional risk analyses is incomplete representation of identified risk elements. Estimates are made for the probability of occurrence and severity of every identified event, but the relative importance of each to the project usually is not made explicit until it relates directly to tasks on the Critical Path. All three of the risk aspects should be included as weighted factors in most system development projects.

Weighting is a process with traps for the unwary when the number of parameters exceeds three. Algebra for the simultaneous equations (to solve the ratios from estimates of relative importance) becomes tedious and subjectivity remains. A powerful method for converting paired comparisons into decimal weights was developed years ago [Saaty]. Its strengths are feedback of input inconsistency and algorithmic simplicity. It enables implementation with personal computer spreadsheets such as Lotus Corporation's 1-2-3® or Microsoft's Excel®.
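As a concrete illustration of the paired-comparison method cited above, the sketch below approximates Saaty's principal-eigenvector weights with row geometric means. The 3×3 judgment matrix and its entries are hypothetical, not taken from the paper:

```python
# A minimal sketch of Saaty-style paired-comparison weighting,
# using the row geometric-mean approximation to the principal
# eigenvector. Entry [i][j] states how much more important
# aspect i is judged to be than aspect j (reciprocal below the
# diagonal). The judgments here are hypothetical examples.

import math

aspects = ["technical", "schedule", "cost"]

pairwise = [
    [1.0, 3.0, 5.0],   # technical vs. (technical, schedule, cost)
    [1/3, 1.0, 2.0],   # schedule
    [1/5, 1/2, 1.0],   # cost
]

def saaty_weights(matrix):
    """Normalize row geometric means into decimal weights."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = saaty_weights(pairwise)
for name, w in zip(aspects, weights):
    print(f"{name}: {w:.3f}")
```

The same arithmetic fits naturally into a spreadsheet: one column of row products, one of their nth roots, and one normalized to sum to 1.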

Risk assessment attempts to quantify confidence in the estimates of attainable performance, delivery schedule, and cost for each system design concept. But, seldom is anything more solid than analyses or simulation model output available to measure. Aggregate system technical risk, schedule risk, and cost risk must be estimated to facilitate informed risk management.

When critical factors for project success are uncertain, an additive expected-value model may be used [Saaty]. This gives theoretical foundation to the WBS based decision modeling approach. A simple hierarchical model makes overall system risk a first level or parent parameter with three risk aspects, and their relative importance weighting, as children. This eliminates the need for a separate hierarchy for the risk analysis and risk may be treated as a system performance factor.

The risk model structure best suited for hardware is an assembly hierarchy, related to things which may be made or bought, from top assembly down to a selected part. The project work breakdown structure (WBS) is an obvious arrangement for military acquisitions, being subdivided in accordance with likely major subsystems.

A key criterion here is that an assembly hierarchy for risk modeling be applicable to all of the potential design concept alternatives.

The primary reason for using an assembly hierarchy for the hardware Risk Analysis model is that often one can closely estimate the potential technical performance for an item along with its cost budget and build schedule. Generally, physical systems are assemblies which are themselves composed of subassemblies, which are made up of even smaller subassemblies, or parts, and so on, each of which has a different technical, schedule and cost risk assessment.

Extending problem scope to encompass technical, cost, and schedule risks entails changing the acquisition concept for a system. Using an automobile as a familiar item, think of each candidate system as delivered either to you or to some assembly facility in piecemeal fashion as a "kit". This is how today's "foreign" and domestic manufacturers build automobiles, with an international cast of subassemblies. To realistically complicate the scenario, imagine that some subassemblies employ untried concepts or extend an unproven technology.

That scenario implies confronting complex arrays of decision variables. Moreover, attribute scores are made up of varying estimates for life cycle cost, delivery schedule, and technical performance for each candidate, which is the case in the real world.

Assume that several alternative designs are nearly equal in reported performance estimates but differ in their probability of meeting defined requirements for system technical performance, scheduled delivery or provided cost constraints. Then, the risk analysis will compare the available alternatives solely on the basis of their differences in assessed project-related risks.

Risk assessments should be made on a single specific, identifiable version of an item. Failure to identify a system design baseline description for the item is asking for serious errors in the analysis.



Linguistic Variables for Risk Assessment

Fuzzy set theory provides a means to quantify linguistic variables, which are the natural language words and expressions or qualitative terms used for evaluation of the elements in risk analyses. The set of qualitative risk assessment terms should have generally understood meanings within the field or industry requiring the analyses. Within risk assessment, primary terms are Low, Medium, and High. Secondary terms are modifiers for the primary terms.

[Figure: a graph with one centered normal curve encompassing the range, illustrating the concept of Medium as central to risk assessments.]

Figure 1

To illustrate "fuzziness" of the primary terms used in risk assessment, each primary term can be represented by the Gaussian or Normal distribution. As shown in Figure 1, the concept for the word Medium does not mean just the imaginary line dividing a range in half, but the area enclosed by a bell-shaped curve centered on that line.

The word Low is generally understood to be an area at the left (lesser, by convention) end of the range. See Figure 2. The word High is represented by the same area at the right end of the range.

[Figure: a graph with an upside-down normal curve showing the concepts of Low and High with an excluded middle (Medium).]

Figure 2

Note that Low and High are drawn with convex closure in Figure 2 to convey a sense of including the extremes. Observe also that the sloping side of Low is equivalent to shifting the normal curve from central Medium in a negative (left) direction to the position of the curve in Figure 2. High is a positive (rightward) shift to the mirrored position. That position shift illustrates the concept for the more intuitive alternate operation for the hedge very from "fuzzy logic" that applies a shift constant [Schmucker].

[Figure: a graph with a solid-line normal curve and dashed-line curves on either side showing negative and positive shifts of the original curve.]

Figure 3

Figure 3 displays the shift constant concept with two curves displaced the same amount in opposite directions. Extending that concept changes the shift constant to a variable whose shift is proportional to the value defined for each of the primary terms and their qualifiers or hedge terms.

Risk now can be assessed with an intuitively believable range from "Extremely Low" to "Extremely High" with a shift variable whose value is in accordance with the qualifying terms as linguistic variables. Evaluations of risk do not depend on artificially precise numbers for estimates of probability within the range from .001 to .999 (±3σ from the mean).

Evaluative risk assessment statements are obtained by polling people, expert and otherwise, for their opinions. The resulting shift quantity can be given some precision by predefined adjustment for each qualifying hedge to the primary linguistic variables Low, Medium and High for the estimates of likelihood and severity of effect. Such hedges are "fairly," "higher (than)," "relatively," "lower (than)," "not," "pretty," "reasonably," "really," "slightly," "somewhat," "sort of," the previously stated "very," and "extremely." Each of these is a specific lower or higher shift from Medium on the utility graph. Several hedges also can be applied to the listed hedges, to incrementally adjust the shift constant value for that primary hedge, to fill the range between it and the next hedge. Other qualifying terms may be defined, of course, but a reasonably limited set of defined hedges keeps the quantification process from seeming overly contrived [Schmucker].
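The hedge-to-shift translation described above can be sketched as a lookup table. All of the numeric offsets below are hypothetical illustrations on a 0 to 100 scale; an actual table would be calibrated and concurred with by the project's risk evaluators:

```python
# A hypothetical hedge-shift table for placing hedged linguistic
# terms on a 0-100 risk scale centered on Medium = 50. The offset
# values are illustrative assumptions, not a calibrated standard.

PRIMARY = {"low": 25.0, "medium": 50.0, "high": 75.0}

HEDGE_SHIFT = {
    "extremely": 25.0,   # pushes Low to 0 and High to 100
    "very": 15.0,
    "really": 12.0,
    "pretty": 8.0,
    "somewhat": -5.0,    # negative shifts pull back toward Medium
    "slightly": -10.0,
}

def risk_position(term, hedge=None):
    """Place a hedged linguistic term on the 0-100 risk scale.
    Hedges move Low down-scale and High up-scale; Medium is fixed."""
    base = PRIMARY[term]
    if hedge is None or term == "medium":
        return base
    shift = HEDGE_SHIFT[hedge]
    return base - shift if term == "low" else base + shift

print(risk_position("high", "extremely"))  # 100.0
print(risk_position("low", "very"))        # 10.0
```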

Risk evaluators need to know the effect that terms in the hedge list will have as a shift or offset from the primary terms and as "hedges to the hedges" in the utility graph. Risk assessment expressions often used by frequently polled persons may need calibration for their placement on the graph with respect to the standard hedges. A standard list should be concurred with by primary risk evaluators for words, rules, and shift effect.



Risk Aspect Quantification With Utility Graphs

It long has been known that probabilities combined with utilities provide the utile (expected utility) for that combination. We cannot assign the true probabilities for risky future events, so we are stuck with estimations of possibility or plausibility along with their levels of confidence. Propagation of probability quickly becomes mathematically untenable when carried too far. In support of the described method, studies have shown that an approach with poor scaling still yields better evaluations than probability estimations [Sage/Palmer].

A powerful application for utility graphs uses the defined linguistic variables from fuzzy logic to assess project element risk. Fuzzy set theory validates use of linguistic variables for such scaling to assess risk, so it is used here only to resolve the imprecision of natural language expressions. Evaluator responses to a risk assessment questionnaire are translated with an appropriate risk aspect utility graph into their risk element quantitative scores. Risk can be assigned directly, or divided into the separate assessments of event likelihood and effect severity.

[Figure: a utility graph with a neutral straight line overlaying normal curves drawn for each "hedge" term surrounding the Low, Medium, and High terms.]

Figure 4

Utility scoring is inversely proportional to measured or estimated risk, to represent an evaluation of "goodness," so higher risk properly earns a lower score. See Figure 4 for a depiction of varied range shifts from the different defined primary hedge terms. A balanced set of hedge terms for Low and High on each side of Medium makes the ordinal scale input points on the utility graph. The straight line provides the translation from linguistic variables to risk utility score range. The utility line is from the 100 score "Extremely Low" to zero score "Extremely High" risk levels.

Primary risk assessment and hedge terms are tall normal distributions about their central points with 3σ at each adjacent term's central point on the graph. The hedge "extremely" defines a maximum shift of Low and High primary terms to where their new central values on the graph are zero risk and certain risk, respectively. The hedge "very" provides less shift in the same direction such that the farthest 3σ value of the bell curve is just within the bounds of the graph. The shifts occurring from the other defined hedges are similarly imposed to accommodate ordinary industrial discourse.

The assumption: Slight non-linearities in risk preference do not radically alter the relationship of linguistic variables to expected utility.

Evaluator expressions with ranges should be narrow, or the system level assessment must be expressed in a very broad range. All-worst and all-best risk scores from the element input ranges are summed to obtain a system score range. Central values determine the "most likely" system score. The output range for a Medium risk assessment, shown in Figure 4 with dashed lines to either side of 50, maintains the concept of fuzziness or uncertainty of quantification.

The utility graph method is used in an identical manner to score uncertainty for each element in the risk model. The graph abscissa is the ±adjustment (e.g., 0 to 10) to the central score for establishing the worst/best element values. Severity and likelihood scores for each element are combined with weighted addition. Unless there is a compelling argument for a different bias, recommended weighting is 0.5 for each. The weighted utility score provides the essential figure of merit representing the risk assessed for each model element.
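The weighted severity/likelihood combination and the uncertainty-driven worst/best range can be sketched as follows; the function name and the example numbers are illustrative assumptions:

```python
# A sketch of the element figure-of-merit computation described
# above: likelihood and severity utility scores combined by
# weighted addition (0.5 each unless a bias is justified), with
# an uncertainty adjustment widening the worst/best range.

def element_score(likelihood_util, severity_util,
                  uncertainty=0.0, w_likelihood=0.5):
    """Return (worst, central, best) utility scores for one element.
    Inputs are 0-100 utility scores; uncertainty is the +/- range
    adjustment (e.g., 0 to 10) read from the uncertainty graph."""
    w_severity = 1.0 - w_likelihood
    central = w_likelihood * likelihood_util + w_severity * severity_util
    worst = max(0.0, central - uncertainty)
    best = min(100.0, central + uncertainty)
    return worst, central, best

# A Medium-likelihood (50) element with a somewhat-higher-severity
# score (35) and an uncertainty adjustment of 8:
print(element_score(50.0, 35.0, uncertainty=8.0))  # (34.5, 42.5, 50.5)
```

Higher uncertainty widens the range, implementing the idea that ignorance should penalize an alternative with a broader spread of possible scores.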

Risk aspect utility curves are reused in a Risk Model to provide consistency of evaluation. A unique risk aspect utility curve is required only if decision-maker attitude changes radically with the specific element.



Incorporating Psychological Bias

Risk evaluation for business decision-making has addressed Producer's and Consumer's risk, which is not relevant to this paper, and measuring psychological propensity toward, indifference to, or aversion to taking of risks (as in gambling). The differing behavior of individuals with regard to each aspect of risk for the system at issue has a significant relationship to project risk assessment.

When the potential effect of a change in risk level is large, the curve slope should be steep. Therefore, shape of risk utility curves may incorporate the personal or organizational psychological tendency to be risk-seeking or risk avoiding for each aspect. The specific behavior is subject/situation dependent, but a general description is that a risk-seeker acts optimistically, without the need for as much security (information and control) as does a more pessimistic, risk-averse person in a similar set of circumstances.

[Figure: a utility graph with two curves translating linguistic variables to scores, one shaped above neutral for a risk-seeking bias and the other for a risk-averse bias.]

Figure 5

A risk-neutral utility graph has the utility curve drawn as a straight line from 100 scoring at zero risk to zero at certainty, as shown in Figure 4. See Figure 5 for an illustration of example non-linear utility curves which may be used when the psychological bias for risk-seeker or risk-averse behavior must be modeled.

The utility curve shows a risk-seeker's behavior with a convex shape, by falling slower (than for risk-neutral) from Extremely Low to High and then quickly from High to Extremely High. The scoring for Medium risk could be 75, 80 or even higher. The other curve shows risk-averse behavior with a concave shape, by falling most rapidly from Extremely Low to Low and slowly from Low to Extremely High. The scoring for Medium risk could be 25, 20 or lower.

Foster early acceptance of final analysis conclusions (presell) by asking customer(s) to assist tailoring risk aspect curves to their organizational preferences and to sign approval lines to indicate their concurrence.



Establishing Element Weighting

Hardware elements in a Risk Model have a different level of technical, cost and schedule risk for each assembly. Moreover, each hardware element affects multiple risk model primitives. There will not be one-to-one correspondence of each element with some system performance parameter. This suggests weighting each of the risk model primitives in proportion to importance of the corresponding system hardware elements affecting the aspect of concern. The list of performance attributes affected by the design of each subassembly can be very large, if comprehensive.

Intuitively, basing weighting for a subassembly item on its contribution to overall system performance is better than considering only its relative importance to the parent assembly. Use the system level perspective for all such determinations. Failure modes, effects and criticality analysis (FMECA) is one system engineering approach to determining the risk of system designs with regard to mission fulfillment.



Summing Weighted Scores

The most difficult activity is determining an appropriate level of risk to assign to each risk aspect for each alternative subassembly design. Placement of subassembly element risk in accordance with supplied risk definitions is not difficult, except that the assessment information is often uncertain. Without knowing which of the many system issues will become critical, risk quantification definitions necessarily encompass those of historic importance to similar systems. For higher uncertainty risk assessments, the Uncertainty graph assigns a larger risk level range. This approach implements the concept that ignorance should penalize an alternative with a broader range of element and system level utility scoring.

Acquiring item risk data from two or more independent measurement processes will reduce the uncertainty in its accuracy. To use multiple estimates in a risk model, the results from the different measurement methods should be weighted in accordance with their relative accuracy for assessment of risk.

The weighted system risk score earned for a risk aspect by each alternative is added to the other system risk scores. A summation of worst, central, and best scores provides a full evaluation of system level risk for each aspect. Sensitivity analysis detects critical elements and evaluates the impact of changes. The highest system utility score with the least range (least effect from uncertainty) marks the least risky alternative.
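The roll-up of weighted worst, central, and best scores might be sketched as below; element names, weights, and scores are hypothetical:

```python
# A sketch of the system-level roll-up: each element contributes
# its (worst, central, best) utility scores, scaled by its weight,
# and the weighted sums give the system score range. The element
# names and numbers are hypothetical; weights sum to 1.

elements = [
    # (name, weight, worst, central, best)
    ("power supply", 0.50, 40.0, 55.0, 70.0),
    ("controller",   0.30, 60.0, 70.0, 80.0),
    ("chassis",      0.20, 80.0, 85.0, 90.0),
]

def system_scores(elems):
    """Weighted sums of worst/central/best element utility scores."""
    worst = sum(w * lo for _, w, lo, _, _ in elems)
    central = sum(w * mid for _, w, _, mid, _ in elems)
    best = sum(w * hi for _, w, _, _, hi in elems)
    return worst, central, best

print(system_scores(elements))
```

Repeating the computation per risk aspect, per alternative, gives the comparison basis: the alternative with the highest central score and the narrowest worst-to-best spread is the least risky.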



Risk Aspects Interrelationships

A key concept is that "on the shelf" items (in sufficient stock for all your projected needs) are measurable, so project-related risk is low for that element. Items which have been built, but are "out of stock" for the required quantities, may have low technical risk but plenty of schedule risk and/or cost risk.

Design changes to reduce technical risk affect schedule and cost. The first items produced to a tight schedule may be deficient in technical performance. Requiring "state of the art" (or worse, an advance in the art) in system technical performance can seriously impact both cost and delivery schedule. This has led to Rapid Development, an incremental approach where low risk project objectives are met first. Extensibility for the next project increment is designed in, so injection of the next level of functionality or performance is eased by experience and knowledge gained during the previous increment. Overall program risk then can be lowered.



Finding Candidate Technical Risk

Technical risk includes the risk due to nature, where violation of fundamental physical limits may be required to succeed, and risk due to technology for implementing the requirements, where a large advance in the state of the art is required for success. Typical accuracy of technical performance estimates is relatively high, at 70 percent.

High technical risk comes from such system characteristics as:
Customer confuses policy, mission, and engineering goals
Extreme system size or complexity
System requires socio-political policy goals
Verification has less project emphasis than design
Design team inexperienced with manufacturing

Medium technical risk is assigned when:
Market is new but similar to team experience
System complexity is moderate for the technology
Project Engineer is highest level decision-maker
Practicing Engineer(s) manage effort and make project decisions,
along with design tasks

Low technical risk criteria include:
Organization has strong Concurrent Engineering, System Engineering,
or CIP/TQM practices
Low system complexity for existing technology
System uses only proven components and materials
Design is Pre-Planned Product Improvement driven
Low Risk alternates don't reduce performance

Risk assessment lists should contain an equal number of equivalent importance definitions for each risk level, tailored to the industry and to importance to the system within planned environment. The identification and evaluation of technical risk is crucial to successful Preliminary Design Review, an important decision point in complex military system design and development projects.



Finding Candidate Schedule Risk

Assessing schedule risk is quite similar to determining candidate technical risk, with a caveat that probability of accurate schedule estimates usually is only around 15 percent. (Schedule reports are always timely--only hardware and/or software gets behind schedule for delivery.)

Investigating supplier status normally is essential. Are required manufacturing facilities, equipment, processes, tools, components, supplies, services, skills, and design information still in place, unchanged? Is the company financially solvent? Is it over-committed to another order? Many issues can bear on the timely fulfillment of critical component orders.

The time required to develop a complex system has not changed much in several decades, and is least amenable to improvement, but that is where the effort to reduce an estimated schedule will be applied by management. (Most workers will go along, to avoid being labelled as something other than "success oriented," although the failure to deliver on time can bring about more severe penalties than a bad reputation.) The same assessment procedure applies as for technical risk, but uses a schedule-specific set of risk level definitions.



Finding Candidate Cost Risk

Cost data is supposed to be the least subjective, because it usually is most available. Nevertheless, the only project area with an even lower likelihood of accuracy than scheduling is cost estimating, at 10 percent. Appropriate cost risk level definitions are applied similarly to the procedure for technical risk assessment.

Return to Table of Contents


Getting Risk Assessment Data

The criteria lists for determining high, medium, and low risk may be put into a programmed checklist for a structured interview of evaluators such as designers and other experts. The checklist serves as a memory jogger as well as a means to maximize information extraction from that normally reticent data source, engineers. Risk evaluators should have a broad perspective of the total program, similar to that needed by project team system engineers. One or more of the risk aspect utility curves may need tailoring for a different psychological bias when the program or project decision makers who approved the original set are replaced.
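The programmed checklist idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual checklist: the questions and the risk levels tied to each answer are invented for the example.

```python
# Hypothetical structured-interview checklist. Each entry pairs a question
# with the candidate risk level implied by a "yes" or a "no" answer.
CHECKLIST = [
    ("Is the technology proven in the planned environment?", "low", "high"),
    ("Has the supplier delivered similar items on time?", "low", "medium"),
]

def interview(answers):
    """Map an evaluator's yes/no answers to candidate risk levels.

    answers: list of booleans, one per checklist question, in order.
    """
    return [yes_level if answered_yes else no_level
            for (_question, yes_level, no_level), answered_yes
            in zip(CHECKLIST, answers)]

print(interview([True, False]))  # ['low', 'medium']
```

In practice each question would carry the full high/medium/low criteria definitions from the tailored risk assessment lists, so the interviewer records a level rather than a bare yes/no.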

Return to Table of Contents


System Level Risk Assessment

Describing the net score result in natural language terms is relatively easy with the utility graph approach. A risk-neutral utility graph takes the system risk utility score produced by the other risk aspect graphs and locates a point on its input scale. The former ordinal input scale of linguistic variables from Figure 4 becomes the new output in hedges, or qualitative natural language terms. See Figure 6 for the inverse conversion graph.

A Reversing Utility Graph that converts a risk level score into natural language ranging from Extremely Low through Medium to Extremely High.

Figure 6

If the output does not fall near a listed point on the scale, find the hedge from the expanded list of qualifiers that best describes the risk level nearest the output point, and use that qualifier with its primary term in the qualitative assessment language. A risk-neutral graph is specified here so that no risk-seeking or risk-averse bias is added to, or removed from, the bias applied during the original risk element assessment scoring.
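The nearest-point lookup on the reversing graph can be sketched as a simple table search. The anchor scores below are illustrative placeholders on a 0-to-1 scale, not the values from Figure 6:

```python
# Minimal sketch of the inverse (score-to-language) conversion.
# Anchor positions are assumed for illustration; a real application would
# take them from the project's tailored reversing utility graph.
HEDGE_SCALE = [
    (0.00, "extremely low"),
    (0.20, "very low"),
    (0.35, "low"),
    (0.50, "medium"),
    (0.65, "high"),
    (0.80, "very high"),
    (1.00, "extremely high"),
]

def score_to_hedge(score):
    """Return the linguistic term whose anchor lies nearest the score."""
    return min(HEDGE_SCALE, key=lambda pair: abs(pair[0] - score))[1]

print(score_to_hedge(0.42))  # nearest anchor is 0.35 -> 'low'
```

Reporting the qualitative range then amounts to converting the low, expected, and high scores separately, so the uncertainty survives into the natural language output.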

Return to Table of Contents


Summary

The method described for assessing project risk is simple, reliable, and practical for anyone with moderate arithmetic skill, and is thereby an advance in the art. It provides an aggregate risk figure of merit for each project variant by adapting a Work Breakdown Structure to risk modeling. The utility graph approach to quantifying risk levels applies linguistic variables to assign a numeric value to each risk element. If desired, it simultaneously takes individual or corporate psychological attitude toward the risk aspects (risk-seeking through risk-neutral to risk-averse) into account.

The outlined approach retains the System Engineering perspective for determining the relative importance of system risk elements. The risk modeling and linguistic variable approach to individual risk element assessments provides higher confidence in accuracy and objectivity than is attainable for traditional risk evaluations.

One can seamlessly integrate technical risk, schedule risk, and cost risk aspects into project decision analyses in an auditable fashion. Intermediate process judgments are clearly revealed by the lower level decision model weighted utility scores.

Because the system risk assessment score is based on weighted summation, it can be converted to a representative qualitative expression with a neutral-bias inverting graph. The risk utility score range input, a figure of merit, translates into the linguistic variable or qualitative output range.
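The weighted-summation roll-up itself reduces to a few lines. The weights and element scores below are invented for illustration; in the method they would come from the tailored Work Breakdown Structure hierarchy and the individual risk element assessments:

```python
# Illustrative weighted summation over risk aspects of a small decision
# model. Weights and utility scores are assumed example values only.
elements = {
    "technical": (0.5, 0.7),   # (relative importance weight, utility score)
    "schedule":  (0.3, 0.4),
    "cost":      (0.2, 0.5),
}

# Weights are normalized to sum to 1, so the result stays on the 0-1 scale.
system_score = sum(weight * score for weight, score in elements.values())
print(round(system_score, 2))  # 0.5*0.7 + 0.3*0.4 + 0.2*0.5 = 0.57
```

The same summation applies recursively at each level of the hierarchy, which is what keeps the intermediate judgments visible and the overall result auditable.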

Return to Table of Contents


References:

Lewis, H. W., Technological Risk, W. W. Norton, New York, 1990.
Return to citing paragraph.

Saaty, Thomas L., The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
Return to first citing paragraph.
Return to second citing paragraph.
Return to third citing paragraph.

Sage, Andrew P. & Palmer, James D., Software Systems Engineering, John Wiley & Sons, New York, 1990.
Return to citing paragraph.

Schmucker, Kurt J., Fuzzy Sets, Natural Language Computations, and Risk Analysis, Computer Science Press, Rockville, Maryland, 1984.
Return to first citing paragraph.
Return to second citing paragraph.
Return to third citing paragraph.
Return to fourth citing paragraph.
Return to fifth citing paragraph.

If you have any questions on this tutorial subject, please contact the author listed below. You will receive a response, and a FAQ section may result.

Return to Table of Contents


This tutorial is presented by:

The ODDSCO Co. Logo (A stylized duck).

Optants Documented Decision Support Co.
(ODDSCO)
297 Casitas Bulevar
Los Gatos, CA 95032-1119
(408) 379-6448 FAX: (Same, by arrangement)



www.optants.com


E-mail:
Tutorial Author: jonesjh@optants.com
Other subjects: cobsult@optants.com