ODDSCO's Tutorial for a Simple, Robust Decision Making Process

An introductory tutorial about a robust but simple to use Integrated Product Team (IPT) supporting Decision Process, from the practitioner's perspective.

In essence, this tutorial is the Author's paper entitled Developing a Single Performance Indicator From Hierarchical Metrics that was peer reviewed and published in the 1995 National Council on Systems Engineering (NCOSE) Symposium Proceedings. (In that year, NCOSE became INCOSE, for International.)

An Integrated Product Team (IPT) Supporting Decision Process

Abstract. A contemporary System Engineering task is development of product/process performance metrics. Executive management desires a single performance indicator that combines subsidiary metrics into a meaningful summary representation of organization health with respect to its mission. A purely mathematical approach can produce such an indicator, but when constituent metrics also contain subsidiary metrics, a multiple stage method is required. Because not all metric result customers are fully competent to audit the result, this paper sets forth a generally understandable method that "presells" the output to all involved parties through their approval of each model base before alternative performance data are entered. The eminent decision support capability inherent in this method suits it for complex tradeoff analyses and Capability Maturity Modeling.

At the end of this tutorial is an opportunity to pose question(s) on this subject.





When planning for the verification of product requirements, system engineers become familiar with selecting direct quantitative measures. Often, two or more of these combine to form a metric (a figure of merit that indicates worthiness). For example, the automobile metric "range" involves the subsidiary metrics fuel tank capacity and miles per gallon. MPG is based on average fuel flow at a defined speed with atmospheric and other conditions held constant. When the impinging factors cannot be held constant, many correction-factor measurements become necessary. Seemingly simple metrics can be extremely complex when examined.

Increasing emphasis on continuous improvement of process, as well as product, requires ever increasing employment of qualitative measures as the constituent elements of their definable metrics. But mixing subjective with objective measures (an apples-and-oranges comparison, so to speak) requires their conversion into a common currency (fruit?). That is why cost/benefit analyses convert the benefits into cost-like numerics. Non-numeric values must be quantified for combining with numeric values to establish viable process metrics. Subsidiary qualitative metrics also must become numeric quantities, for combining into a higher level metric.

Return to Table of Contents

A Single Performance Indicator

One paper [Delozier and Snyder] claims that a metric for total organization performance must proportionally include independent factors of Quality (Q), Schedule (S), and Cost (C) indexes relating measured performance to expectations derived from a set of benchmarks. That paper sets forth a single indicator Total Satisfaction Rating (TSR) with the following multiplicative, scaling adjusted (A), and exponent weighted formula:

TSR = A x Q^alpha x S^beta x C^gamma

Factor weightings are determined by customer surveys of their relative importance. Customer ratings for each factor, on a 1 (dissatisfied) to 6 (extremely satisfied) ordinal scale, are plotted against the calculated total ratings for a significant number of projects, and a linear "best fit" line is overlaid. The slope of that line displays the aggregate level of customer sensitivity to changes in the Quality, Schedule, and Cost factors, for prioritizing improvement efforts.
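As a concrete illustration, the multiplicative formula can be sketched in a few lines of Python; the index values and exponents below are invented for illustration, not taken from the cited paper.

```python
# Hypothetical illustration of the multiplicative TSR formula.
# A is the scaling adjustment; alpha, beta, gamma are the survey-derived
# exponent weights for Quality, Schedule, and Cost.
def tsr(q, s, c, alpha, beta, gamma, a=1.0):
    """Total Satisfaction Rating: A * Q^alpha * S^beta * C^gamma."""
    return a * (q ** alpha) * (s ** beta) * (c ** gamma)

# Indexes near 1.0 mean measured performance matched expectations.
rating = tsr(q=1.05, s=0.90, c=1.10, alpha=0.5, beta=0.3, gamma=0.2)
print(round(rating, 4))   # 1.0119
```

Because the factors multiply, a severe shortfall in any one index drags the whole rating down, which is the behavior the multiplicative formula intends.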

Return to Table of Contents

Combined Metrics Criteria

The foregoing method is a complicated, "quant jock" approach to sensitivity analysis. A better method to develop a single figure of merit for general business and/or executive decision makers would have the following features:

The metric development method must allow customer tailoring of combined metrics models. If not, regardless of the cost or choice of "operator," the model won't effectively assist organization performance improvement efforts and another method should replace it.

A development method should assist cognition (of everyone involved) if it is to result in the consensus selection of a "best" indicator from potential solutions. A method for development of process metrics should be easily computerized, be understandable by all users from naive to technically astute, be usable anywhere, be supportive of combined judgments, and be logical in approach from start to finish. Users should know what, why, when, and how to perform each method step, for adequate effectiveness.

Combined metric quality arises from appropriate and consistent treatment of every subsidiary element with regard to preferences, data uncertainty, depth of research, performance data evaluation, and approach to justification. Method structure and procedures should be compatible with normal tasks, organizational structures, and company policies.

Not all customers of a performance indicator perform each method step, although someone must. Rather, all parties should have sufficient method understanding to either control or contribute to the metric development and to evaluate the results without specialized training and experience. The method should be neither complex nor hidden from users and should be perceived as both complete and supportive while used during meetings and by individuals, without requiring mathematics support from technical people.

The method should allow custom tailoring of models to fulfill metric goals. With flexibility of solution logic as well as variable input data forms, "What if ... ?" questions are answered quickly by the model.

Return to Table of Contents

The Combined Metric as a System

System design is the scientific art of specifying the system functional requirements such that detail designers can implement a system that fulfills its purposes. To design a useful metric, simply consider yourself to be its system designer. Fortunately, that is easier than you may think. For many types of metrics, appropriate measures exist. You need only adapt or tailor the most suitable ones to your specific problems rather than start from scratch.

Return to Table of Contents


A few simple ideas underlie the steps of a systematic, effective method for developing single performance indicators. But what is effectiveness, in this context? Process effectiveness is a measure of how well you (or your group) identified, specified, accomplished, and verified fulfillment of organizational purposes after performing the defined set of procedures. It measures how well you did, compared to what you should have done (assuming validated requirements).

Method efficiency, on the other hand, is closely related to how much time you must spend on all of the various procedural inputs to obtain a satisfactory output of required results.

The combined metric development method described herein is prescriptive, to help you and those you will counsel to build better performance metrics.

Return to Table of Contents

The Underlying Method

The foundation for the decision method described in this paper is Multiattribute Utility Analysis (MAUA), a terrific tool for coaxing voluntary consensus despite any initial disagreement during group interaction. Strongly held views tend to be centered on particular issues, but MAUA requires that all issues be considered by all participants [Yu]. Judgments about the importance of any specific metric are separate from judgments about its relationship to the organization's mission. Agreements made regarding the non-controversial indicators reduce disagreement to a few issues of conflict. Compromise becomes possible and consensus is nearer.

Back in 1981, this author worked with MAUA when FMC proposed a tracked vehicle for a USMC weapon system design concept study. The government-provided system evaluation model was hierarchical and modular. It used utility graphs for qualitative as well as quantitative metrics. It summed coefficient-weighted utility scores to obtain a single figure of merit for evaluating the competing systems against a hypothetical ideal system. Each designer thereby learned what effect any design attribute would have on system performance and the resulting utility score.

MAUA is not fully described here, because adequate theoretical treatment is too lengthy, but those portions important to the described method have been retained.

The proprietary moniker for the integrated ensemble of tradeoff study method steps is the Decision Support Incorporating Documented Evaluations (DSIDE(tm), pronounced "decide") method. Usefulness, clarity, and extreme simplicity were the primary objectives for developing the method, whose application to developing a single performance indicator is next set forth.

Return to Table of Contents


The few simple steps in the DSIDE(tm) method are:

A. Define the purpose
B. Partition problem into model structure
C. Make utility graphs for model primitives
D. Weight the model elements
E. Sum weighted scores for measurement results
F. Document the method and its conclusions

Although "steps" are listed for a complete trade study method (useful for the combined metrics development), blind rule-following is neither implied nor recommended.

You will note general similarity between the listed steps and methods you have seen before. The major differences are critical enhancements that fulfill the combined metrics criteria listed in the first section.

Step A initiates "preselling" final results. The second big contributor (step B) is the understandable structuring of the problem. Another "preselling" contributor is the explicit approval (and retention) of each method product developed during steps A through D. When required approvals empower those with relevant knowledge, total method time is minimized. The next subsections of this paper provide concise overviews of the listed DSIDE(tm) method steps as used to develop a single performance indicator from subsidiary metrics.

Return to Table of Contents

Step A: Define the Purpose

This step resembles System Engineering requirements analysis in a system development project where you must fully understand a problem before you can solve it. Identify the purpose to effectively apply your efforts toward solving the "right" problem. Until you establish the purposes of your study, you can't know which criteria are important or what priority (how much weight) to set for each.

Purpose definition is in terms of goals and subsidiary parametric criteria. Attainment of goals is measured by fulfillment of the defined criteria. This first step, then, consists of listing the goals to fulfill the purposes and their constituent criteria, justifying them sufficiently to obtain consensus among those involved in model design. Approval of the defined purpose list achieves initial results preselling.

Return to Table of Contents

Step B: Partition Problem Into Model Structure

The model framework, the structure of the hierarchy of the listed goals and criteria, is the output of this step. To form the model structure, use a top-down method to classify the listed goals and their criteria into categories for easy comprehension of its parametric elements.

Starting with the problem's overall purpose, you know it has goals which can be classified into separate categories for grouping parametric criteria. Next, divide the categorized first-level goals into their subsidiary subgoals and/or criteria and determine interrelationships. Work down from each less certain, fairly general parent goal to the next level of subgoals. Continue division until each branch ends with one definite, primitive (undividable) criterion as the leaf. As will be shown in the next step, the criterion may be quantitative or qualitative. The branching hierarchy you create in this way should resemble an inverted tree structure.

The preferred model hierarchy is the least complex structure which encompasses all significant criteria.

The structuring method improves upon simply listing problem criteria because it divides a complex problem into manageable components. That division facilitates the next two method steps of scoring and weighting, as well as general comprehension of the overall model and earlier attainment of group consensus.

In the TSR metric example, goals and their criteria are parsed into Quality, Schedule, or Cost categories. Their criteria are collected as child elements. Each goal or subgoal can have its own subelements, to whatever depth complexity of the problem requires, ending each branch with a primitive criterion. Expressing the model hierarchy numerically, the hypothetically combined TSR metric example elements are:

1.0  Quality Index (Q)       [Goal]
  1.1  Q Metric 1              [Criterion]
  1.2  Q Metric 2              [Subgoal]
     1.2.1  Q SubMetric 2.1       [Criterion]
     1.2.2  Q SubMetric 2.2       [Criterion]
  1.3  Q Metric 3              [Criterion]
2.0  Schedule Index (S)      [Goal]
  2.1  S Metric 1              [Criterion]
  2.2  S Metric 2              [Criterion]
  2.3  S Metric 3              [Criterion]
3.0  Cost Index (C)          [Goal]
  3.1  C Metric 1              [Criterion]
  3.2  C Metric 2              [Criterion]
  3.3  C Metric 3              [Criterion]
  3.4  C Metric 4              [Criterion]

The model building method is iterative, with choice of criteria influencing likelihood of feasible results. Upon approval of a metric model hierarchy, the second basis for "preselling" results is established.
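A model hierarchy like the one above can be captured in any nested data structure. The following Python sketch uses nested dictionaries, with empty dictionaries marking the primitive criteria; the element names are the paper's placeholders.

```python
# The example TSR model hierarchy as nested dictionaries. A non-empty
# dictionary is a goal or subgoal; an empty one is a primitive criterion.
tsr_model = {
    "Quality Index (Q)": {
        "Q Metric 1": {},
        "Q Metric 2": {
            "Q SubMetric 2.1": {},
            "Q SubMetric 2.2": {},
        },
        "Q Metric 3": {},
    },
    "Schedule Index (S)": {
        "S Metric 1": {}, "S Metric 2": {}, "S Metric 3": {},
    },
    "Cost Index (C)": {
        "C Metric 1": {}, "C Metric 2": {}, "C Metric 3": {}, "C Metric 4": {},
    },
}

def leaves(node, path=()):
    """Yield the path to every primitive criterion (leaf) in the model."""
    for name, child in node.items():
        if child:
            yield from leaves(child, path + (name,))
        else:
            yield path + (name,)

print(sum(1 for _ in leaves(tsr_model)))   # 11
```

Walking the leaves confirms the example has eleven primitive criteria, each of which will need a utility graph in the next step.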

Return to Table of Contents

Step C: Make Utility Graphs for Model Primitives

A utility graph is a two-axis scale for scoring the utility (worthiness of an option) for a criterion. The vertical axis is an absolute scale for utility score output, with a range from zero to maximum value. The horizontal axis is the criterion performance range. The utility curve spans the performance range and includes at least one zero and one maximum utility value. Curve shape is drawn from a rationale for scoring, which should accompany the graph for use. (For the example qualitative criterion of Figure 1: "Ugly" has charm and "Plain" is unforgivable, but "Wow!" attracts both vandalism and highway patrol attention.) Utility score is the output value that aligns with the curve intersection directly above the position for attained performance, as shown by the dashed line.

[Figure 1: Utility scoring graph with a 0-to-100 vertical utility axis and a horizontal performance range from Ugly to Turns All Heads.]

Figure 1

For each hierarchy criterion, construct an appropriate utility graph. Zero score is the minimum acceptable performance with respect to the ultimate purpose. This method of scoring works well with both qualitative and quantitative performance attributes.

The validity of quantifying qualitative criteria with utility graphs is established by its equivalence to the fuzzy logic definition of linguistic variables [Schmucker] for ordinal positions on the utility graph input range [Jones 1994 NCOSE]. The output utility scores for qualitative performance factors can be fixed or be given a range of uncertainty to use within best/worst case and Monte Carlo analyses.
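Computationally, a utility graph reduces to piecewise-linear interpolation over (performance, utility) breakpoints, with qualitative labels first mapped onto ordinal positions. The breakpoints below are invented for a styling criterion in the spirit of Figure 1, not taken from it.

```python
# Sketch of a utility graph as piecewise-linear interpolation.
def utility(x, breakpoints):
    """Interpolate the utility score; clamp outside the performance range."""
    xs = [p for p, _ in breakpoints]
    us = [u for _, u in breakpoints]
    if x <= xs[0]:
        return us[0]
    if x >= xs[-1]:
        return us[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return us[i - 1] + t * (us[i] - us[i - 1])

# Qualitative inputs map ordinal labels onto the horizontal axis first.
styling = {"Ugly": 0, "Plain": 1, "Average": 2, "Nice": 3, "Wow!": 4}
curve = [(0, 20), (1, 0), (2, 50), (3, 100), (4, 70)]
print(utility(styling["Nice"], curve))   # 100.0
```

The same function serves quantitative criteria directly; only the label-to-position mapping is skipped.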

Utility graphs clearly communicate the assumptions and relative values, just as the model's structure facilitates a common understanding of the single performance indicator or study purpose.

Mathematical formulas easily accomplish the scoring with quantitative data and translation tables can convert qualitative values into numbers, but neither conveys the information as clearly as utility graphs. Remember, the primary customers for the results must understand the means used to obtain the final figure of merit. When they have approved a utility graph for each criterion, the third method output which assists "preselling" the final result is established.

Return to Table of Contents

The Ideal Standard

Another benefit from using utility graphs for option scoring in hierarchical models is the automatic development of an "ideal" standard for use in tradeoff analyses. The theoretical option that scores maximum for every utility graph is, by definition, the ideal solution to the problem. The ideal solution ignores real-world performance tradeoffs while fulfilling the entire wish list. Impossible to implement, it still provides a fixed standard for evaluating and ranking all real-world and proposed options. An "ideal" standard for comparison of options improves on a pure relative comparison approach to picking a best solution from many options. Independently comparing each option to the ideal removes the possibility of ranking changes which sometimes occur when an option is added to or removed from studies that compare their results relative to each other.

Return to Table of Contents

Step D: Weight the Model Elements

The hierarchy could be used as is, but each child element to a given parent currently is equal in importance to its siblings. This is unlikely for real-world elements, so explicit weighting to set the relative priorities of each element is the next step in assessment of options. The following manual methods are proposed for use when computer tools are not available:

Return to Table of Contents

Direct Weighting

Direct assignment of weighting is performed by assigning the total figure of merit value as 1.0. Assign to each top level goal category the decimal value representing its weight (relative priority) which will sum to one when combined with the weights assigned to the other categories. Next, assign relative weighting to the subelements for each goal category in the same fashion. Continue descending through the hierarchy levels until every criterion has been assigned its weight value. Explicit priority is now established for each element, but a rationale for the weighting must be supplied or auditability is lacking. Without argument for the priority values or their differences, justification for the assigned weighting is weak at best.
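Direct weighting amounts to normalizing raw priorities so each sibling group sums to 1.0; a minimal sketch with invented priorities:

```python
# Direct weighting sketch: raw priorities for one sibling group,
# normalized so the group's weights sum to 1.0 (values are illustrative).
raw = {"Quality": 5.5, "Schedule": 3.5, "Cost": 1.0}
total = sum(raw.values())
weights = {name: value / total for name, value in raw.items()}
print({name: round(w, 3) for name, w in weights.items()})
```

The normalization is trivial; as the text notes, the hard part is recording a defensible rationale for the raw priorities themselves.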

Return to Table of Contents

Algebraic Weighting

An alternative to direct weighting that better supports auditing the priority assignments is algebraic solution of simultaneous equations. Consider weighting a three-element group from an algebraic perspective. Compare elements in pairs and assign their fractional relationships, from 1/1 for equality to any appropriate ratio. With the pair relationships assigned, the equations are solved by substitution, using the requirement that the weights sum to 1 as one of the equations. For instance, if element 2 (E2) = 4/5 E3 and E1 = 2/3 E2, the simultaneous weighting equations are:

E1 + E2 + E3 = 1.0,
E2 = 4/5 x E3, and
E1 = 2/3 x E2, which leads to:
  (0.6667 x E2) + E2 + E3 = 1.0, or
  0.6667 x (0.8 x E3) + (0.8 x E3) + E3 = 1.0, or
  (0.5334 x E3) + (0.8 x E3) + E3 = 1.0, or
  2.3334 x E3 = 1.0, so
  E3 = 1.0/2.3334 = 0.4286 and
  E2 = 0.8 x 0.4286 = 0.3429 and
  E1 = 0.6667 x 0.3429 = 0.2286.

An algebraic approach is recommended for weighting two and three element groups. As number of elements to be weighted increases, the difficulty (tedium) multiplies along with the number and length of the simultaneous equation formulas.
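The three-element example above can be checked mechanically: express every element as a multiple of E3, then normalize so the weights sum to 1. A short Python sketch:

```python
# The paper's three-element algebraic weighting example, solved by
# expressing E1 and E2 as multiples of E3 and then normalizing.
r2_over_3 = 4 / 5          # E2 = 4/5 * E3
r1_over_2 = 2 / 3          # E1 = 2/3 * E2

e3 = 1.0
e2 = r2_over_3 * e3
e1 = r1_over_2 * e2
total = e1 + e2 + e3
weights = [e1 / total, e2 / total, e3 / total]
print([round(w, 4) for w in weights])   # [0.2286, 0.3429, 0.4286]
```

The result matches the hand-worked solution to four decimal places.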

Following weighting, the model is complete for use as shown below. Further, listing the maximum possible contribution for each element to the TSR as multiplied weights at the ideal standard maximum utility score of 100 for every criterion provides automatic sensitivity analysis. For example, as shown for element 1.1 (the first criterion) in the Maximum Utility column, 12.3 = 100 x .229 x .537. The largest Maximum Utility values provide the greatest benefit from an improvement effort.

                               Element   Maximum
                               Weight    Utility
1.0  Quality Index (Q)          .537
  1.1  Q Metric 1               .229      12.3
  1.2  Q Metric 2               .327
     1.2.1  Q SubMetric 2.1     .631      11.08
     1.2.2  Q SubMetric 2.2     .369       6.48
  1.3  Q Metric 3               .444      23.84
2.0  Schedule Index (S)         .364
  2.1  S Metric 1               .179       6.52
  2.2  S Metric 2               .439      15.98
  2.3  S Metric 3               .384      13.98
3.0  Cost Index (C)             .099
  3.1  C Metric 1               .313       3.1
  3.2  C Metric 2               .222       2.2
  3.3  C Metric 3               .306       3.03
  3.4  C Metric 4               .159       1.57
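The Maximum Utility column is just 100 times the product of the weights on a criterion's path from the root, which a few lines of Python can confirm for elements 1.1 and 1.2.1 of the example table:

```python
# Maximum-utility sensitivity check: the ideal-score contribution of a
# criterion is 100 times the product of the weights on its path.
def max_utility(*weights, top_score=100):
    """Multiply the path weights by the ideal (maximum) utility score."""
    product = top_score
    for w in weights:
        product *= w
    return product

print(round(max_utility(0.537, 0.229), 1))          # element 1.1: 12.3
print(round(max_utility(0.537, 0.327, 0.631), 2))   # element 1.2.1: 11.08
```

The criteria with the largest products are the ones where an improvement effort pays off most.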

When primary customers have approved weighting for all groups of model elements, the fourth and last decision basis which helps to "presell" the result is established. You now can gather the performance data, input it to the model, and determine the results.

Still, even with the pairing relationship assignments recorded for their justification in the algebraic weighting method, consistency remains an issue. Consistency of weighting is a measure of how near the implied relationships are to the explicitly assigned element relationships. Achieving it becomes increasingly difficult as the number of group elements increases. Accurate weighting of five or more model elements with good consistency requires mental ability few among us can claim.

Return to Table of Contents

An Inconsistent Example

Paired comparison inputs for a group of more than two elements imply relationships which can be inconsistent with the remaining inputs. As an example, if for pair (1,2) element 1 (E1) is more important by 9/1, and for pair (1,3) E1 is more important by 7/1, then the implied relationship for pair (2,3) is that E3 is more important by 9/7. Asserting anything else as the relationship, such as E3 more important at priority level 5, reduces the weighting consistency in proportion to the difference. Claiming the reverse of the implied relationship increases the inconsistency further. This example of inconsistency is used in the next section to illustrate its detection and correction.
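The implied relationship is simple ratio arithmetic, as this sketch of the example shows:

```python
# The implied third ratio in the paper's three-element example:
# E1/E2 = 9 and E1/E3 = 7 together fix what E3/E2 must be.
e1_over_e2 = 9.0
e1_over_e3 = 7.0

# Consistency requires E3/E2 = (E1/E2) / (E1/E3) = 9/7.
implied_e3_over_e2 = e1_over_e2 / e1_over_e3
print(round(implied_e3_over_e2, 4))   # 1.2857, i.e. E3 more important by 9/7
```

Any assigned (2,3) relationship other than 9/7 in favor of E3 introduces measurable inconsistency.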

Return to Table of Contents

The Weighting Template

A tool which provides auditable and highly consistent element weighting is supplied by ODDSCO on floppy diskettes for the IBM PC and Apple Macintosh computers. A worksheet template for Lotus 1-2-3® and Excel® spreadsheet programs applies the Analytic Hierarchy Process (AHP) [Saaty] to convert paired comparisons for up to nine sibling model elements into an objective weighting set [Note 1]. Its unparalleled advantage is simplification of the difficult, subjective weighting task while indicating the level of attained consistency.

To use the weighting template, one merely inputs the relationships of compared element pairs and it calculates the resulting set of weights. Lists of element pairings are questionnaires for gathering the input relationships. An Input Form lists importance level definitions and records the relationship of each pairing for a set of up to nine elements. The relationship assignments are mentally "natural" (a scale of 1 to 10), so they are easily comprehended by the result customers. The relationship record indicates which member of each pair is most important and by how much, just as fractions are recorded in the algebraic weighting method.

Input Forms (especially with approval signatures) are useful historical records. They can be presented during review meetings to continue "preselling" the results.

Figure 2 is a typical weighting template screen which has relatively consistent input entries for the maximum set of nine elements. Matrix data input and weighting output both occur on the same screen, as shown. The input pairings begin with comparison of elements 1 and 2 (labeled C 1,2: in cell column A row 2 or A2) with element 1 assigned as most important in cell B2 and by how much as 2/1 in cell C2. Cells A3 through A9 label the remaining element 1 relationships in columns B and C. Input for the other labeled pairings is similar. For pairing C 8,9:, labeled in cell J15, element 8 is most important at 3/1 rating.

[Rows 2 through 15 of the template screen hold the paired-comparison inputs, C 1,2: through C 8,9:, each recording the more important element number and its importance level. Rows 18 through 20 hold the results:]
0018         #1    #2    #3    #4    #5    #6    #7    #8    #9  C.I.  LIM:
0019      0.078 0.048 0.029 0.071  0.15 0.304 0.219 0.073 0.029 0.025 0.125
0020       0.08 0.049  0.03 0.063 0.142 0.309 0.226 0.075 0.025  SIMULT-EQU

Figure 2

The input importance level is a ratio scale, to retain essential proportionality after normalization. The input rating assignment of 1/1 means equality of contribution to the parent, that each element is as good as the other. Assignment of 9.99/1 is an assertion that the most important element of the pair is highly dominant and contributes extremely more value or worthiness to the parent.

When all the relationships have been entered for the group being weighted, initiate recalculation and the two bottom rows (19 and 20) fill with result values for the element numbers in row 18. Row 19 is the weighting set from the AHP computation, while row 20 is the real "bottom line" or true simultaneous equation solution for the input element relationships. These are the weights to use in the model [Note 2]. Both weighting sets are shown to provide feedback on the effect of revisions to the input relationships when consistency is deficient, as indicated by the AHP algorithm [Note 3].

In cell K19 of the Figure 2 template screen is the computed Consistency Index (C.I.), and in cell L19 an associated Limit (LIM:) whose value is related to the number of elements being weighted [Note 4]. It is desirable to revise input and recalculate to bring the C.I. below LIM:, because consistency affects the reliability of the weighting and thereby the output result.

The consistency testing feature helps guard against input of clerical errors which might otherwise stay undetected. It also prevents a display of obvious ignorance regarding one or more of the study elements. It certainly helps bring about earlier group consensus via discussion of compromises needed to obtain consistency prior to the synthesized weighting approval.

For two- or three-element groups, weight with direct assignment or the algebraic method instead, unless you input decimal numbers between the defined whole-number importance levels; with whole-number importance level input, the set of possible output weights is small.

Following group weighting, transfer the weights into the derived model to await input of the primitive elements' utility scores.

Return to Table of Contents

Step E: Sum Weighted Scores

Once every primitive element has a measured (or estimated) attainment for an option, the utility curve translations provide the earned scores (numbers between 0 and 100). Multiply each score by its assigned weighting and add that to the weighted scores from all neighboring children to the same parent element (its siblings).

Summing the weighted scores for all the children of each parent element provides its utility value, which is another score for weighting and adding with weighted scores for the other children at the next parent level in the hierarchy. Sometimes you must descend another branch to obtain the weighted score for summation with its siblings.

When all criteria utility scores are weighted and summed to the top of the model for an option, the final result is a single, easily interpreted scalar value (from 0 to 100) which indicates the purpose level performance for that option. Repeat for each option and rank them by their net utility scores to find the best solution.
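Steps C through E together amount to a recursive weighted roll-up. The sketch below uses the example model's weights with invented leaf utility scores; each node is a (weight, children) pair, or (weight, score) at a primitive criterion.

```python
# Minimal roll-up of weighted utility scores (weights from the example
# table; the leaf scores are illustrative).
def net_utility(children):
    """Sum weight * value over siblings, recursing into subgoals."""
    total = 0.0
    for weight, value in children:
        if isinstance(value, list):       # subgoal: roll up its children
            total += weight * net_utility(value)
        else:                             # primitive criterion: utility score
            total += weight * value
    return total

option = [
    (0.537, [(0.229, 80), (0.327, [(0.631, 60), (0.369, 90)]), (0.444, 70)]),
    (0.364, [(0.179, 50), (0.439, 85), (0.384, 65)]),
    (0.099, [(0.313, 90), (0.222, 40), (0.306, 75), (0.159, 100)]),
]
print(round(net_utility(option), 2))   # 72.45
```

Repeating the roll-up per option and sorting by net utility ranks the options against each other and against the ideal standard's score of 100.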

When all decision bases produced by the DSIDE(tm) method are in force and unassailable, agreement with and acceptance of model results is virtually automatic. No customer approval is required here, because this step merely applies data to the approved model.

Of course, the model result customer(s) may insist on evidence of option data validity for each input to the utility graphs or metric formulas. Review of the option performance data for accuracy and validity is prudent in any case. Then, the "presold" result should be accepted without argument from the model customers because all areas of potential contention have been addressed and approved.

Return to Table of Contents

Step F: Document the Method and Its Conclusions

Typically, an analysis Report is required. Assemble records of the actions and decisions involved in performing method steps A through E and develop an understandable description of the method and model.

Make the report concise, but with audit trail of method steps to support future investigation of result and model validity. Clearly show each step, so others can follow the sequence.

When performing the method for others, as an expert or consultant, you want the fruits of your labor used. Therefore, an appropriate report has complete exposition of model design along with rationale and methods used for setting weights, designing utility graphs, and getting usable performance data. When a customer or assigned experts or customer representatives helped, say so. The objective is to show sufficient cause for anyone who may review the report to generally concur with its conclusions. Describe model foundation development and any economic justification approach, if employed.

Return to Table of Contents


Resultant Method Capability

The described method is an educational decision making system, capable of handling extremely complex selections while remaining adaptable to each user's differing needs because it is based on simple steps executing the System Engineering paradigm of problem solving. Much of the "education" comes from the following features:

The DSIDE(tm) method collects and organizes relevant data into a logical structure, communicates the model elements, and gains support for conclusions from group members. It doesn't manipulate you with the order of presentation and, instead, helps you to obtain insight into your trade study methods. It employs a few simple procedural steps that can be addressed separately, so an uninterrupted and lengthy duration of attention is not required for successful application. The methodology is effective because it accommodates factors upon which it often is very difficult to place numerical values.

It doesn't tell you what to think, but helps you to discover what you should think to achieve your study objectives.

Customer(s) accept results as rationally derived from a method specifically tailored to solving their problem.

Spreadsheet Decision Modeling. When a computer is available, installation of the weighted model in a spreadsheet template is quite simple. A temporal series plot of net utility scores from each acquisition of the subsidiary metrics, with an overlaid linear best-fit line, clearly shows the single performance indicator trend, indicating corrective action when the slope is downward.
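The trend-line slope can be obtained with an ordinary least-squares fit over successive indicator readings; the monthly scores below are invented for illustration.

```python
# Least-squares slope of the best-fit line through successive net utility
# scores taken at equal intervals (readings are illustrative).
def trend_slope(scores):
    """Slope of the best-fit line through (0, s0), (1, s1), ..."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

readings = [71.2, 72.5, 70.8, 69.9, 68.4]   # monthly net utility scores
print(trend_slope(readings))                # negative slope: investigate
```

A negative slope over several acquisitions is the signal for corrective action that the text describes.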

Return to Table of Contents


Methodology Benefits Summary

Return to Table of Contents

Weighting Template Summary

Return to Table of Contents


System Engineering Trade Studies

For auditable evaluation of system design concept options, the DSIDE(tm) method is recommended as a System Engineering tool. It can combine system cost and project technical, schedule, and cost risk criteria in the model, along with predicted technical performance, to assess requirements fulfillment for each option.

Return to Table of Contents

Maturity Models

The usual maturity model approach to process evaluation requires exceeding every criterion defined for a designated level to obtain credit for its accomplishment. No credit is obtained for exceeding a criterion of a higher level until all sibling criteria are met, gating advancement to the next level. Criteria are equally weighted regardless of their individual value to fulfilling program and project objectives (beyond the fulfillment of maturity model rules).

Summarizing merit, with level of attainment indicated by the assigned net utility score, allows weighting to be explicitly assigned in accordance with customer priorities, rather than an implicit priority that is adjustable only by revising evaluation wording. Maturity models should be tailored to anticipated project tasks. The additive model for a single performance indicator allows an increased utility score for one metric to compensate for another metric's lower or decreased score.
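The compensating behavior of the additive model can be shown with a minimal sketch. The weights and utility scores below are hypothetical, chosen only to illustrate how a gain in one metric can offset a loss in another:

```python
# Minimal sketch of the additive single-performance-indicator model.
# Weights and utility scores are hypothetical illustrations.

def net_utility(weights, utilities):
    """Weighted additive model: sum of weight times utility score per metric."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must be normalized"
    return sum(w * u for w, u in zip(weights, utilities))

weights = [0.5, 0.3, 0.2]  # explicitly assigned per customer priorities

baseline = net_utility(weights, [0.8, 0.6, 0.7])  # 0.72
# Metric 2's score drops while metric 1's rises: in the additive model the
# increase partially compensates, so the indicator moves only slightly.
shifted = net_utility(weights, [0.9, 0.4, 0.7])   # 0.71
```

This compensation is precisely what the gated maturity-level approach forbids, since there no credit flows across level boundaries.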

Return to Table of Contents


Note 1. (For the mathematicians.) Weighting template input fills a square matrix, with a unity diagonal, whose size equals the number of elements. Placement of the priority level value above or below the diagonal, with its inverse in the reflected cell, is determined by which element of the pair is more important. This input method preconditions the matrix for orthogonality, which allows a simple eigenvalue computation algorithm.
After the final values for the set of pairings are input and the template calculates, the iterative algorithm finds the right positive eigenvector with the largest eigenvalue and normalizes the result. Calculation and normalization repeat until convergence to a priority-ordered (weighted) solution. In the positive reciprocal matrix, the largest eigenvalue becomes a measure of the consistency of the input relationships, whose value for obtaining group consensus has been described. (Most of us need know only that AHP works for this application and is particularly valuable for obtaining weighting consistency.)
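The iterative algorithm can be sketched as a short power-iteration routine. This is a plain-Python illustration of the computation Note 1 describes (a spreadsheet template does the same work), applied to the example pairings from Note 2 (E1 > E2 by 9/1, E1 > E3 by 7/1, E3 > E2 by 5/1):

```python
# Sketch of the AHP eigenvector computation by power iteration on a
# positive reciprocal matrix, with the consistency index (C.I.) derived
# from the largest eigenvalue.

def ahp_weights(matrix, tol=1e-10):
    """Normalized principal eigenvector and C.I. of a reciprocal matrix."""
    n = len(matrix)
    w = [1.0 / n] * n
    while True:
        aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(aw)
        new_w = [x / total for x in aw]  # normalize so weights sum to 1
        if max(abs(a - b) for a, b in zip(new_w, w)) < tol:
            # Largest eigenvalue estimated from the final multiplication.
            lam = sum(aw[i] / w[i] for i in range(n)) / n
            ci = (lam - n) / (n - 1)  # consistency index
            return new_w, ci
        w = new_w

# Note 2's inconsistent example: E1 > E2 by 9/1, E1 > E3 by 7/1, E3 > E2 by 5/1.
m = [[1,   9,   7],
     [1/9, 1,   1/5],
     [1/7, 5,   1]]
weights, ci = ahp_weights(m)
# weights ≈ [0.772, 0.055, 0.173], ci ≈ 0.104
```

A consistent matrix (every implied ratio input exactly) yields C.I. = 0; the nonzero C.I. here is the inconsistency feedback discussed in Note 2.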
Return to citing paragraph.

Note 2. The principal critic [Yu] of Saaty's AHP method correctly points out that its output is not equal to a simultaneous equations solution. Using the inconsistent input example (E1 > E2 by 9/1, E1 > E3 by 7/1, and E3 > E2 by 5/1), the template AHP method outputs weights of #1 = .7719, #2 = .0546, and #3 = .1735. The simultaneous equation solution weights are #1 = .7857, #2 = .0357, and #3 = .1786. That is a significant difference in the second decimal place, but is it really a problem? If the implied E3 > E2 by 9/7 is input instead, the C.I. = 0 and the weighting sets are identical. AHP as applied in the template provides clear feedback of the example's inconsistency, with a C.I. = .104 against a LIM: = .025, so final weights that differ greatly from the exact solution are unlikely.
Return to citing paragraph.

Note 3. Many sources of potential inconsistency (in implied versus input relationships) exist when a group approaches the nine-element maximum of the template design. Because the trial-and-error work needed to find a highly consistent input set expands to impracticality, AHP weights alone may be considered deficient. Responding to that criticism and to Note 2, the simultaneous equation solution to the element pairs relationship input is also provided as the weighting set to use in a model.
Return to citing paragraph.

Note 4. The limit rises exponentially from zero for two elements (which cannot be inconsistent) through 0.1 for seven elements (the psychological limit for most people when simultaneously considering multiple items), because inconsistency is less important than consistency by one order of magnitude [Saaty].
Return to citing paragraph.

Return to Table of Contents


Delozier, Randall, and Snyder, Neil, Engineering Performance Metrics, Proceedings of the Third Annual International Symposium, National Council on Systems Engineering (NCOSE), Arlington, Virginia, July 1993.
Return to citing paragraph.

Jones, James H., Evaluating Project Risks With Linguistic Variables, Proceedings of the Fourth Annual International Symposium, National Council on Systems Engineering (NCOSE), San Jose, CA, August 1994.
Return to citing paragraph.

Saaty, Thomas L., The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
Return to citing paragraph.

Schmucker, Kurt J., Fuzzy Sets, Natural Language Computations, and Risk Analysis, Computer Science Press, Rockville, Maryland, 1984.
Return to citing paragraph.

Yu, Po-Lung, Multiple-Criteria Decision Making: Concepts, Techniques, and Extensions, Plenum Press, New York, 1985.
Return to citing paragraph.

If you have any questions on this tutorial subject, please contact the author as listed below. You will receive a response, and a FAQ section may result.

Return to Table of Contents

This tutorial is presented by:

The ODDSCO Co. logo (a stylized duck).

Optants Documented Decision Support Co.
297 Casitas Bulevar
Los Gatos, CA 95032-1119
(408) 379-6448 FAX: (Same, by arrangement)


Tutorial Author: jonesjh@optants.com
Other subjects: consult@optants.com