Abstract. This tutorial provides an extensive overview, without a
"naming of names" or detailed discussions, of the general situation
within Knowledge Support for now and for the relatively near future.
Because the general trend is toward increased integration of applications for improving computer-assisted decision making, some background helps. When you know something about Executive Information Systems (EIS) and the Decision Support Systems (DSS) from which they came, Artificial Intelligence (AI), Expert Systems, Neural Networks, and DSS subsystems such as Operations Research (OR) tools, you will understand where the Decision Support Incorporating Documented Evaluations (DSIDE™) process overlaps other tools and systems carrying a knowledge support banner.
Mostly as a public service, based on the likely ratio of interested visitors to
potential customers and/or clients, this free tutorial covers the subject
sufficiently to partially enable your evaluation of the value of
most available related products and services within your planned approach to
goals fulfillment. (The word "partially" was emphasized to remind you that
knowledge of material from the complete set of tutorials listed on the home
page is necessary to properly evaluate the ODDSCO offerings.)
At the end of this tutorial is an opportunity to pose question(s) on this subject.
Return to Home Page [Use a Return Link after
any of the listed subsections to quickly reach this option.]
Fully melding the human mind with the computer is still in the future, but some
amazing progress toward that goal has been made in the last few years.
Automation increasingly serves as enabling technology, supporting the
less structured knowledge worker tasks with high creative content.
Going beyond computerized networks for information handling, managers
increasingly employ powerful personal workstations with statistics packages to
support their company and corporate level decisions with information available
interactively, on demand. Expanding productivity within today's reductions in
management hierarchy requires that all knowledge workers involved in a given
problem solution effort share their information.
The ancient problem of getting the critical information to the key people in
the appropriate form at the right time still remains, although the methods of
transfer have changed. Today, however, the information is less likely to be
scarce than to be unobtainable by the persons who would apply it if they had it.
Inroads have been made, such as externally networking various local area networks
together for expanded individual interconnection with others of shared
interests, but much remains to be done.
Return to Table of Contents
A Decision Support System, as an entity, generally refers to a grouping of
integrated and unified computer programs applied to provide nearly
instantaneous, interactive support for managerial-level mental processes. It
most likely includes strong relational data base management system (RDBMS)
for access to key company data kept on the mainframe computer. A DSS usually
supports a form of alternatives development and selection.
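The alternatives development and selection a DSS supports can be sketched, in miniature, as a weighted-criteria scoring of options. This is an illustrative sketch only; the criteria, weights, and ratings below are invented for the example.

```python
def score(alternative, weights):
    """Weighted sum of criterion ratings (0-10 ratings assumed)."""
    return sum(weights[c] * alternative[c] for c in weights)

# Invented criteria weights (chosen to sum to 1.0) and two alternatives.
weights = {"cost": 0.5, "quality": 0.3, "speed": 0.2}
plan_a = {"cost": 7, "quality": 9, "speed": 4}
plan_b = {"cost": 8, "quality": 5, "speed": 9}

# Select the highest-scoring alternative.
best = max([("plan_a", plan_a), ("plan_b", plan_b)],
           key=lambda item: score(item[1], weights))
```

Real DSS modeling languages wrap far richer model forms around the same core idea: rate the alternatives, weight the criteria, and rank the results.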
The DSS label originally applied to Management Information System (MIS)
extensions using some Information Technology applications that allowed for
limited control of decision modeling where system complexity exceeds the normal
human capabilities for comprehension. Now strongly associated with data
processing, MIS is used to report ongoing performance, variance from expected
conditions, and develop plans for fulfilling goals and implementing strategies.
It also should flag problems and potential areas for concern if present
operational trends continue.
Today, emphasis is more on planning for the future rather than management
control of the existing situation and on bringing more logic and structure to
the normally haphazard managerial decision-making processes. To fully exploit
the indirect utility of computer power, a DSS requires controlled access to
much of the current corporate MIS data base, along with a Management Science
related means to fully analyze that information. (Later sections of this
tutorial discuss the related tools.) Access to the company mainframe computer
may be direct or via an attached PC (personal computer). The typical DSS offers
the use of a fairly powerful internal modeling language to obtain full analytical flexibility.
The primary purpose of the DSS, that of evaluating alternatives through
formulation of Decision Models, distinguishes it from other computer-based
tools. A key point is that the DSS user need not become a computer expert
because the computer provides guidance equivalent to (or better than) that of a
skilled manager in an on-line, interactive fashion with a graphical user interface.
A DSS should provide capability to model decision problems in terms of objectives, information acquisition, and meaningful model representation. It enables the development of routines specifically tailored to your individual requirements. It gives to you and to the more knowledgeable executives great power for determining the current health of your business and causes of ongoing trends for better or worse. Evaluations can be by product, profit center, project task force, time period, or other such useful grouping. You can insert hypothetical new causes in your models at selected simulated times and project possible effects, to verify hypotheses for almost any "What if ... ?" question you can imagine. You can relatively quickly evaluate many scenario projections and present only those causes resulting in the "best" outcomes as suggested actions to take.
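A toy sketch of the "What if ... ?" idea: project a simple model under hypothetical causes and keep the best outcome. The revenue figure, growth rates, and scenario names are invented assumptions, not taken from any particular DSS.

```python
def project(revenue, growth_rate, periods):
    """Compound a starting revenue over the given number of periods."""
    for _ in range(periods):
        revenue *= (1 + growth_rate)
    return revenue

# Hypothetical "causes" expressed as per-period growth rates.
scenarios = {"status quo": 0.02, "new product": 0.05, "price cut": -0.01}
outcomes = {name: project(1_000_000, rate, periods=4)
            for name, rate in scenarios.items()}
best = max(outcomes, key=outcomes.get)
```

Evaluating many such scenario projections and presenting only the "best" outcomes is exactly the loop described above, just at far larger scale.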
The system tools act as a multiplier for your mental skills through an
integration of tools previously requiring separate application and strong
mathematical skills. While predictive models can outperform you through
consistent rules application, the DSS cannot "replace" you because
generation of complex potential solutions or alternatives requires human creativity.
The most probable application of a DSS, however, is in performing detailed
financial analyses with extensive employment of statistical tests and
projections. It also is valuable when you can apply equation-based models.
Eventually, a projected increase in capability (at moderate to great expense)
of AI-enhanced programs with virtual reality presentations may finally fulfill
the ultimate requirement of full analysis power without need for a corresponding
understanding by managers of constraints bounding routines for statistical
treatment. The program would at least warn you of the inappropriate application
of a statistical measurement technique. It would show you the reasoning chain
for its decision and persuade you with some logical explanation. With such
extensive expertise built-in, the DSS would offer valid alternative tests and
suggest use of related data base information.
Finally, because the advanced computer program could "learn" from
interaction with users, expert human knowledge should gradually be imparted to
the computer program's data base. The DSS then could propose a few suitable
"What if ... ?" variations, based on the type of analysis you were performing.
The original concept of DSS was for fairly extensive executive mental process
and strategic decision making support, but the recent tendency has been toward
its use by middle management and staff for their day-to-day tactical decisions.
So many heavily advertised decision aiding programs with extremely limited
capability have been called DSS that the term has become worse than imprecise.
In fact, anything providing Decision Analysis support to anyone can be
considered a DSS. Its meaning has been diluted to the point where another label
is required to encompass the original concept:
Return to Table of Contents
The more senior the executive, the greater the tendency toward reliance on a
subordinate's development of DSS information and toward personal reliance on
the "soft" data of a more informal nature. Hierarchical authority is
becoming less workable than knowledge for making realistic decisions, because
ignorance effectively increases with responsibility. Agenda development,
communications, monitoring of status, interpretation of discovered trends, and
evaluating impact of significant events for dissemination of a timely
coordinated corrective action plan within a strategic perspective have become
more important at the executive levels. The top executives want to understand
the context of their domain of authority and responsibility, to take the pulse
of critical items, to obtain the same answer from any staff member asked the
same historical or current data question, and to be warned of important events
as input to their decisions for minimization of surprise. They want all this
support for what they do without wasting their own or their staff's time and
without necessity for extensive training of any who must access the data base.
Information technology is the key, even when the information is received
second-hand from staff knowledge workers. The data may be qualitative, but it
must fulfill individual requirements for content, format, frequency, and timeliness.
A substantial amount of inference and intuition developed from broad experience
and knowledge facilitates quick recognition of opportunities for agenda changes
to guide subordinate action. An EIS that would thoroughly support such
unstructured thinking is a rather difficult system to construct. Cost also
limits how much of the desired information can be made accessible. Typically,
an EIS will let you pick a subtopic within some major topic you are studying,
search out other files containing the key words, then retrieve and present
them graphically for review.
This is something like hypertext (a hierarchical data access approach first
popularized on the Apple Macintosh™ desktop computers as the Hypercard™
program). The stronger versions of EIS can retrieve data via an interconnected
network from multiple computer types with varying operating systems and data
The glut of information is nearly as bad as actual scarcity, when you wish to
add to your stock of knowledge, because power is derived from the intelligence
contained therein. The job of the EIS is to distill data to its essence, to its
meaning for your specific situation with respect to an applicable standard or goal.
To that end, the EIS will help you to create ad hoc reports and apply custom
tools that perform "data mining," the specifically directed
intelligent processing of massive or frequently updated data for detection of
subtle but significant statistical relationships, as well as to accomplish
repetitive and routine work. You can look at details of choice, when you wish
to see behind the common set of summaries, because you can't always simply
explain exactly what you need. Because such tailoring of access is made easy,
redirection of middle management attention is quicker.
The net effect of a well-designed EIS is support for accomplishment of the
organizational mission. It is always there, taking the pulse, so the executive
can be elsewhere working on strategic goals. To that end, it must perform
analyses well and provide high quality output presentations. An immediate
benefit of central maintenance of the EIS data base is reduction in staff work
in user organizations.
Return to Table of Contents
The field of AI is occupied with finding ways to make machines perform tasks
in ways which would be characterized as intelligent if done by humans. Some AI
researchers attempt to model human thought processes for solving a class of
problems which give computers trouble but which humans regularly solve. Humans
learn, plan, diagnose faults, play games, and create new concepts without using
a formally defined mathematical procedure until a demand for "proof" arises.
AI therefore emphasizes processing of symbols over numbers, combines
plausibility with standard logic while making inferences and deductions,
tolerates some errors while working with "fuzzy" concepts, employs
heuristics as well as mathematical algorithms for problem solving, and searches
associated knowledge data bases for applicable structured facts to process
while working with always ambiguous natural language. Knowledge processing is
machine reasoning or rule selection that results in apparently
purposeful, intelligent appearing behavior for problem solution.
With extensive AI, the EIS could characterize itself for the mental processes
or decision-making style of a given user and act as a higher level mentor or
consultant with helpful critiques of your work.
Even if you do not fear the concept of an intelligent machine, as many do, the
complete understanding of natural intelligence required before AI can properly
substitute for a knowledgeable human is not imminent. Speech understanding is
a goal of many researchers, but that is a much tougher job than natural language
processing which provides today's intelligent seeming interface with humans.
(Even humans have difficulty with accents, homonyms, and individual
idiosyncrasies in speaking.)
Another major problem is having computers recall as humans do, when it works
well, because that is another poorly understood area of natural intelligence.
Neural networks (described later) are as close as it gets for now. The
successful applications are mostly in the Expert System category (described in
the next section). The best you can hope for is machines that are quite clever,
but with serious limitations in many areas.
That is because knowledge principally concerns when or how to
appropriately use the what of the exponentially increasing volume of
data, information, and intelligence obtainable in unstructured form from
information sources that are not yet machine-readable (humans).
Computers attempting to process knowledge, even where the required data is
available in electronic form, must have a well-bounded domain within which to
work. Humans are required for handling the dynamic context and uncertain
reality in which today's businesses exist. Computers can find some of the
patterns, but humans define their search requirements, determine their meaning,
make the final decisions, and must provide at least part of the interface with reality.
The Japanese Fifth Generation Software computer language projects (since
essentially abandoned for large scale support) were attempting to find ways to
convert knowledge into wisdom as informed judgment through
inferences based on analogous situations. That particular nut appears to be too
tough to crack with brute force in a short time. True machine intelligence
requires creativity or production of better "ideas" and
solutions to problems which exceed responses and context originally programmed.
Since it may well be impossible to make computers truly wise, you probably
should avoid setting them to tasks requiring wisdom in the human sense.
Whenever you do, you abdicate the power to choose and become subservient to a
machine with inferior reasoning power. (Just as most knowledge workers in their
daily toil, eh?) You may just be replacing the old dangers of human limitations
with a new, improved mediocrity. In safety critical applications, this is
undesirable. All that wonderful technology still requires human "vision"
with experienced judgment for accomplishment of strategic objectives with its aid.
The geometrically (some would say exponentially) increasing volume of data must
be prefiltered in accordance with some human's judgment before its placement
in the model. The AI or knowledge-based Expert Systems (discussed next) can't
solve those problems. Even that information is not sufficient, although the
temptation to build your models solely with (and to rely on the answers derived
from) a too-handy data base is substantial. This is most true when you are
under typically great schedule pressure to produce a report. Model results
would appear to be logical and consistent, concealing the poverty of
information on which you may be basing critical conclusions.
The accuracy and validity of the data must be verified, as well as its
utility for the proposed analysis purpose, before you should have confidence
in any reports.
Also, there is a large probability that relevant history will be unavailable to
your analyses. Because of storage expense, which increases with the scope and
volume (size) of the data base, there is always the problem of "selective
deposit and retention" of data. For instance, descriptions of those
alternatives not selected during previous decisions, along with
assessments of their probability of success or failure, usually are not in the
corporate data base or even archived in long-term storage.
In brief, the reality which you think you are calculating can be quite distant
from the reality your models should encompass, so accuracy and completeness of
the results always should be suspect. For example, the short term factors used
as input may be inappropriate for predicting long term results, but the output
numbers appear without warning of probable inaccuracy. The input data may be
inconsistent for various reasons related to their earlier processing. This
reality is popularly known as GIGO (garbage in, garbage out). At best, the
typical data base is likely to support only the more obvious solutions.
Archival data consists of records produced by and for others, without
forethought of your current analysis needs, so such records are of uncertain
validity. Data that could help you evaluate past decisions often will have been
misplaced, overlooked, or discarded. This is the omnipresent problem for all
forms of decision support, better known as NINO (nothing in, nothing out).
The traditional hierarchical management organizations will try to retain the
first (and sometimes only) look at critical information and to restrict its
distribution. Opportunities to design and build inadequate analyses always lurk.
Return to Table of Contents
Expert Systems often are considered the most successful applications of
Artificial Intelligence concepts to computer programs, as a fairly standard
current feature is ability to explain their internal logic to users. (This
feature is used more during their development to arrange knowledge and rules,
as most people just want to know what to do next, but novices who wish to
become expert will find it informative. Advances in this explanation facility
will provide alternate explanations and analogous related examples upon
expression of user dissatisfaction with the first explanation.)
Expert Systems apply knowledge to transform input data into more immediately
useful knowledge, which is a limited form of intelligence. They implement
either rule-based or knowledge-based approaches to "reasoning"
optimum solutions to difficult problems within their narrowly defined domains.
In that sense, the developer or knowledge engineer learns what the expert(s)
know and how they reason, then, by installing a set of rules, teaches
the Expert System what it must "know" to later guide the user. The
knowledge base will then contain both factual and heuristic
knowledge. The latter experiential knowledge is that which separates the
experts from the novices. The system rules will have weights or confidence
factors for dealing with uncertain information. For such cases, you want the
Expert System to explain its reasoning.
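One common convention for such weights, borrowed here from classic systems such as MYCIN, combines two positive confidence factors so that corroborating rules raise belief without exceeding 1. The rules named in the comments are invented examples.

```python
def combine(cf1, cf2):
    """MYCIN-style combination of two positive confidence factors in [0, 1]."""
    return cf1 + cf2 * (1 - cf1)

cf_rule1 = 0.6  # invented rule: "shipment not scanned" -> supplier is late
cf_rule2 = 0.5  # invented rule: "no carrier update in 48h" -> supplier is late
belief = combine(cf_rule1, cf_rule2)  # corroboration raises belief to 0.8
```

Because each rule's contribution is visible in the arithmetic, the system can walk the user back through exactly which rules, at which confidences, produced the final belief.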
Professional knowledge work can be accelerated by a factor of ten or more. This
is white collar work, folks, where productivity gains are difficult and future
company competitive edges are constructed! The improvement is accomplished by
providing the information you need for the situation you are in immediately,
without your involvement in the search. Planning and other complex jobs that
formerly took an hour or more of active time (plus interperson transfers
across divided areas of knowledge that can add days as the item awaits attention)
become a few minutes work. The improvement in quality and consistency alone
provides substantial customer satisfaction. The cost savings, the return on
investment, can be terrific.
The value of Expert Systems is greatest in preserving transient human knowledge
beyond the lifetime or availability of the source, for offering information and
suggestions in a consistent manner. Retired persons who have been called back
to consult like the idea of "immortality" as their usefulness is
captured for ongoing use in an apprentice for the people still working.
Knowledge from several to many experts can be combined into a "community
intelligence", making the Expert System better than any one individual
could be. This does not bode well for consultants who wish to compete with,
rather than assist the development of, Expert Systems. The information and the
suggestions are carefully installed as relevant data and decision rules into
the application-specific knowledge base. Pieces of the "company culture"
or how it really works sometimes show up in the expert's rules, which can lead
to beneficial (or otherwise) organizational changes. Decision rules are
processed during an Expert System's operation by the internal generic
"inference engine" software program. Unlike the way standard computer
programs work, Expert Systems select and adjust formulas to fit the situation.
Expert Systems are particularly successful where the knowledge base changes
frequently, as in diagnosis, forecasting, and monitored process control.
Internally, the programs use an inference engine paradigm, a search and
connection approach to machine reasoning. Metarules control internal
priority of processing rules for resolving conflicts and advancing toward the
solution. The most well-known methods are forward and backward chaining.
Forward chaining, or data-driven reasoning, works from the outside in, trying
every potential solution path against the facts and results of selected rules
until some combination leads to the goal.
Backward chaining, or goal-driven reasoning, begins at a defined probable final
objective as the conclusion and determines which of the facts and rules will
support that end by searching through the premises and applying validity tests.
The knowledge states and operators for transforming those states make up the
problem space. Knowledge guided discovery of a connected path, for the sequence
of operators which transforms the beginning state into the goal state, solves the problem.
Expert Systems commonly integrate forward and backward chaining, with the
selection of when to use each made by the internal inference engine. An
approach is to switch between forward chaining to each hypothesis and backward
chaining to determine its validity. The search stops at a final conclusion.
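The two strategies can be sketched over a tiny invented rule base (the automotive rules below are illustrative; real inference engines add conflict resolution, metarules, and uncertainty handling):

```python
# Invented rule base: (set of premises, conclusion).
rules = [
    ({"engine_cranks", "no_spark"}, "ignition_fault"),
    ({"ignition_fault"}, "replace_coil"),
]

def forward_chain(facts):
    """Data-driven: fire any rule whose premises hold, until stable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: prove the goal from facts via rule premises."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

facts = {"engine_cranks", "no_spark"}
derived = forward_chain(facts)                  # adds both conclusions
proved = backward_chain("replace_coil", facts)  # proves the goal from facts
```

Forward chaining grows the fact set outward from the data; backward chaining recurses inward from the goal, and a hybrid engine alternates between them.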
Some packages in this category use examples from a set of successful approaches
to given problems and develop their own rules. (They provide a translation from
human to Expert System using their inner program rules.) Such domain-independent
programs are called Expert System Generators or Shells.
Imagine including constructors for all of the DSIDE™ process decision
bases in an Expert System! (Some Expert System Shells interface with popular
spreadsheets, so the Weighting Template and a customized Decision Template, as
described in the Decision Process tutorial, would integrate well.) Of course,
that only applies when the expert being emulated uses those tools more
skillfully than others to whom they are available. Installing it into a hybrid
system with conventional programming capability to provide a seamless interface
also is possible.
Translation from a human expression of a rule to the form usable by a software
package can be difficult at best, virtually impossible at worst. As a knowledge
engineer, you must understand both the language of a specific field and any
peculiarities of the individual expert's verbal expressions, which may include
a distinct aversion to elaboration. (Don't try this cold. Study the field
enough to become humble from appreciation of your ignorance and earn a minimal
respect for your novice knowledge with correct use of the jargon.) The
field is much like System Engineering, as discussed in another tutorial in this series.
Asking the same questions in multiple ways often produces inconsistent
answers that lead to new questions which improve the result. The expert may
invent a better way to do the task because of the frequent need to explain
processes. You could ask if the expert is willing to wager on the accuracy of
the answers as a level of probability. Remember, too, that experts are subject
to the same vagaries of judgment as anyone else.
Human evaluation of actual feasibility is required when the input statements
are known to be flawed, because provision of the best solution never is
guaranteed. Perhaps this is why Expert Systems perform best when the user is
expert in that field.
Most Expert Systems probably should be restricted to financial forecasting and
other types of planning and analyses where the rules are fairly well-defined
and effectiveness gains should be highly visible. Other fruitful areas are
diagnostic systems where the problem cannot be well specified, where most input
is qualitative natural language expressions, and where quality of categorical
associations in raw data need to be discovered with application of rough set
theory during data "mining." Don't start with a large, difficult
domain, no matter how critical, such as a major process control system.
Development of Expert Systems for complex problems requires thorough
comprehension of the computer as well as knowledge of the problem area.
Walk before you run, using the rapid development or incremental delivery
approach to provide increasingly added value while managing expectations. Your
first efforts should be with company internal processes, particularly those
which involve "bureaucratic rules" but are infrequently performed and
require "relearning" every time, to take advantage of greater initial
understanding. The customer service "help desk" system allows less
expert people to provide the initial telephone contact. (Governments should
provide Expert Systems for rental by anyone needing to fill out the forms
applying for certain permits, etc. Research involved in preparing an
international technical patent application comes to mind.)
The main technical difficulty with development of useful Expert Systems is
extraction of expertise from the experts. Unfortunate, but nevertheless true,
the expert may not actually understand and thus will be unable to coherently
explain the strategy for how a particular problem is solved. What they do
just "seems" right to them.
Further, the extracted knowledge is likely to contain assumptions and biases of
the Expert System programmer (knowledge engineer) as well as those of the
expert, further contributing to possible errors of implementation.
Some Expert Systems are based more on Simulation Modeling (described later in
this tutorial) than upon a rule-based paradigm. This allows detection of
problems not considered in the original design which a rule-based system would
miss and overcomes deficiencies in reasoning by experts. Simulation Models of
complex systems sometimes are very difficult to construct, however.
Nevertheless, the strictly logical, computational models mostly used for AI and
Expert Systems have been tried and found wanting in attempts to mimic several
critical areas of human intelligence. The paradigm has begun to shift
toward connectionist modeling concepts.
A big part of the new concepts is the admission that people are not rational,
no matter how nice and pure that would be. The rational models could not handle
the common sense aspects of intelligence, the interwoven experiences
that allow you to adapt to complexities of living as a human being.
Return to Table of Contents
The truly revolutionary departure from traditional computing, neural networks
are a development of attempts to model how the human brain actually works with
software and specialized hardware which forms a hybrid digital/analog computer.
Interest in neurobiology, and in neurocomputing, the invention of electronic
circuitry and/or computer simulation models of interconnected neurons and
synapses, has emerged from a long dormancy and is accelerating. The
greatest strength of neural networks, their application niche so far, is in
pattern recognition problems.
Unlike standard computers, with data bits in specific locations that must be
remembered for recollection and use, the information is spread throughout the
neural network's memory system. As in the brains of creatures, many neurons are
interconnected with many other neurons. Each neuron (processing element)
contains a threshold activated summing input and a defined transfer function
for determining its output.
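A single processing element as just described can be sketched as follows; the weights, inputs, and threshold are arbitrary illustrative values, and a sigmoid stands in for the transfer function.

```python
import math

def neuron(inputs, weights, threshold):
    """Threshold-adjusted weighted sum through a sigmoid transfer function."""
    activation = sum(x * w for x, w in zip(inputs, weights)) - threshold
    return 1 / (1 + math.exp(-activation))  # continuous output in (0, 1)

out = neuron([1.0, 0.0, 1.0], [0.5, -0.4, 0.9], threshold=0.2)
```

Note the output is a continuous value rather than a binary 0 or 1, which is the property the next paragraph contrasts against digital computing.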
While neural networks do not emulate the brain at more than the most
rudimentary level, processing speeds can be several orders of magnitude faster
than traditional computers for the appropriate problems. A digital computer
processes everything in terms of binary states of 0 and 1. The neural network
processing elements have an essentially continuous range (like probability)
from 0 to 1, inclusive. They automatically incorporate a form of fuzzy logic,
which is useful for certain kinds of problems where being human-like is more appropriate than strict precision.
Like the human mind, neural networks are a content-addressable memory
that brings forth an association through pattern matching. Retrieval accuracy
is proportional to the quantity of information contained in the input stimulus.
After thirty years of research, the key to current major successes was an added
layers approach with extra neurons hidden between the input and output layers.
Different overlapping sets of the hidden neurons become actively involved in
decisions about different inputs. In fact, you can't teach a neural network one
thing at a time. The whole set of facts must be used in the training, with
substantive differences between each item for best recall. As a model of the
brain's neural system, the current neuron simulations are a tiny fraction of
the complexity involved in the real thing. Nevertheless, that minute percentage
has accomplished amazing things in application areas involving patterns, such
as shape recognition, classification, and completion.
For instance, neural networks successfully work with fuzzy, inaccurate data and
can find patterns without specific instruction. Where AI requires that the
rules for reasoning a specific problem be programmed ahead of time, neural
networks adjust their interconnection weighting as part of their pattern
recognition training, providing a form of learning law for discovering
or inducing the "rules" from examples. High accuracy of results
requires iterative training with a large set of examples, although prefiltering
the examples can reduce the required number of both training examples and hidden neurons.
The limitations also are substantial. Neural networks may not discover the
mathematically optimal solution to a problem and will instead produce a
"close" answer. They cannot be given a rule outright, but must be
repeatedly taught the rule through examples. The neuron processing elements
"learn" in self-organizing systems by adjusting their input
thresholds and weights in accordance with the goal of increasing their
"success" rate. The substantial time for this training is reduced
with supervised learning techniques such as backpropagation, the adjustment of
previous layer connection weighting as well as that of the current layer by
presentation of the error produced during the earlier training.
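Full backpropagation is too long to show here, but the underlying idea of adjusting weights from example errors appears even in a single layer. This sketch trains an invented AND-gate example set with the classic perceptron learning rule (a simpler relative of backpropagation, not backpropagation itself).

```python
def train(examples, epochs=20, rate=0.1):
    """Adjust weights toward reducing error on each training example."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            output = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
            error = target - output
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Invented training set: the rule (logical AND) is never stated,
# only shown through examples, as the text describes.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)

def predict(inputs):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
```

The network is never told the rule; repeated presentation of the whole example set, with corrections, induces it, which is exactly the training-by-examples behavior described above.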
Meanwhile, selecting the appropriate network type (many exist and the number is
growing) and developing it for a given task takes trial and error, tweaking and
tuning of the number of elements, number of layers, learning rules, and transfer
functions. Network size may be inadequate for solving real problems although
sample problem and training data were handled. Further, as a network becomes
more adaptive, its responses become less predictable. (A response that
certainly is human-like!)
Return to Table of Contents
As the hardware and software simulations of neural networks continue to develop
and improve, the next wave of advances in AI is likely to come from further
expansion of their combination with Expert Systems. When the embedded neural
network performance is satisfactory for the purpose, this results in the best
of both worlds. The neural network determines the likeliest situation and the
Expert System selects the action to take and performs it.
The neural network can discover the rules in example situations.
Therefore, one approach to answer justification is to use a neural network to
generate the knowledge base, then to use an inference engine for interpretation
of the knowledge. Each system seems to have what the other lacks, so a
combination that minimized the problems of each was the next natural goal. A
neural network applied to process performance monitoring can be taught to
defer to an expert system when variances become too large, as a form of
Management by Exception.
Building both types is not double effort because neural networks may be
developed in much less time than separate expert systems and the experts are
used only to revise and tune the rules.
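As a hedged sketch (all names and thresholds below are invented), the hybrid arrangement described above might look like this in Python: a stand-in for the trained network estimates the likeliest situation, a small rule base selects the action, and overly large variances are deferred to a human as Management by Exception.

```python
# Hedged sketch of a neural-network / expert-system hybrid. The classify()
# function is a stand-in for a trained network; the RULES dictionary plays
# the role of the expert-system knowledge base. Thresholds are illustrative.

RULES = {
    "normal":   "continue monitoring",
    "drifting": "adjust setpoint",
    "critical": "shut down line",
}

def classify(variance):
    """Stand-in for a trained network: maps a process variance reading
    to (situation, confidence)."""
    if variance < 1.0:
        return "normal", 0.95
    elif variance < 3.0:
        return "drifting", 0.80
    return "critical", 0.99

def monitor(variance, defer_threshold=5.0):
    # Management by Exception: very large variances bypass the automatic
    # rules and are deferred to a human expert.
    if variance >= defer_threshold:
        return "defer to human expert"
    situation, _confidence = classify(variance)
    return RULES[situation]     # the expert system selects the action
```

For example, `monitor(0.4)` returns "continue monitoring" while `monitor(7.2)` returns "defer to human expert".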
With all the progress, it's only a start. How the brain "knows"
something to be true is not reproducible in an algorithm. Inspiration, the
sudden production of an ingenious or original idea, will continue to elude the
machine approach for the foreseeable future. A standard System Engineering
problem, establishing performance metrics for the people, applies here as well.
How do you measure progress of a knowledge engineer along the way to developing
an acceptable Expert System?
Return to Table of Contents
Hybrids are made from combinations of neural networks and AI/Expert Systems which
apply fuzzy logic as well as logic based on crisp sets. As stated earlier, the
positive strength of neural networks is pattern recognition. Fuzzy logic,
working with the output of neural networks, assists development of structured
rules so previously intractable problems may be solved with incomplete rules
specification. Expert rules set up the neural net to jump-start its learning
process. The requirement for an intermediate mathematical model is skipped and
rules are derived from the behavior exhibited by the neural network. The rules
change as the neural network learns and are immediately accessible, speeding up
research as well as applications of the systems.
Return to Table of Contents
The concept of an executive thought support tool implies either support for
managerial communications or personal contact with other key executives before
a DSS can be more than one of many tools that are available to assist decision-
making. The benefits are most obvious in environments where decision solution
convergence to some group consensus is essential. Experiments in automated
decision conferencing have led to development of the Group Decision Support
Systems installed in special meeting rooms with a physical arrangement that is
heavily dependent on recent developments in information technology for
electronic networking of the computer workstations. The basic idea is that
better decisions may be made if you can remove any supervisor/subordinate
pressure and/or any natural leader influences.
Group member behavior during a negotiation problem can be substantially
different when the face-to-face element of the meeting is not available to
anyone. This is accomplished by placing each group member into a separate plush
workstation booth where input at the keyboard is anonymous. A common large
screen display, whose content is controlled by an assigned group leader, is in
view of all participants. The group leader coordinates all common screen
presentations and places jointly agreed constructions either on that screen or
into a common data base. A public group data base is the source for a
participant's development of individual problem models. The group leader is
sent ideas, results, or actual models used by participants, as appropriate.
A single meeting room is not essential to fulfill the concept, making
long-distance, international GDSS possible. New, high resolution screens could
display the common screen information and results in a "window" while
participant problems are in the remainder of the screen. The GDSS approach
could greatly assist EIS utility, if key supporting executives are participants.
Another desirable feature is automatic generation of Decision Models, based on
historical modeling requirements, with an easy adaptation to the new
requirements. The Fourth-Generation Languages are an attempt to come close to
that capability, through generation of database or modeling programs in
response to either menu selections from a control program or from applying
specific natural language statements. The resulting program will be simplistic
or else the user must possess some programming capability, although this
problem is alleviated a bit with an Expert System Shell front end to lead the
user through the difficulties.
Return to Table of Contents
The difficulty of integrating Operations Research tools into a single DSS
package leaves many useful methods for employment as separate general models.
Still, for those equipped with adequate knowledge to properly use them, the
math-based subsystems have key advantages related to the DSIDE™ process.
Specifically, they possess theoretical justification for their application to
specific portions of the problem and provide Decision Research support for
improvement in your future decision-making processes.
The subsystem category is where you should list most of the Decision Support
related software currently offered for use with personal computers (PCs). After
you peel the marketing hyperbole away and examine skeletons of various Decision
Support packages, you will find only a somewhat reduced subset of the
requirements for an ultimate DSS.
Such offerings are classifiable as either decision aid or decision modeling
packages. Whatever they are called, their objectives are identical. Most
programs in the first category have you assign weights or values to each
decision factor, without much in the way of provision for removing subjectivity
in the weighting process. (Even graphical assistance to a weighting assignment
is insufficient because it still is much too easy to confirm a preestablished
conclusion.)
Also, the typical use of a simplistic "musts" and "wants"
approach without any revelation of the full range of alternatives for an
important factor can seriously oversimplify the problem. Admittedly, that is
much better than modeling without any structure to your decision and
could be sufficient to arrive at a best decision for many non-complex
performance analyses. In business problems, however, complexity is a standard
feature.
Return to Table of Contents
Integration of the mathematical tools within a DSS shell allows the useful
techniques to be made useable. Arrangements of math models, like Expert Systems,
should be kept flexible for application to specific, suitable problems.
Many of the tools offered to you for making decisions would suffer somewhat
under price/performance comparison with the process outlined in this book,
although some clerical work is automated in them. Few commercially available
decision aid systems, if any, reveal to the user the algorithms employed within.
A mystique surrounds the program's capability as simple mathematics become
jealously guarded "trade secrets."
Widespread public release of program source code is not a sought objective, as
an "intellectual property right" in the program should exist. However,
application of requisite numerical methods is not easy, and user knowledge that
proper safeguards are in place when using tricky computer algorithms can
prevent program tools misuse and costly errors resulting from user ignorance.
You may be told that the reason some program's price is so high is that it
includes technical support when you require it. That support may extend only to
your being told how to solve a specific problem, however. Of course, you may
choose to ignore implications of standard disclaimers about a computer
program's possible fitness for any advertised use. (That is only the lawyer's
attempt to forestall a lawsuit which otherwise might arise from a user
inconsiderate enough to argue that the product should work as claimed.)
Nevertheless, when you do not know precisely what program action takes place
under specific circumstances, you cannot be certain it will be appropriate for
your problem. That is true, both with and without program support by the
manufacturer. (Trust a program developer who reveals program processes, for
what is not known can indeed hurt you.)
You must know what factors are not accounted for by the model design, what
conditions can change the model precision or destroy model assumptions. When
you do know how a program works, you can put bounds on the validation problem
and have a chance of success. You won't have immunity to effects of GIGO, of
course, but the decision models you construct could possibly be valid. Not
concerning yourself with this issue can lead you to uncritically accept the
computer aid's assistance and to perform your decision-making less well than
you would have without a computerized decision aid. You could well be misled
into working up short-term results which lead to less than optimum long-term
outcomes.
Return to Table of Contents
The value of computer utilities goes beyond their immediate or apparent purposes,
in that they can lend insight to managers who are struggling to grasp meaning
hidden in assembled facts. Insight is not the same as infallibility, of course,
plus the tools work less well in health, education, welfare, and other
socially-oriented areas where objectives and measures are less quantitative.
Computer-based data processing tools greatly assist decision-making but are not
fully equipped for the necessary work. The following paragraphs describe tools
which are known to be useful in that limited sense.
Return to Table of Contents
Other useful tree forms than the inverted one exist. A Decision Tree works from
a starting node at the left to graphically depict the evaluation and choice
branches of decisions with regard to events and their potential results. Each
node divides into two or more identified results from the choices, leading to
two or more final outcomes. Each branching route shows a separate
alternative or course of action within the multi-stage decision. The convention
is squares for action-fork nodes and circles for event-fork nodes. Results are
assigned a numeric outcome or probability for each of two or more routes to their
next decision nodes. At that point, conversion into a payoff table is possible
and each choice has an earned numeric total for ranking the options. (Does that
seem familiar?) The decision tree expands by treating each decision node as the
starting node for a new expansion. The expansion process continues until you
assign a final set of decision nodes and their expected values or profits. The
best route (or selection of decisions) is that which maximizes total profit.
The Decision Tree chronology is shown left to right. Computation of expected
profit proceeds from right to left by summing the values multiplied by
probabilities into the next leftward result node. The next left decision node
selects the highest profit just computed for the result nodes. Computation
using the resulting payoff or profit proceeds leftward in that fashion until
you assign the maximum profit to the final, leftmost decision node. The
Decision Tree theoretically shows the optimal action to take upon arrival at
the future decision points. It uses a fairly straightforward process, combining
action choices with the results of those actions or events (and their
probabilities of occurrence), but has two distinct, important disadvantages.
For complex problems, particularly those involving information valuation, the
decision branches proliferate at a terrific rate and make manual methods
virtually impossible. Removal of dominated action-choice branches simplifies,
or prunes, the tree. The decision trees for such complex
problems are more likely to confuse the decision maker than contribute to
greater understanding of the problem until extensive pruning is accomplished.
It is easy to descend to levels so deep that the requirement for detail
obscures critical decision issues.
Worse, however, is that decision tree modeling requires adequate foreknowledge
of the profit and probability to assign to each route and node for obtaining
the outcomes. The insurance industry has developed actuarial tables, but such
knowledge usually is very difficult to obtain. Probability knowledge usually is
unobtainable if you are an independent consultant whose expertise is in other
fields or if you work for a smaller company that could not afford to develop
the necessary data. Most people have problems treating new or unfamiliar data
properly for making changes to their Decision Model.
The principal advantage of the tree diagrams is depiction of sequences for
multi-stage decisions over an extended period, because few decisions can be
made in isolation from other decisions. Knowledge of the decision structure can
help even when detail information is uncertain. You then at least understand
all the options and various consequences of their selection. Options you might
previously have rejected out of hand become possibilities.
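The right-to-left rollback just described can be sketched in a few lines of Python; the tree and its payoffs below are invented for illustration. Event (circle) nodes take probability-weighted sums, and action (square) nodes take the maximum.

```python
# Rollback (right-to-left) evaluation of a small decision tree.
# Node forms: ("action", [branches])            -- square node: pick the max
#             ("event", [(prob, branch), ...])  -- circle node: expected value
#             a number                          -- terminal payoff
def rollback(node):
    if isinstance(node, (int, float)):
        return node                                # leaf: final outcome
    kind, branches = node
    if kind == "action":
        return max(rollback(b) for b in branches)  # choose the best action
    # event node: sum of outcome values weighted by their probabilities
    return sum(p * rollback(b) for p, b in branches)

# Invented example: launch a product under uncertain demand, or stay put.
tree = ("action", [
    ("event", [(0.6, 100), (0.4, -30)]),  # launch: 60% win, 40% loss
    20,                                   # do nothing: safe payoff
])
```

For this toy tree, `rollback(tree)` returns 48.0, since 0.6*100 + 0.4*(-30) = 48 exceeds the safe payoff of 20, so the launch branch is the indicated choice.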
Return to Table of Contents
Simulation Modeling often has a close but indirect relationship to Technical
Performance Measurement (TPM). Computer programs historically have been a
powerful means to construct mathematical models. Simulation capability has
grown to include complex digital and analog electronic systems, such that
proposed designs may be evaluated well before they are built for trial.
Increasing complexity will make prototypes too expensive, adding impetus to
simulations. Increased computing power has brought simulation models more into
employment as valid representations of actual systems operating in their
predicted environment. An extension is their manipulation in ways that would be
expensive, impractical, or even dangerously impossible with the real thing.
Performance analysis with simulation models can provide predictions of
throughput, response times, and utilization of resources.
For business, that makes Simulation Modeling effective for scheduling
processes, estimating demand and capacity needs, some forecasting, and
discovering and addressing potential problems for Project Risk Assessment (see
the tutorial on that subject).
Expected behavior of actual systems under virtually any scenario can be
inferred, providing good parametric estimates for input to your Decision Models.
It is an engineering rather than a Management Science or Operations Research
approach, but is just as usable by businesspersons. The results usually are
numeric and consistent with decision criteria.
A theoretical system whose critical functional parameters and attributes are
virtually impossible to measure should still have accurate estimations. An
example is a queuing or waiting line model for validating required capacity of
a proposed service facility. Where attributes are best characterized by a high
dependency on other system attributes, on probability distributions, or are
difficult to analyze mathematically, discrete-event simulations can provide
usable predictions of complex candidate system performance under worst-case
conditions.
In discrete-event simulations, event arrivals are obtained in two ways. First,
they can be predetermined and provided as starting input from a list. Second,
each arrival of a specific type computes the arrival time of its successor with
a function. The function can range from a continuous probability distribution
to an arbitrarily fixed step series. The choice may be determined by a pseudo-
random number generating function, so repeated runs can evaluate single changes
with the same input, or randomly selected starting values can evaluate
multiple runs of unchanged models.
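The second, successor-computing scheme with a seeded pseudo-random generator can be sketched as follows. This is a minimal Python illustration using the standard library's exponential distribution; the parameter values are assumptions.

```python
import random

# Minimal sketch of successor-arrival generation in a discrete-event
# simulation: each arrival computes the time of the next one from a
# function (here an exponential interarrival distribution). Seeding the
# pseudo-random generator lets repeated runs replay identical input, so
# single model changes can be evaluated against the same arrival stream.
def generate_arrivals(n, mean_interarrival=2.0, seed=42):
    rng = random.Random(seed)        # fixed seed -> repeatable stream
    times, clock = [], 0.0
    for _ in range(n):
        clock += rng.expovariate(1.0 / mean_interarrival)
        times.append(clock)
    return times

run1 = generate_arrivals(5)
run2 = generate_arrivals(5)          # same seed: identical arrival times
```

Supplying a different seed on each call instead gives the opposite evaluation style mentioned above: multiple runs of an unchanged model against varied input.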
Continuous simulations depend on sets of numerical equations as the principal
system model, for simulating non-stop processes that change continuously over
time such as petroleum refining or steel-making. Instead of updating the
simulation model's world time on occurrence of the next discrete event, time is
changed in small increments and all time-based formulas are recomputed. This
approach costs more computer time than event-driven simulation.
Some complex models will require a combined approach, where continuous time
updating occurs until a specific condition defines an event that shifts the
model into discrete-event updating until the continuous updates are required.
All simulations share three properties:
Simulations are much easier to construct than formal math models duplicating
complex systems, particularly with use of new object-oriented programming
system (OOPS) constructs. When expert opinion provides input hypotheses, the
derived predictions are useful for Decision Model attribute performance
measures. One analyst can explore a multitude of concepts in a brief period,
when compared with the time required for building even a simple physical
prototype. You can learn from mistakes without harm to anything, gain insight
and really understand the problem.
It is an accelerated version of the process employed by scientists conducting
basic research. A tentative formulation is explored, which gives rise to new
questions, which lead to theoretical insights and thence to an improved
formulation, which ...
Clearly, a valid, verifiable simulation of selected or all portions of the
candidate system, under proposed operational use, may be the only possible
means for accurately establishing what capability it should provide after
incarnation as a physical realization. The model responds to external stimuli,
internal conditions, and performs operational functions in simulated real time.
Simulation Modeling then is the best approach to estimating performance values
for input to a decision model, to providing a check on the anticipated results
and to conducting a Sensitivity Analysis for determining which data are
required at what accuracy. The object is to gain confidence in model validity
before making decisions with the results, providing an "insurance"
when working with paper concepts. Unanticipated results can arise from
unsuspected interactions of system parameters, which gives more insight and
knowledge to the users. The final result should be a sharable and fully
validated model.
General models are easy to build but of limited value, however, so you must
anticipate the uses and expect to iteratively refine the model. Its structure
should be modular, to permit timely modifications for support of decisions.
Simplification then means that only those system features important to the
project are modeled because you need not duplicate reality. For decision making,
you only need enough realism to evaluate the proposed changes and simulate
potential risks. Add detail only if absolutely necessary.
Surprisingly, simplistic-seeming structures deliver complex, lifelike behavior
and can relieve more ignorance than detailed special case models. It is akin to
discovering a general behavioral theory for formulating the underlying
assumptions in a class of system problems as you find that many of the
simulation constructs are appropriate to problems in apparently unrelated
disciplines. Thus, as in many things, clearly defining the problem with a
System Engineering approach gets you well along toward the solution.
Nevertheless, the simulation model resembles other decision aids with ability
to assist analyst communication with the decision maker. Model output must be
understandable and appear valid to the simulation customer, whether presented
interactively with graphics while the simulation is running or as results
following each run. Interactive output allows observation as waiting lines
change length or as the system undergoes other such dynamic system behavior
during the simulated time.
Generic models may be tailored by differing users to solve similar problems.
Models may be interconnected with other models for simulation of subsystems
within a system, helping to prevent any single model from becoming overdetailed
and cumbersome. Hierarchical, structured models may have selectively enabled
modules to reduce execution times or to focus analyses when the area of
interest is contained.
Another important point is that simulations can use "raw" or actual
historical input data, statistical distributions, or some combination of the
two. Therefore, statistical analysis frequently is incorporated in simulation
modeling languages, providing illusion of inferential validity. Typically,
however, available data are unlikely to be all useful and appropriate to your
purpose.
Simulation models generally do not optimize, so they will not automatically
select the best choice from alternatives, but a model of each alternative may
be applied to the problem and the best result indicates the best system. As in
the DSIDE™ process, the various modeled figures of merit may be combined
to provide a single output value.
An extremely valuable aspect of computer program simulation modeling is the
forced understanding of a problem required for development of even a semi-
realistic model. Your expertise rises as you build the model. Confirming
infeasibility of alternatives may lead to a new, feasible approach. Making a
large series of runs can build the sample required to confirm probability
distributions and associated parameters, such as arrival times tied to the
distance travelled and other control variables for Just-In-Time service of a
manufacturing facility by a supplier.
Simulators train people to do many things, so training someone to build
simulation models is an appropriate task for a simulator.
Return to Table of Contents
Optimization modeling is the knowledgeable application of selected mathematical
programming techniques from Management Science or Operations Research to
determine best courses of action to take in highly structured, well-defined and
well-understood areas, such as engineering design or resource allocations to
maximize benefits or minimize cost.
Linear, mixed integer, non-linear mixed integer, dynamic programming models,
stochastic process and network optimization models represent this class of
tools. Linear programming and other optimization models successfully solve
scheduling, transportation route, investment mix, and other similar resources
distribution problems where options are many.
Linear programming is computerized solution of multiple linear algebraic
formulas with multiple variables. (Linear means that no variable is raised to
the second or a higher power.) Some of the formulas are solution constraint
expressions such as quantity limits. An objective function is either minimized
or maximized during simultaneous solution of the equations. Goal programming is
a form of linear programming that seeks compromise solutions.
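A toy two-variable illustration (all numbers invented) shows the idea. Because a linear program's optimum lies at a corner of the feasible region, a problem this small can be solved by enumerating the candidate corner points; real solvers use the simplex or interior-point methods rather than enumeration.

```python
# Toy linear program (invented numbers): maximize 3x + 2y
# subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# The objective and constraints are all linear (no squared variables),
# and the optimum falls on a corner of the feasible region.
def feasible(x, y):
    return x >= 0 and y >= 0 and x + y <= 4 and x <= 2

corners = [(0, 0), (2, 0), (0, 4), (2, 2)]   # constraint intersections
best = max((p for p in corners if feasible(*p)),
           key=lambda p: 3 * p[0] + 2 * p[1])
```

Here the best corner is (2, 2) with objective value 10; tightening or relaxing a constraint and re-solving shows how the optimal resource distribution shifts.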
Return to Table of Contents
Evaluating results of various alternative selections often will require
projection of each environment into the future. For that, as for other forms of
modeling, you want reality represented simply enough to understand and use for
corporate strategic or shorter term planning.
Examples of prediction modeling are trend line forecasting models which use
various smoothing techniques, budget models, various extrapolation formulas,
and curve-fitting models. People perform poorly at consistently integrating
data from a wide variety of sources, so statistical models help. Such models
often are useful for determining that opting for doing nothing (which projects
the current situation) will lead to a future problem, as well.
Extrapolation from the present is always uncertain and it gets worse the
farther in the future you seek answers. When you have lots of historical data
you can check the predictive value of a model by checking how well it projects
actual trends. Even a model having high correlation of predictions with
historical data values, which is all one could ask for, can be wrong about the
future when some new external occurrence changes the rules. Therefore, never
extrapolate from past data without massive disclaimers. The longer term you
propose to predict, the larger the disclaimers should be.
No matter how well founded, if the prediction involves potential failure or
extreme pessimism, the attentive audience will be small. Developments in other
industries may have overwhelming effect on the acceptance of your product.
Competitive technologies can render even new products obsolete at a stroke, as
when electronic calculators supplanted slide rules and electromechanical adding
machines. Failures outnumber successes by such a wide margin that you can learn
little from studying only the successes. The lesson for you is to temper
enthusiasm with pessimistic reality while others are caught up in the fad. Use
multiple, simple methods or combinations of forecasts to see if all approaches
point to the same result and challenge the underlying assumptions. Faulty
assumptions automatically provide flawed forecasts. In other words, sometimes
you must use some subjective judgment to see if all potential impacts on the
forecast were sufficiently considered.
Regression analysis is a mathematical process that measures the apparent
relationship of an item of interest with items thought to influence it. The
degree of "fit" for predictors helps you decide on their usability
in forecasting results. Avoid fancier mathematical methods to concentrate on
finding out what makes better forecasts. One such item is plotting historical
data on the same graph as the forecast. Another is forecasting under several
of the most likely scenarios.
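As a minimal sketch of the regression idea (the history is invented, and any extrapolation from it is subject to all the disclaimers above), a least-squares trend line can be fitted and extended one period ahead:

```python
# Least-squares trend line fitted to invented historical data, then
# used for a one-period extrapolation.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [1, 2, 3, 4, 5]
sales = [10.0, 12.0, 14.0, 16.0, 18.0]   # perfectly linear toy history
slope, intercept = fit_line(years, sales)
forecast_year6 = slope * 6 + intercept
```

Plotting the fitted line over the same graph as the historical points, as suggested above, is the quickest check on whether the "fit" justifies using the predictor at all.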
Return to Table of Contents
Related to Prediction Modeling, but indirectly, scenario analysis is assembling
a set of plausible alternative futures. Scenarios may be presented as flowing
narratives or prose descriptions of what the future will be like with a
particular adoption of the new technology. The trail from the present to the
future may be explained. Multiple scenarios explore each plausible branch in
the paths leading to each alternative future. The business plans that are
adaptable to as many alternatives as possible reduce the risk that ensues when
"all eggs are in one basket."
As with regular forecasting, attempting to assign probabilities to multiple
scenarios leads to assuming that you should plan for the one considered most
likely. That removes the benefit of developing a robust strategy to accommodate
several possible futures.
Instead, work at making the scenarios equally probable and assign thematic
labels indicating different paths of arrival, such as slow acceptance, heavy
competition with similar technology, heavy competition with competing
technology, rapid price decreases, and so forth. This forces consideration of
the unpleasant, providing a reality check.
Return to Table of Contents
An area of concern is the provision of statistical tools. Statistical
procedures help you to separate distinct events with assignable causes from
random occurrences. Except for some quite complex Simulation Models,
forecasting from trends is difficult without employing statistical techniques.
Models that will produce invalid output at terrific speed are easy to
construct. To avoid errors, you must know what the result really means and what
logical processes to employ when applying a particular statistical measurement.
The purpose of performing statistical tests on a population is to allow you to
infer one or more valid conclusions regarding a group of observations or
measurements and allow discarding irrelevant information. The basic concept of
measurement assumes repeatable trials, but many events are unique. The sample
must be random, of sufficient size and assume a proper underlying distribution.
Standard deviation is a measure of data tendency to gather about the average or
arithmetic mean. Large standard deviation indicates a large dispersion and
greater risk in assuming that the computed mean is reliable for making
predictions. Dividing the standard deviation by the expected value gives the
coefficient of variation as a measure of the data dispersion or relative risk.
Many books reveal how to compute those values and perform statistical tests.
Detailed discussion would be straying from the subject of this book, so the
mathematics will not be shown here.
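Still, the two dispersion measures just described take only a few lines to compute with Python's standard library (the data set is invented):

```python
import statistics

# Standard deviation and coefficient of variation for a small,
# invented data set, as described above.
data = [8.0, 10.0, 12.0, 10.0, 10.0]
mean = statistics.mean(data)          # arithmetic mean of the sample
stdev = statistics.pstdev(data)       # population standard deviation
coeff_of_variation = stdev / mean     # relative dispersion, i.e. risk
```

A larger coefficient of variation signals greater risk in treating the computed mean as reliable for prediction, which is exactly the comparison the measure exists to support.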
A confusion between statistical significance and a difference important to your
overall system can distract you from really critical issues. When you have
options in your program to pick an improper analysis for the data required by
the conceptual design of your model, you had better know what you are doing or
work within a plan developed by someone with a strong background in statistical
analysis.
For typical DSIDE™ process problems, training and experience in the
design of engineering experiments would be preferable to experience with
sociology/psychology forms of statistical inference.
Measurement errors for tangible systems aspects (equipment performance) often
have the laws of physics as validity bounds.
Measures of indirectly observed human behavior, however, easily can be based
on what may be politely called "that which is not so." Beyond
outright lies and other distortions of fact, respondents to surveys tend to
offer idealized or popular answers rather than relevant information. It is
extremely difficult to construct survey material that avoids biasing the
results. Of course, if your wish is to support a particular conclusion,
suitably emotion-laden or "politically correct" questions are quite
easy to frame.
Finally, because no universally useful statistical test exists, it frequently
is best to avoid using statistical tests to develop either Simulation Model or
Decision Model parameters. Resist the strong temptation to summon those
routine(s). Rely, instead, on as many directly measurable attributes as
possible or you may find yourself claiming an unsupportable statistical
conclusion.
Return to Table of Contents
You may be aware that virtually all popular personal computer systems offer
nearly as many competing spreadsheet programs as word processors. Primarily,
applications of spreadsheet programs involve financial modeling and investment
planning, although an experienced, careful spreadsheet analyst could perform a
decision sequence similar to the DSIDE™ process with one. Spreadsheet
programs have become popular for modeling systems where the decision parameters
are comparable as cost elements. (Deficiencies in that approach are some of the
problems that led to development of the decision process described in another
tutorial.)
A spreadsheet analysis is difficult to independently audit for propriety. That's
because it can't assist construction of clearly understandable presentations of
analysis conclusions without some tailored report generation. The
"integrated" programs can help a lot with that, of course, but this
tutorial assumes using a stand-alone program for greatest coverage.
Advanced spreadsheet users can and will do whatever they deem appropriate, of
course, but going to that depth about peripheral aspects of a spreadsheet
program's operation here would again overcomplicate a moderately complex
subject. Spreadsheets do allow you to construct models where you can see the
automatically calculated result of a single input, thus their power for
economic or financial analysis. Certain forms of analysis are much more
difficult to fit into a spreadsheet approach, however. An example is linear
programming, which heavily uses matrix algebra. It has been accomplished,
with add-in programs, but suffers from similar shortcomings to the use of
statistics—the need to know when a given approach is suitable, as well
as how to use it, for best results. The techniques say yes or no
while the best answer often is maybe.
If you have any questions on this tutorial subject, please contact
the author as listed below. A response will occur and a FAQs section
may be added as questions accumulate.
Return to Table of Contents
This tutorial is presented by:
Tutorial Author: email@example.com
Other subjects: firstname.lastname@example.org