Robert F. Bordley

GM University

Journal of Statistics Education Volume 9, Number 2 (2001)

Copyright © 2001 by Robert F. Bordley, all rights reserved.

This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.

**Key Words:**
Decision theory; Goal; Introductory statistics; Reliability theory; Utility function.

There has been much concern about making the curriculum for engineering statistics more relevant to the needs of industry. One proposed solution is to include decision risk analysis in the curriculum. However, the current coverage of decision risk analysis in statistics textbooks is either nonexistent or very introductory. In part, this reflects the fact that decision risk analysis, as currently taught, relies on the complex notion of a utility function.

Recent research in decision theory suggests a way of comprehensively and rigorously discussing decision theory without using utility functions. In this new approach, the decision risk analysis course focuses on making decisions so as to maximize the probability of meeting a target. This allows decision theory to be integrated with reliability theory. This course would be more comprehensive than the conventional introductory treatment of decision theory and no more difficult to teach. In addition, integrating decision theory with reliability theory facilitates its incorporation in curricula that currently exclude decision theory.

Many authors have emphasized the need for improvements in statistical education. As Bisgaard (1991, p. 274) writes:

"There is an implied criticism of the way the statistics profession has approached what has become known as engineering statistics. And that criticism not only concerns teaching style, but also content and organization. I dare say that in the past, we have failed miserably."

Smith (1998, paragraph 1) noted that this concern also applies to other courses in the statistics curriculum:

"A radical reform of introductory statistics classes has been advocated by many, often motivated by observations similar to Hogg (1991, p. 342): 'students frequently view statistics as the worst course taken in college.'"

One of the most basic concerns focuses on relevance. As Romero, Ferrer, Capilla, Zunica, Balasch, Serra and Alcover (1995, paragraph 6) note:

"The most common criticism of the teaching of statistics in the United States is that it is too academic in focus, excessively theoretical and divorced from the real problems that can appear in the industry and business world."

Barabba (1991, p. 1) emphasized the importance of addressing these concerns in his presidential address to the American Statistical Association:

"If we are to ensure continued societal support, we as a profession of statisticians must make ourselves more clearly understood and valued by those who use our work. Without this understanding and appreciation, statistics will not be fully used. If we continue on our current course, statistically based information will, at best, be used only to confirm earlier predispositions, and at worst, be done away with in times of scarce resources."

Given such concerns, Snee (1993, p. 150) concludes:

"Articles in recent issues of *The American Statistician* highlight the growing feeling that statistical education is in serious trouble and that changes must be made ... changing the content of statistical education is needed to help students create value for statistical thinking."

How should the content of statistical education be changed? Hoerl, Hooper, Jacobs, and Lucas (1993, p. 280) advocate greater focus on managerial and strategic decision-making:

"Numerous authors have discussed the lack of recognition, respect and effective use of statisticians, particularly in industry. Many have focused improvement efforts at engineering schools, business schools, better selling of statistics or better public relations by the ASA. There appears to be a general consensus that statisticians have typically accepted too narrow a role, being focused more on the statistical methods being employed than the problem addressed. For example, Snee noted the three distinct levels at which a total quality philosophy applies: strategic, managerial and operational. Statisticians, he claimed, tend to ignore the strategic and managerial levels, and concentrate only on the operational, that is, on statistical tools. Very few statistics courses or textbooks teach synthesis of statistical methods into a strategy to solve a class of problems."

In addition, as Hahn and Hoerl (1998, p. 197) emphasize,

"Today's statistician must strive to get involved in the strategically most significant projects."

What specifically needs to be done to focus the curriculum on a "synthesis of statistical methods into a strategy to solve a class of problems"? Barabba (1991, p. 1), based on decades of practical experience at Kodak, the Census Bureau, General Motors and private consulting, suggests a possible answer:

"Now let me offer our profession an opportunity with potentially greater rewards. The opportunity is to help decision-makers use information to move beyond the realm of 'just seeing the facts that fit' to a higher standard of improved decision quality that leads to wiser decisions. Let me offer a protocol which allows providers of information to meaningfully contribute to quality decisions: a framework that we use at General Motors to enhance the decision process, decision and risk analysis. Through the process, the different functions, perceptions and skills of information users and providers are acknowledged and structured to improve the decision process. The problem is identified, alternatives are analyzed and the best alternative is selected, all with concurrence of the decision-maker and the information providers."

Work by Sharpe and Keelin (1998), Kusnic and Owen (1992), Howard (1988), and Bordley (1998) provides examples of successful applications of decision and risk analysis to high-level corporate problems. As Howard (1988, p. 695) noted:

"Today it is difficult to find a major corporation that has not employed decision analysis in some form."

Hence incorporating decision and risk analysis into the curriculum might address some of the problems noted in statistics education.

Barabba recommended decision risk analysis to the statistical community ten years ago. Despite this recommendation and the formation of the American Statistical Association's Section on Risk Analysis, an extensive review of introductory statistics textbooks shows that decision risk analysis is still not widely taught in statistics. One exception is the textbook by Levine, Berenson and Stephan (1998). Decision risk analysis is also much more commonly taught in operations research textbooks (Chelst 1998; Anderson, Sweeney, and Williams 1994; Gould, Eppen, and Schmidt 1991; Hillier and Lieberman 1990; Ragsdale 1995; Winston 1994). As the next section will show, even these textbooks provide fairly limited coverage of decision theory.

The purpose of this paper is to help expand the teaching of decision theory in statistics. The next section will examine why decision theory is not being more widely taught. The third section will propose a solution, based on some recent technical breakthroughs, which integrates decision theory with reliability theory, and will illustrate how this new approach might be taught. The fourth section shows how this new approach might be integrated into a statistics course for engineers. The last section discusses the author's experience in applying this approach.

Many textbooks specializing in decision theory contain four general modules:

- Problem-structuring: lists the relevant decision alternatives, the uncertain "states of nature" relevant to the decision problem, and the payoff for the different outcomes associated with each decision alternative.
- Probabilistic analysis: introduces the criterion of maximizing expected monetary value. After illustrating its use (possibly with a decision tree), calculations of the value of perfect and imperfect information are discussed using Bayes' Rule.
- Utility analysis: shows that expected monetary value ignores individual attitudes toward risk and hence is an unreasonable rule for high-stakes decisions. Introduces the notion of utility as that value *u* making an individual indifferent between receiving a consequence for certain and receiving some best possible consequence with probability *u* and some worst possible consequence with probability 1 - *u*. The utility function is used to convert the payoff matrix into a matrix of utilities, and the criterion of maximizing expected monetary value is modified by replacing monetary values with utilities.
- Multiattribute utility analysis: discusses multiattribute utility theory as an extension of utility theory to handle multi-criteria decision-making.

Many introductory textbooks in operations research provide only a very high-level discussion of multiattribute utility theory. Of those statistics textbooks discussing decision theory, many focus on only the first two modules and generally ignore serious treatments of utility.

As Chelst (1998) emphasized, ignoring utility theory (which allows the student to handle risk-preferences) and multiattribute utility theory deprives the student of some of the techniques needed for many successful applications of decision analysis.

There are, of course, good reasons why decision theory is not being taught more extensively in statistics courses. The teacher's time is limited and there are many other statistics topics that also need to be taught. Thus Hogg's (1985) description of an idealized and ambitious statistics course for engineers includes:

- Data collection
- Basic probability
- Basic statistical inference
- Regression modeling
- Design of experiments
- Reliability

All of these topics are clearly essential, but Hogg's list never even mentions decision theory. There simply is not room for another topic in his curriculum.

How can decision theory be incorporated into an already packed curriculum?

First note that the conventional treatment of decision theory begins with a discussion of expected value maximization and then introduces utility as an enhancement of expected value. Note also that the concept of utility - as defined by decision theory - is unique to decision theory and does not appear in any other part of Hogg's curriculum. This suggests that the notion of a utility function is esoteric and hence is harder to teach than other aspects of decision theory.

Recent technical results (Castagnoli and LiCalzi 1996; Bordley and LiCalzi 2000) establish that the traditional decision theory criterion of maximizing expected utility is equivalent to maximizing the probability of exceeding a goal in the presence of background uncertainty. These results indicate that decision theory can be taught, in all of its generality, without using the concept of utility. As the next section will show, this provides a way of teaching decision theory as an extension of reliability theory. Since reliability theory is already part of Hogg's curriculum, teaching decision theory as an extension of reliability theory provides a very simple way of introducing it into an applied statistics course for engineers.
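This equivalence is easy to check numerically. The sketch below is illustrative (it is not taken from the cited papers): it takes the utility function to be the cumulative distribution function of an uncertain target *T* and confirms by simulation that the expected utility of a gamble matches its probability of beating an independent draw of *T*.

```python
import math
import random

random.seed(0)

# Uncertain target T ~ exponential with mean R; its CDF plays the role of
# the utility function: u(x) = Pr(T <= x) = 1 - exp(-x / R).
R = 3_600_000

def u(x):
    return 1.0 - math.exp(-x / R)

# A risky payoff X: $5,000,000 with probability 5%, otherwise $0.
def draw_x():
    return 5_000_000 if random.random() < 0.05 else 0

n = 200_000
xs = [draw_x() for _ in range(n)]

# Expected utility of X.
eu = sum(u(x) for x in xs) / n

# Probability that X exceeds an independent draw of the target T.
ts = [random.expovariate(1 / R) for _ in range(n)]
p_exceed = sum(x > t for x, t in zip(xs, ts)) / n

print(eu, p_exceed)  # the two estimates agree up to Monte Carlo error
```

The agreement is no accident: when *X* and *T* are independent and *T* is continuous, E[*F_T*(*X*)] = Pr(*X* > *T*) exactly, which is the content of the equivalence result.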

Assume that the instructor has already familiarized students with basic probability theory and random variables. The remainder of this section now presents an example of a possible lecture using the target-based approach to decision theory.

Suppose your long-run goal is to accumulate wealth with a present value of *t* ten years from now. Your wealth ten years from now is determined by the sum of the present value of your earnings this year (which we might call *x*) plus the present value of your earnings in all subsequent years (which we call *y*). You will have achieved your objective if *x + y* exceeds *t* dollars. Let *T = t - y* define the amount of wealth that must arise from your earnings this year in order for you to reach your goal.

Your objective, this year, is to make decisions with potential short-term consequences, *x*, which maximize the probability of exceeding *T*. Since you are uncertain about your future earnings in subsequent years, *y* is uncertain and should be represented as the random variable *Y*. Thus *T = t - Y* is also a random variable. Hence getting consequence *x* gives you a probability *Pr ( x + Y > t ) = Pr ( x > T )* of meeting your goal. If you are offered a gamble, *X*, giving you a payoff of *x* this year with probability *Pr ( X = x )* for various values of *x*, then the probability of meeting your goal by taking the gamble is *Pr (X > T )*.

Consider the following example:

Suppose your only concern is having wealth of $1,000,000 by the time you retire (which is ten years from now). Suppose you also have mineral rights on a piece of land that you believe may have oil underground. You currently only have $200,000 and it costs $200,000 to drill for oil. If you drill, there is a 5% chance that you will strike oil. If you strike oil, you can sell the property for $5,000,000 this year.

To analyze this problem, suppose first that your wealth ten years from now will be exactly equal to whatever you earn from the oil property. Consider the following questions:

- What is the probability that you will be able to meet your goals if you do not drill? In this case, it's zero. There's no chance of having a million dollars ten years from now if we only have $200,000 now.
- What's the probability that you'll be able to meet your goal if you do drill? In this case, there's a 5% chance of striking oil if we drill. If we do strike oil, we will have five million dollars, which guarantees that we will achieve our goal. Thus the probability of meeting our target if we drill is five percent.
- If you are only interested in maximizing the probability of meeting your goal, should you drill? In this case, the answer is clearly yes.

In analyzing this problem, we assumed, quite unrealistically, that our wealth ten years from now was solely equal to the wealth, *x*, we earned from the oil property.

It is important to remember that there are a lot of other wealth-creating events that could happen in the next ten years. We might win a million dollars from the lottery five years from now. We might have drastic medical expenses that reduce our wealth by half a million dollars. Thus our wealth ten years from now is really the sum of *x* and some random adjustments in that current wealth. If we define *T* to be the difference between our goal and these random adjustments, then maximizing the probability of achieving our goal is the same as maximizing the probability that our near-term payoff exceeds *T*.

Because of our uncertainty about what the future may bring, we are uncertain about the value of *T*. To reflect this uncertainty, we need to specify the probability distribution of *T*. This probability distribution needs to be specified based on our limited knowledge of the possible wealth-creating events that might happen. We now consider some possible specifications of the distribution of *T*. We will show that our solution varies depending upon how we specify the distribution of *T*. We will then discuss which specification of the distribution of *T* is best.

One of the simplest assumptions is to treat *T* as uniformly distributed between zero and five million dollars. As a result, the cumulative probability of achieving the goal given *x* is *x* / 5,000,000 for 0 < *x* < $5,000,000.

Given this assumption,

- The probability of achieving the goal given we do not drill is 200,000 / 5,000,000 or 4%.
- The probability of achieving the goal if we drill and strike oil is one. The probability of achieving the goal if we drill and do not strike oil is zero. Hence the overall probability of achieving the goal if we drill is 5%.

Thus drilling maximizes the probability of achieving the goal.
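Under the uniform assumption these probabilities reduce to one-line arithmetic; a minimal check in Python:

```python
# Target T uniform on (0, 5_000_000): Pr(meet goal | payoff x) = x / 5_000_000.
def p_goal(x):
    return min(max(x / 5_000_000, 0.0), 1.0)

# Not drilling: keep the $200,000.
p_no_drill = p_goal(200_000)

# Drilling: 5% chance of a $5,000,000 strike, 95% chance of losing the stake.
p_drill = 0.05 * p_goal(5_000_000) + 0.95 * p_goal(0)

print(p_no_drill, p_drill)  # 0.04 versus 0.05 -> drilling is preferred
```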

The uniform distribution presumes that the required amount of wealth you need to earn from the oil well is uniformly distributed between zero and five million. An alternative, more optimistic, assumption is that the required amount is exponentially distributed. That is, the cumulative probability of achieving the goal given *x* is 1 - *exp*( -*x* / *R* ) for some constant *R* > 0 and *x* > 0.

This formula indicates that the amount we need to reach our goal is more likely to be between $0 and $2,500,000 than it is to be between $2,500,000 and $5,000,000. Hence it suggests that we may reach our goal even if we do not do that well on the oil venture.

To use this formula, we need to assess *R*. To assess *R*, suppose there's a 50% chance that you'll meet your goal if your payoff this year is $2,500,000. Then we can calculate *R* to equal $3,600,000. Note that *R* is the expected value of *T*, that is, it is the expected amount required in the first year to ensure that the individual meets the long-run goal.

Given this exponential assumption:

- The probability of achieving the goal given we do not drill is 1 - *exp*( -200,000 / 3,600,000 ) = 1 - *exp*( -0.056 ), which equals 5.4%.
- The probability of achieving the goal if we drill and strike oil is 1 - *exp*( -5,000,000 / 3,600,000 ), or about 75%; unlike the uniform case, the exponential target can exceed $5,000,000. The probability of achieving the goal if we drill and do not strike oil is 0. Hence the overall probability of achieving the goal if we drill is 0.05 × 0.75, or about 3.7%.

Thus not drilling maximizes the probability of achieving the goal.
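The exponential case can be checked the same way. One subtlety worth flagging: because the exponential target is unbounded, even a $5,000,000 strike leaves the success probability at about 75% rather than 1, which only strengthens the case against drilling.

```python
import math

# Assess R from the stated judgment: a $2,500,000 payoff gives a 50% chance
# of meeting the goal, so 1 - exp(-2_500_000 / R) = 0.5.
R = 2_500_000 / math.log(2)   # about $3,600,000

def p_goal(x):
    """Pr(meet goal | payoff x) under the exponential target assumption."""
    return 1.0 - math.exp(-x / R) if x > 0 else 0.0

p_no_drill = p_goal(200_000)        # about 0.054
p_drill = 0.05 * p_goal(5_000_000)  # about 0.05 * 0.75 = 0.037

print(p_no_drill, p_drill)  # not drilling now has the higher probability
```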

Our examples with the uniform and exponential assumptions show how our decision about whether or not to drill depends upon our assumptions about *T*. This makes it important to think about our long-term goal and about how future events may adjust whatever we earn from the decision at hand.

In our example, the random variable *Y* (and thus *T*) reflects random changes in our wealth that might arise over future years. If *Y* represents the result of many small changes in wealth, then we might want to assume that *Y* (or the logarithm of *Y*) is normally distributed. This is what is generally done in modeling price change on the stock market. Specifying this distribution requires a specification of the average percentage rate of increase in our wealth and the variance in this rate of increase.

If we specify a large variance, then we are saying that we know very little about future changes in our wealth. When the variance is very large, our distribution over *T* will be approximately uniform and we will tend to take calculated risks. This is referred to as being "risk-neutral." In our example, risk-neutrality led us to invest in the oil well.

When we specify a small variance, we are saying that we are very sure about what we need to earn this year. If "playing it safe" will give us the return we need this year, we will play it safe (which is what we did with the exponential distribution). On the other hand, if "playing it safe" won't give us the return we need, we will take extreme risks. Hence we are very sensitive to risk when the variance is small.
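These two regimes can be illustrated with a normal target; all the numbers below are hypothetical. With a small variance, the safe payoff comfortably clears the believed requirement, so we play it safe. With a very large variance, the target distribution is nearly flat over the relevant payoffs, the choice approaches risk-neutrality, and the higher-expected-value gamble wins.

```python
from statistics import NormalDist

# Safe choice: keep $200,000.  Gamble: $5,000,000 with probability 5%, else $0.
# The goal is met when the payoff exceeds a normal target T ~ N(mu, sigma).

def p_safe(mu, sigma):
    return NormalDist(mu, sigma).cdf(200_000)

def p_gamble(mu, sigma):
    d = NormalDist(mu, sigma)
    return 0.05 * d.cdf(5_000_000) + 0.95 * d.cdf(0)

mu = 100_000  # we believe we only need about $100,000 this year

# Small variance: nearly sure the safe payoff suffices -> play it safe.
print(p_safe(mu, 50_000), p_gamble(mu, 50_000))

# Very large variance: approximately risk-neutral -> the gamble edges ahead.
print(p_safe(mu, 20_000_000), p_gamble(mu, 20_000_000))
```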

Thus different assumptions about *T* imply that individuals have different attitudes toward risk. Conversely, suppose that we know an individual's attitude toward risk but do not know whether the individual is explicitly trying to maximize the probability of meeting some long-term goal. Then an important technical result establishes that a "rational" individual always acts like an individual maximizing the probability of exceeding some uncertain threshold, *T*. This individual's risk attitude is represented by specifying some distribution over *T*. (This result also assumes that there is some random variable *Z* such that *X - Z* and *T - Z* are independent. In other words, if there is correlation between *X* and *T*, that correlation can be attributed to a single common uncertainty, *Z*, which can be subtracted off from *X* and *T*.)

In the oil drilling example, the individual was only concerned with monetary payoffs, but in many actual problems, there are a wide variety of criteria that are important. To approach such problems, we need to draw upon techniques from reliability theory.

At this point, the instructor needs to spend time teaching fault tree analysis and reliability theory. Once the students are reasonably familiar with these techniques, the instructor can then return to the decision problem and begin the transition described in Section 3.5.

We now apply these concepts from reliability theory to our decision theory problem.

Suppose the individual has a list of goals that are important in a decision problem. For each goal, the individual needs to ask "Why is that goal important?" in order to identify a higher-level goal. This process is continued until one ultimate goal is identified. At this point the investigator may build a fault tree using the following steps:

- Specify achievement of the ultimate goal as the top-event in a fault tree.
- Identify those first-level goals which, if achieved, would directly lead to the achievement of this ultimate goal.
- For each of these first-level goals, identify second-level goals which, if achieved, would directly lead to the achievement of that first-level goal.
- Continue, using standard fault-tree approaches, to connect lower-level goals to higher-level goals. Stop when the lowest level goals are fairly concrete.
- For each of your possible consequences *x*, assess the probability of reaching each of the lowest-level goals.
- Infer the resulting overall probability of meeting the ultimate goal for each of your possible consequences, *x*.

If the consequences of your decisions are uncertain, then choose that decision which has the highest probability of meeting your long-run goal.
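The goal-tree calculation in the last two steps can be sketched as a small recursive function; the tree structure and the leaf probabilities below are hypothetical.

```python
from math import prod

# A goal tree mirrors a fault tree: an AND node is achieved when all of its
# sub-goals are achieved, an OR node when at least one is (sub-goals are
# assumed independent).  Leaves hold assessed probabilities of achieving a
# lowest-level goal for a given consequence x.
def p_achieve(node):
    if isinstance(node, (int, float)):
        return float(node)
    gate, children = node
    ps = [p_achieve(c) for c in children]
    if gate == "AND":
        return prod(ps)
    if gate == "OR":
        return 1.0 - prod(1.0 - p for p in ps)
    raise ValueError(f"unknown gate {gate!r}")

# Hypothetical tree: the ultimate goal requires financial security AND
# (career satisfaction OR a fulfilling life outside work).
tree = ("AND", [0.8,                   # financial security
                ("OR", [0.6, 0.5])])   # career OR outside-work fulfillment

print(p_achieve(tree))  # 0.8 * (1 - 0.4 * 0.5) = 0.64
```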

The previous section showed how the target-based approach allowed us to cover decision theory fairly quickly. This section discusses an even more important benefit of integrating decision theory in the curriculum. The success of decision risk analysis in industry is based on its integration into a structured process for decision making called the decision dialogue process. This process is often advertised in consultant brochures as "a practical team-based approach to making tough decisions."

The process involves having the key executives form a "decision review board" which, because of executive time commitments, meets infrequently. Simultaneously, a core team, composed of delegates representing the executives, is formed to work more continuously on the problem. Once formed, the core team:

- Defines the problem and specifies the problem criteria and then reports back to the decision review board for their comment and approval.
- Generates possible solutions and meets with the review board for their comment and approval. The core team then develops a model to analyze each alternative. The team uses the analysis to identify a small number of critical key factors and reports the results back to the review board. This model is used to identify needs for further information.
- Develops a detailed model based on these key factors. The core team then meets with the review board to develop a consensus recommendation and to plan implementation.

Now suppose that Hogg's idealized curriculum and the dialogue decision process are integrated by organizing the curriculum around the following three modules:

- Defining decision criteria
In this module, the instructor can focus on specifying the decision criteria in a practical problem. In order to understand the decision criteria, the instructor teaches the material described in our previous section. This will also involve teaching probability theory and reliability theory.

- Defining the critical factors impacting those criteria
This step involves filtering the myriad variables in the decision problem to identify those few that really matter. Montgomery (1991), Coleman and Montgomery (1993), Box, Hunter, and Hunter (1978), Hahn (1977, 1984), Natrella (1974), and Lorenzen and Andersen (1993) have developed a process for implementing this step based on the design of experiments. Their process presumes knowledge of data collection, statistical analysis, and the design of experiments. Hence the module corresponding to this step should teach data collection, statistical analysis, and the design of experiments.

- Analyzing possible solutions
This last step focuses on carefully modeling the problem using only the critical factors. Because a variety of key factors are deliberately omitted, developing this high-level model may involve constructing a regression model with the omitted variables being implicitly subsumed in the error term. Hence the instructor needs to teach regression analysis during this module.

Since this three-step process is focused around making a decision, the instructor might also wish to interject guidelines for effective statistical consulting throughout the course. Ideally the instructor would motivate the course by providing students with a single case study, which is then solved in the process of teaching the three modules.

General Motors has an internal course focused on a recently transferred GM employee who needs to decide how to sell his house. This problem naturally involves multiple criteria, which can be structured using reliability theory. A variety of factors impact these criteria and design of experiments is required in order to identify the truly critical factors. Finally a high-level model is constructed based only on the most critical factors.

Note that the three modules proposed in this course -- a probabilistic module, a statistical module and a modeling module -- involve teaching all six elements in Hogg's idealized curriculum. Hence the integration of decision theory and reliability theory allows the instructor to teach Hogg's entire curriculum in a course focused around a practical decision process widely used in many large corporations.

My experience with teaching this proposed new approach is based on:

- Regular semester (advanced undergraduate) engineering statistics courses at Oakland University.
- Intensive one-week engineering statistics courses at Austrian Technical University. As Birch (1995) noted, the teaching of such courses often involves a diverse audience with one common goal, "to achieve their educational objectives as quickly as possible and to begin using their new skills immediately." Hence it is a different audience than is typically encountered in regular semester classes.
- In-house continuing education classes to GM employees and executives. This audience is very pragmatic and has little tolerance for concepts that are not obviously applicable.

Teaching these three kinds of classes has given me significant experience on the kinds of demands associated with different teaching formats. It has also allowed me to experiment with different ways of teaching decision theory.

When I began teaching, I employed the standard approach to decision theory. As the introduction noted, this approach focuses first on teaching expected value, then on pointing out the limitations of expected value, and then on teaching expected utility. In several of my experiences teaching this traditional approach, a few of my students openly wondered why I was wasting their time teaching expected value if it was a flawed approach.

I would then go on to teach expected utility. However, the amount of class time remaining to teach expected utility was relatively short. While the students did seem to enjoy my discussions of multiattribute utility, there was not enough time in this introductory course to make them fully comfortable with the concept. It seemed clear that few, if any, of my students would actually solve any of their own problems by assessing utility functions.

The fact that I was teaching material which the students would almost certainly never apply was obviously of considerable concern. As Romero et al. (1995, p. 14) wrote:

"What students really learn has little to do with what they do by heart or in a random way or on an examination after days devoted to intensively studying the subject. What they actually learn is what they are able to apply in their jobs ten years later."

From that perspective, my students were not learning much. Of course, one might argue that students would be able to apply these concepts if they also took a follow-up advanced class in utility theory; but like most universities, my college had little interest in offering such a class.

As a result, I began tentatively teaching decision theory using first the target-based approach and then the standard utility-based approach.

To do this, I would teach decision trees and then focus on the problem of an individual considering several job offers where each job offer was considered in terms of money, social life, and sports activities. I would ask the students to suppose that the individual would only be satisfied if they were satisfied in each of these three areas of life. I would then have them work through the probability of the individual being satisfied with the various job prospects available. The students had little problem understanding this formulation and easily worked through the case study with minimal involvement from me. Indeed, I had several lunchtime experiences helping students solve decision analysis problems from their workplace by making notes on napkins!
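The job-offer case study reduces to a three-leaf AND gate: the individual is satisfied only if satisfied with money, social life, and sports, so (assuming independence) the probabilities multiply. A sketch with hypothetical assessments:

```python
from math import prod

# Assessed probabilities of being satisfied with money, social life, and
# sports for each job offer (illustrative numbers, assumed independent).
jobs = {
    "Job A": (0.9, 0.5, 0.6),
    "Job B": (0.6, 0.8, 0.8),
    "Job C": (0.7, 0.7, 0.7),
}

# Overall satisfaction requires satisfaction in all three areas.
p_satisfied = {job: prod(ps) for job, ps in jobs.items()}
best = max(p_satisfied, key=p_satisfied.get)

print(p_satisfied, best)  # Job B maximizes the probability of satisfaction
```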

Five weeks later, I had the students go through a related problem using the multiattribute utility approach. I spent one day teaching multiattribute utility theory and the assessment techniques and the second day applying it to the case study. In this second approach to multiattribute utility theory, I spent considerably more time leading the class through the case. It was clear that the students could not work through the case study without considerable involvement from me. None of the students ever discussed applying this formulation to problems in their workplace.

The fact that my students could apply the target-based approach with minimal involvement from me was especially appealing and indicated that they would leave the class with a technique which they could use.

This emphasis on leaving students with techniques they can immediately use was even more important in my internal classes at General Motors. There were many GM project teams who wished to apply decision risk analysis. These teams were often assigned decision analysis facilitators and were also asked to take a short course in decision risk analysis prior to actually using decision risk analysis to make the decision (an example of "just-in-time" education). We initially tried teaching such utility-based concepts as "willingness to pay" which focuses on an individual's hypothetical willingness to pay for improvements along various attributes. Thus in one application, we had individuals assess their willingness to pay for a one percent improvement in fuel economy, their willingness to pay for a one percent improvement in quality, etc. We did, in fact, complete many projects using these kinds of notions.

We continued to encounter resistance from individuals accustomed to setting targets and then focusing efforts to meet those targets. For example, in GM's Powertrain Planning groups, individuals preferred to use future envisioning exercises to set long-run corporate fuel economy targets. They would then evaluate various decisions in light of their contribution toward meeting those targets. This target-based approach to decision-making was quite entrenched in the organization.

We did spend considerable time trying to convince people to abandon target-based approaches for utility-based approaches. But since there are no degrees offered in internal GM University classes, the professor's only control over the students is their belief that what is being taught will help them with their work. As a result, we eventually stopped trying to replace the target-oriented approach and focused more on making that target-oriented way of thinking more valid.

Were my experiences unique? A review of the literature on organizational decision making shows that this kind of target-based approach is common in most bureaucratic organizations. Indeed, Herbert Simon earned the Nobel Prize in part for his theory of bounded rationality, which emphasized that individual decision making is based on searching for alternatives that meet pre-specified targets. As Simon (1978, p. 10) noted,

"Research in information processing psychology provides conclusive evidence that the decision-making process in problem situations conforms closely to the models of bounded rationality."

The prevalence of target-seeking behavior in organizations reinforces my belief that the target-oriented approach to teaching decision risk analysis is much more natural for those who teach applied statistics to engineers.

This paper presents an alternative way to teach decision theory. This approach has several potential advantages:

- Formally teaching the concept of utility requires a discussion of reference lotteries and the principle of maximizing expected utility. In the proposed approach, we eliminate this entire discussion and replace it with the idea of maximizing the probability of achieving a goal. This notion is familiar to many statistics students.
- Multiattribute utility theory is frequently taught using tools and techniques specific to decision theory. In the proposed approach, it is taught as an application of reliability theory, which most students must learn anyway.
- This approach to decision theory draws heavily upon existing elements in the statistical curriculum with minimal introduction of non-statistical concepts. Hence it may be easier to incorporate into a statistics class.
- The new approach allows the instructor to organize all of the elements in Hogg's idealized curriculum as part of a single coherent process for making decisions. This enables the instructor to give a unifying theme to the survey course in applied statistics.
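The first two bullets can be illustrated with a small simulation. As Bordley and LiCalzi (2000) show, maximizing the probability of meeting a target is formally equivalent to maximizing expected utility with an appropriately chosen utility function, and when several independent attributes must all meet their targets, the joint probability multiplies exactly as the reliability of a series system does. The sketch below is purely hypothetical: the attribute targets, the two candidate designs, and their normal outcome distributions are invented for illustration, not taken from any GM application.

```python
import random

random.seed(0)

# Hypothetical targets for two attributes (invented for illustration)
TARGET = {"fuel_economy": 30.0, "quality": 0.95}

# Hypothetical designs: each attribute's outcome is modeled as an
# independent normal random variable, given here as (mean, std dev)
ACTIONS = {
    "design_A": {"fuel_economy": (31.0, 2.0), "quality": (0.94, 0.02)},
    "design_B": {"fuel_economy": (29.5, 0.5), "quality": (0.97, 0.01)},
}

def prob_meets_all_targets(action, n=100_000):
    """Monte Carlo estimate of P(every attribute meets its target).

    With independent attributes this is the series-system structure
    from reliability theory: the 'system' (the decision) succeeds
    only if every 'component' (attribute) meets its target.
    """
    hits = 0
    for _ in range(n):
        if all(random.gauss(mu, sd) >= TARGET[attr]
               for attr, (mu, sd) in ACTIONS[action].items()):
            hits += 1
    return hits / n

# The target-based decision rule: pick the action that maximizes
# the probability of meeting all targets.
best = max(ACTIONS, key=prob_meets_all_targets)
print(best)
```

Note that the rule can prefer a design with a lower mean on one attribute if its outcomes are more reliably above target, which is exactly the reliability-theoretic intuition the proposed course exploits.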

In the spirit of Barabba's vision, this paper, and the new approach it describes, show how decision theory can be blended with other statistical tools in a way that is extremely helpful to high-level decision making.

Barabba, V. P. (1991), "Through a Glass Less Darkly," *Journal of the American Statistical Association*, 86, 1-8.

Birch, J. B. (1995), "Ten Suggestions for Effectively Teaching Short Courses to Heterogeneous Groups," *The American Statistician*, 49, 190-195.

Bisgaard, S. (1991), "Teaching Statistics to Engineers," *The American Statistician*, 45, 274-283.

Bordley, R. F. (1998), "R&D Project Generation versus R&D Project Selection," *IEEE Transactions in Engineering Management*, 45, 407-413.

Bordley, R. F., and LiCalzi, M. (2000), "Decision Analysis Using Targets instead of Utility Functions," *Decisions in Economics and Finance*, 23, 53-74.

Box, G. E. P., Hunter, W. G., and Hunter, J. S. (1978), *Statistics for Experimenters*, New York: John Wiley and Sons, Inc.

Castagnoli, E., and LiCalzi, M. (1996), "Expected Utility without Utility," *Theory and Decision*, 41, 281-301.

Chelst, K. (1998), "Can't See the Forest Because of the Decision Trees: A Critique of Decision Analysis in Survey Texts," *Interfaces*, 28, 80-98.

Coleman, D. E., and Montgomery, D. C. (1993), "A Systematic Approach to Planning for a Designed Industrial Experiment," *Technometrics*, 35, 1-12.

Gould, F. J., Eppen, G. D., and Schmidt, C. P. (1991), *Introductory Management Science*, Englewood Cliffs, NJ: Prentice Hall.

Hahn, G. J. (1977), "Some Things Engineers Should Know About Experimental Design," *Journal of Quality Technology*, 9, 13-20.

----- (1984), "Experimental Design in a Complex World," *Technometrics*, 26, 19-31.

Hahn, G. J., and Hoerl, R. (1998), "Key Challenges for Statisticians in Business and Industry (with discussion)," *Technometrics*, 40, 195-213.

Hillier, F. S., and Lieberman, G. J. (1990), *Introduction to Operations Research* (5th ed.), New York: McGraw-Hill.

Hoerl, R. W., Hooper, J. H., Jacobs, P. J., and Lucas, J. M. (1993), "Skills for Industrial Statisticians to Survive and Prosper in the Emerging Quality Environment," *The American Statistician*, 47, 280-291.

Hogg, R. V. (1985), "Statistical Education for Engineers: An Initial Task Force Report," *The American Statistician*, 39, 168-175.

----- (1991), "Statistical Education: Improvements are Badly Needed," *The American Statistician*, 45, 342-343.

Howard, R. A. (1988), "Decision Analysis: Practice and Promise," *Management Science*, 34, 679-695.

Keeney, R., and Raiffa, H. (1976), *Decisions with Multiple Objectives: Preferences and Value Tradeoffs*, New York: John Wiley and Sons, Inc.

Kusnic, M. W., and Owen, D. (1992), "The Unifying Vision Process: Value Beyond Traditional Analyses in Multiple Decision Maker Environments," *Interfaces*, 22, 150-166.

Levine, D., Berenson, M., and Stephan, D. (1998), *Statistics for Managers*, Englewood Cliffs, NJ: Prentice Hall.

Lorenzen, T. J., and Anderson, V. L. (1993), *Design of Experiments*, New York: Marcel Dekker.

Montgomery, D. C. (1991), *Design and Analysis of Experiments* (3rd ed.), New York: John Wiley and Sons, Inc.

Natrella, M. G. (1974), "Design and Analysis of Experiments," *Quality Control Handbook* (3rd ed.), Juran, J. M. (ed.), New York: McGraw-Hill, 27-35.

Ragsdale, C. (1995), *Spreadsheet Modeling and Decision Analysis* (2nd ed.), Cincinnati: Southwestern College Publishing.

Romero, R., Ferrer, A., Capilla, C., Zunica, L., Balasch, S., Serra, V., and Alcover, R. (1995), "Teaching Statistics to Engineers: An Innovative Pedagogical Experience," *Journal of Statistics Education* [Online], 3(1). (http://www.amstat.org/publications/jse/v3n1/romero.html)

Simon, H. A. (1978), "Rationality as Process and as Product of Thought," *American Economic Review*, 68, 1-16.

Smith, G. (1998), "Learning Statistics by Doing Statistics," *Journal of Statistics Education* [Online], 6(3). (http://www.amstat.org/publications/jse/v6n3/smith.html)

Snee, R. (1993), "What's Missing in Statistical Education?" *The American Statistician*, 47, 149-154.

Winston, W. L. (1994), *Operations Research: Applications and Algorithms* (3rd ed.), Belmont, CA: Wadsworth Publishing.

Robert F. Bordley

GM University

MC 482-D20-B24

Renaissance Center

P.O. Box 100

Detroit, MI 48265-1000
