Customized Content Delivery for Graduate Management Education: Application to Business Statistics

Owen P. Hall, Jr., and Ken Ko
Pepperdine University

Journal of Statistics Education Volume 16, Number 3 (2008), www.amstat.org/publications/jse/v16n3/hall.html

Copyright © 2008 by Owen P. Hall, Jr., and Ken Ko, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of the editor.


Key Words: Residential MBA programs; Learning support systems; Intelligent agents; Neural nets; Business statistics.

Abstract

Globalization is bringing about a radical "rethink" regarding the delivery of graduate management education. Today, many students entering a residential MBA program do not possess an undergraduate degree in business. As a result, many business schools are increasingly turning to the Internet to provide "customized" instructional content to ensure that students can remain competitive throughout the program. The purpose of this paper is threefold: 1) to estimate student performance in a residential MBA program; 2) to outline a process for identifying specific learning support resources based on student backgrounds and capabilities; and 3) to illustrate the screening process in providing business statistics support content to students requiring additional preparation. The results show that neural net based classification techniques can effectively identify students for the purpose of providing additional learning resources. Business statistics is one area in which this screening process has been used to deliver specialized content to students with a variety of backgrounds enrolled in an MBA residential program.

1. Introduction

Many students enrolling in a residential MBA program are challenged by the competitive rigors of graduate school. MBA-level business statistics is one discipline in which students often face significant learning challenges. This can be attributed to a number of factors, including 1) lack of an undergraduate business degree, 2) lack of significant work experience and 3) time elapsed since graduation. In response, many business schools are providing a variety of support systems to address these challenges. Schools of business also recognize that the complexities and interrelated nature of modern business call for an integrated as well as results oriented learning framework for graduate management education (Camuffo, 2004). Furthermore, it has long been recognized that active, participatory and systematic learning is more effective than passive learning and that learners need feedback early and often (Atwater, 2006). One learning stratagem that supports this perspective is the Instructional Management System (IMS) cooperative initiative (Graves, 1999). This initiative is designed to promote systematic thinking regarding the delivery of higher education, to improve learning outcomes and to increase return on instruction investments. Specific principles of the IMS initiative are: 1) education involves more than a single course; 2) a course is more than content; 3) content is more important than lecture notes; 4) convenience is a priority and 5) quality assurance requires an integrated learning approach. Web-based learning support systems are the embodiment of the IMS initiative. These systems are receiving increased attention because they provide the capability to deliver customized content as well as develop specializations to match student career goal aspirations (Kathawala, 2002).

A second initiative that supports customized knowledge transfer is the E-Learning Success Model (Delone, 2003). An overview of the model is highlighted in Figure 1.

Figure 1 – E-Learning Success Model

This paradigm consists of three distinct but interconnected phases: system design, system delivery and system outcomes. Each phase consists of a number of specific performance metrics. For example, service quality can be measured using availability, reliability and response time. Assessment rubrics can be used for evaluating each performance metric. The model’s architecture suggests that the overall effectiveness of e-learning depends on the attainment of success at each of the three stages as well as in the aggregate. The Delone paradigm was evaluated in an undergraduate quantitative methods core course. The results from this assessment underscore the efficacy of the paradigm as an effective learning platform (Holsapple, 2006).

The basic approach of the screening process is to identify students entering an MBA program who are likely to be marginal performers. Internet-based artificial agents then seek to identify these students’ areas of weakness and provide them with the appropriate learning support resources. The ultimate goal is to aid these students to succeed. This paper consists of three parts: 1) a review of the relevant literature and a brief overview of neural nets; 2) a statistical analysis of an MBA residential program and 3) an application of the screening paradigm to business statistics.

2. Background and Literature

Today, the general pedagogical direction in management education is moving increasingly towards a learning-centric perspective (Driver, 2002). Accordingly, the roadmap to effective learning is a flexible and customized curriculum. This perspective is predicated on the fact that students entering an MBA program do not have the same technical, academic or work experience backgrounds. Therefore, providing self-paced "customized" content, as part of the overall curriculum design, can further enhance the learning experience. Furthermore, students tend to participate more in learning systems that are content-rich and that feature extensive variety (Neo, 2004).

The literature is replete with studies on estimating student performance in an MBA program (Hancock, 1999; Clayton, 2004). The primary focus of these studies has been on admissions screening. Generally, the Graduate Management Admission Test (GMAT) and undergraduate grade point average (GPA) have been found to be correlated with successful performance in residential MBA programs (Hedlund, 2006; Sireci, 2006). Many of the analytical admission studies have used classical regression techniques, yielding R-squares in the 0.20 to 0.40 range (Yang, 2000). Classification analysis, which provides a more direct identification of student performance potential, has been shown to be somewhat more efficient in estimating student performance than classical regression techniques (Wilson, 1995).

These screening models can also be used to identify learning support resources for matriculating students. This capability has been brought about by the integration of improved screening models like neural nets and customized content delivery systems. Customized asynchronous learning systems provide a practical environment for competency development. Once the screening system has classified the student as a good candidate for additional learning support, the specific resource learning package can be identified and delivered. Using customized content and tutorials, which also can be delivered via the Internet, students can control the learning pace which in turn should help ameliorate the frustration of "falling behind." Figure 2 illustrates how the screening paradigm could be used for providing learning support resources.

Figure 2 – Learning Support System Design Concept

This learning support system consists of the following three basic components.

Internet-based content delivery and instruction has been found to be increasingly effective in providing management education (Lundgren, 2003; Shih, 2003; H. Huang, 2002). For students requiring remedial instruction the learning process can take place at a convenient time and place. Furthermore, offering support content via the Internet provides students with a purposeful entry to a wide variety of online resources and to a new era of learning technologies. This approach could be part of an overall blended (hybrid) learning environment that combines the best practices of both traditional and web-based instruction (Hwang, 2006).

The identification and delivery of customized remedial content via the Internet can be facilitated through the use of so-called intelligent tutors. These artificial intelligence based systems are defined as functional autonomous entities capable of adapting to changing demands such as those found in many unstructured and semi-structured educational applications. Intelligent agents allow the active reconfiguration of the learning presentation according to current requirements and the availability of information sources of varying quality. Typically, intelligent tutors should possess four basic characteristics: autonomy, proactivity, flexibility and user-friendliness. The "social" interface between the agent and the student should be highly visual and require limited user input. It is within this design context that the learning objectives can be best achieved. For example, if a student is having difficulty mastering statistical forecasting, as detected by testing, simulation, or self-assessment, then the synthetic agent would identify and provide specific additional learning content to the student via the Internet. The content delivery modes could include streaming videos, virtual seminars and simulations. Intelligent tutors used to facilitate these learning modes have received increased attention throughout academe with particular emphasis on management education (Ghaoui, 2003; Cheung, 2003).
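The dispatch step an agent performs, mapping a diagnosed weakness to a support resource, can be sketched as follows. The topic names, scores, thresholds and content descriptions are hypothetical, not drawn from the system described here:

```python
# Hypothetical mapping from diagnosed weakness to support content;
# topic names and resource descriptions are illustrative only.
SUPPORT_CONTENT = {
    "forecasting": "streaming video: time-series basics",
    "regression": "web-based simulation: fitting a line",
    "hypothesis_testing": "virtual seminar: tests of means",
}

def recommend(assessment_scores, passing=0.7):
    """Return support resources for every topic scored below passing."""
    return [SUPPORT_CONTENT[topic]
            for topic, score in assessment_scores.items()
            if score < passing and topic in SUPPORT_CONTENT]

# A student weak in forecasting but strong in regression:
print(recommend({"forecasting": 0.55, "regression": 0.9}))
# -> ['streaming video: time-series basics']
```

In practice the assessment scores would come from the testing, simulation, or self-assessment steps described above.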

3. Neural Nets

Artificial Neural Networks (NNs), another branch of artificial intelligence, are designed to address the problem of forecasting complex patterns by simulating the biological neural network found in the human brain. Representing a relatively new approach to predictive modeling, neural networks have been characterized as "computing devices that use design principles similar to the information processing system of the human brain" (Bharath, 1994). NNs use complex network relationships to mimic the connections between sets of data. Among other things, NNs have the advantage of not requiring prior assumptions about possible relations, as is the case with traditional analysis methods, e.g., linear regression. In the most common schema, each neuron in one layer is connected to each neuron in the preceding layer, as illustrated in Figure 3. In this example, the prediction of GPA at time of graduation from an MBA program is derived as a function of input states and a set of weights. The specific input states in this illustration are the following: 1) GMAT score, 2) type of undergraduate degree and 3) prior work experience (years). The values for the input states may come from the activation of other neurons or specific environmental factors. The example numerical value inside the node represents the threshold value for firing, or activating, the neuron. In this case, if the weighted sum of the inputs exceeds 1.5 the neuron is "fired," which suggests a certain level of change in GPA. The values for the weights and thresholds are determined through an iterative process with the goal of minimizing the aggregate error.
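The threshold firing rule of the Figure 3 node can be sketched as follows. Only the 1.5 threshold comes from the example; the input values and weights are illustrative:

```python
def neuron_fires(inputs, weights, threshold=1.5):
    """Weighted-sum threshold unit: fire if and only if the
    activation exceeds the threshold (1.5 in the Figure 3 example)."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation > threshold

# Hypothetical normalized input states -- GMAT score, business degree
# indicator, work experience -- with illustrative weights.
inputs = [0.9, 1.0, 0.6]
weights = [1.0, 0.5, 0.8]
print(neuron_fires(inputs, weights))
# fires: 0.9 + 0.5 + 0.48 = 1.88 > 1.5, so True
```

In training, the weights and threshold would be adjusted iteratively to minimize the aggregate prediction error, as described above.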

Figure 3 – Example Predictive Neural Node

The architecture of an NN consists, at a minimum, of three layers: an input layer, a "hidden" layer and an output layer; more elaborate designs include additional hidden layers. There are basically two types of NN arrangements: feed-forward networks, in which signals flow only from input toward output, and feedback (recurrent) networks, in which outputs can be fed back as inputs.

Neural net models, like multiple regression, are affected by degrees-of-freedom considerations; adding hidden layers increases the number of parameters that must be estimated from a given database. Neural networks have seen increased use in educational applications (Choi, 2001). Specifically, neural nets appear to be the analytical tool of choice when the underlying relationships between variables are somewhat ill-defined, as in the case of learning (Gonzalez, 2002). Neural nets are often used for classification analysis, wherein the target variable is characterized into two or more categories. An example neural net classification application is college admissions, in which the target variable categories could consist of accept, reject or hold for further consideration (Hoefer, 2000). The following section presents an overview of the database and some preliminary statistical findings.

4. Database

The database consisted of files on students who graduated from an Association to Advance Collegiate Schools of Business (AACSB) accredited residential MBA program over the period 2001 to 2004. The sample size was 412. Table 1 provides descriptive statistics for the variables used in this study. The data show that slightly less than 50 percent of the students had an undergraduate business or economics degree and that 31 percent came from ranked undergraduate schools. The incoming GPA data was adjusted for grade inflation. Undergraduate grade inflation has been averaging approximately 0.015 per year over the past 30 years (Kezim, 2005). The difference between the average "unadjusted" and "adjusted" incoming GPA is 0.06. The outgoing GPA was not adjusted since the average over the timeframe of the database had not changed. The following four qualitative variables were also included in the database.

These factors could offer the opportunity for more detailed student screening.
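The grade-inflation adjustment described above can be sketched as follows. The 0.015-points-per-year rate is from the text (Kezim, 2005); the four-year example is illustrative and chosen to reproduce the reported 0.06 average difference between unadjusted and adjusted GPA:

```python
def adjust_gpa(raw_gpa, years_since_baseline, rate=0.015):
    """Deflate a raw undergraduate GPA by the assumed annual
    grade-inflation rate (0.015 points per year, per Kezim 2005)."""
    return raw_gpa - rate * years_since_baseline

# e.g. a 3.20 GPA earned 4 years after the baseline year:
print(round(adjust_gpa(3.20, 4), 2))  # -> 3.14
```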

Table 1 - Cohort Group Descriptive Statistics

Variable                                       Mnemonic   Mean    S. Dev.
Entering Age (yrs.)                            Age        27.2    3.37
Gender (1 = female)                            Gender     0.35    -
Undergraduate Business/Econ Degree (1 = yes)   UBED       0.47    -
Undergraduate GPA *                            UGPAA      3.11    0.44
Business Experience (yrs.)                     EXP        3.83    2.69
GMAT Score                                     GMAT       617     61
Standard Acceptance Process (1 = yes)          ACC        0.92    -
College Ranked (1 = yes)                       Ranked     0.31    -
MBA Program Type (1 = standard program)        Type       0.78    -
Fast Track (1 = 15 mos. or less)               Fast       0.21    -
Graduate GPA                                   GGPA       3.53    0.21

* Adjusted for grade inflation.

 

Table 2 presents zero-order correlation coefficients for the continuous variables. The correlation data revealed the following statistically significant relationships with GGPA:

· GMAT (r = 0.25, p = 0.00)
· UGPAA (r = 0.36, p = 0.00)
· EXP (r = 0.22, p = 0.00)
· Age (r = 0.17, p = 0.00)

Table 2 - Correlation Matrix (Pearson)

Variables     1      2      3      4      5
1. Age        -
2. UGPAA      .01    -
3. EXP        .81*   .03    -
4. GMAT       -.06   .14*   -.03   -
5. GGPA       .17*   .36*   .22*   .25*   -

*Significant at 0.05



UGPAA had the largest positive correlation with GGPA. Table 3 presents GGPA means for the categorical factors (Gender, UBED, ACC, Ranked, Type and Fast). The nature of the acceptance process (ACC), the type of program (Type), and duration of the program (Fast) were found to be statistically significant at the 0.05 level.

Table 3 – Comparison of GGPA Means for Categorical Factors

Factor        Category   GGPA     Category   GGPA
Gender        M          3.53     F          3.53
UBED          Yes        3.52     No         3.54
ACC*          Yes        3.54     No         3.46
Ranked        Yes        3.55     No         3.52
Std. Type*    Yes        3.55     No         3.47
Fast*         Yes        3.60     No         3.51

*Significant at 0.05



Figure 4 shows a scatter plot of GGPA versus UGPAA.

Figure 4 – Plot of UGPAA versus GGPA (r=0.36, p=0.00)

 

More specifically, the data show students admitted under "special" circumstances tended to "under-perform" compared to students admitted under the standard acceptance process (p=0.03). Additionally, students enrolled in fast track programs (15 months or less) outperformed those students in the standard, 20-month program (p=0.00). Lastly, the rank of the student’s undergraduate degree program was not linearly correlated with GGPA (p=0.25). The results of a more detailed analysis of the database are highlighted in the following section.

5. Results Analyses

The database was more fully explored using a feed-forward neural net (NN) and logistic regression analysis (Wald forward procedure). These are only two of many classifying techniques. The primary goal of this exercise was to build a reasonably effective classifier for demonstrating the two-stage delivery process and not to evaluate a broad range of classifiers. The neural net model consisted of one input, one hidden and one output layer. A dichotomous categorical variable based on student GPA at time of graduation was used. The two categories were marginal, defined as a student with a GPA less than 3.3, and successful, defined as a student with a GPA of 3.3 or greater (Hardgrave, 1994). The division yielded approximately 20 percent of the student records classified as marginal and the remaining 80 percent classified as successful. The large ratio between successful and marginal (i.e., four-to-one) tends to complicate the analytical screening process. This preliminary analysis included the entire database (N= 412). The results of the screening analysis are highlighted in Table 4 and suggest that the neural net model "outperformed" the logistic regression model. This was especially the case for correctly classifying marginal students. The overall accuracy rate of the neural net is 89 percent compared to 85 percent for the logistic regression. This statistic is defined as the proportion of correct classifications across all categories.
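The dichotomous labeling rule described above can be sketched as follows. The sample GPAs are illustrative only, not the study data:

```python
# Label graduates marginal (GPA < 3.3) or successful (GPA >= 3.3),
# following the cutoff used in the study (Hardgrave, 1994).
def classify(ggpa, cutoff=3.3):
    return "marginal" if ggpa < cutoff else "successful"

# Illustrative graduate GPAs -- not drawn from the 412-record database.
ggpas = [3.1, 3.6, 3.45, 2.9, 3.53, 3.8, 3.3, 3.7, 3.2, 3.5]
labels = [classify(g) for g in ggpas]
share_marginal = labels.count("marginal") / len(labels)
print(share_marginal)  # 3 of 10 here; the study's split was roughly 20/80
```

The resulting class imbalance (about four successful records per marginal record in the study) is what complicates the screening analysis.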

Table 4 – Comparison of Classification Analysis of Target Variable (%)

                      Neural Net                   Logistic
                      Classified    Classified    Classified    Classified
                      Marginal      Successful    Marginal      Successful
Actual Marginal       100           0             20            80
Actual Successful     14            86            3             97


Neural nets have previously been used for predicting MBA student success (Naik, 2004). This NN model included 10 explanatory variables and a database consisting of 184 observations. The results indicated that the NN model outperformed both the logit and probit models based on the same classification scheme used in the present study. The results reported in Table 4 are consistent with the Naik findings. In terms of classification errors, a Type II error in this case is defined as classifying as successful a student who actually was marginal. A Type I error would be classifying as marginal a student who actually was successful. In the context of this application a Type II error would be viewed as more serious than a Type I error. Since the purpose of the proposed screening process is to identify marginal students, reducing Type II errors is essential.
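Using the study's roughly 20/80 marginal/successful base rates, the neural net's overall accuracy and error rates can be recovered arithmetically from the Table 4 percentages:

```python
# Row-conditional classification rates for the neural net (Table 4),
# weighted by the approximate 20/80 marginal/successful base rates.
p_marginal, p_successful = 0.20, 0.80
nn = {"marginal":   {"marginal": 1.00, "successful": 0.00},
      "successful": {"marginal": 0.14, "successful": 0.86}}

overall = (p_marginal * nn["marginal"]["marginal"]
           + p_successful * nn["successful"]["successful"])
type_ii = nn["marginal"]["successful"]   # marginal classified as successful
type_i = nn["successful"]["marginal"]    # successful classified as marginal
print(round(overall, 2), type_ii, type_i)  # -> 0.89 0.0 0.14
```

The weighted sum reproduces the 89 percent overall accuracy reported above, and shows the neural net's Type II error rate of zero.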

Table 5 shows the relative importance of the predictor variables for the two modeling approaches where GGPA is the classification variable (Marginal and Successful). The predictor variables are ranked based on the neural net (NN) model importance factors (IP). The neural net classifier ranked undergraduate GPA as the most "important" factor. The logistic model uses the odds ratio as an indication of the relative importance of the statistically significant predictor variables. For example, a 10-point increase in GMAT (the minimum increment) improves the odds of being classified as successful by a factor of approximately 1.1. GMAT was also ranked last in terms of relative importance using the neural net model.

Table 5 – Analysis of Variable Relative Importance

Variable    NN (IP)    Logistic (Odds Ratio)*
UGPAA       0.43       7.79
ACC         0.24       -
EXP         0.08       1.22
Type        0.03       2.09
GMAT        0.01       1.01

*Statistically significant using Wald test @ 0.05
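The odds-ratio interpretation above can be checked arithmetically: compounding the per-point ratio of 1.01 from Table 5 over the 10-point minimum GMAT increment gives roughly 1.1.

```python
# A per-point odds ratio of 1.01 compounds multiplicatively
# over a 10-point GMAT increment.
per_point = 1.01
per_10_points = per_point ** 10
print(round(per_10_points, 2))  # -> 1.1
```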

One additional analytical step in assessing the efficacy of the screening process would be to construct a hold-out group from the database. Out-of-sample assessment is a standard method for testing the forecasting accuracy of the in-sample based screening model. However, the relatively small size of the database precluded a meaningful evaluation. Typically, a sample size on the order of 1,000 or more with a hold-out group of 25 percent is the minimum database requirement for "testing" the model, especially if there are multiple categories (Nguyen, 2001).

6. Applications to Business Statistics

The screening process outlined above will now be illustrated for providing customized business statistics content. A similar approach can be followed for other preparatory courses such as accounting, economics and information systems. Many students entering an MBA program without a background in quantitative reasoning, in general, and statistical analysis, in particular, often have difficulty "keeping up" at the graduate level. Even those students who have had an undergraduate statistics course may need a refresher due to the "technical nature" of the material or the elapsed time since taking the course. Interestingly, many "new" MBA curricula have reduced the number of credit hours for quantitative-type courses in favor of more general leadership themes (e.g., ethics) as well as electives (Kleiman, 2007; Richards-Wilson, 2006; Bennis, 2005; Mangan, 2003). Along these lines, the call for more relevancy in teaching business statistics has a long pedigree (McAlevey, 2001). Educators are well aware of these requirements and have been active in developing strategies to provide business statistics with a more practical emphasis (Lawrence, 2004). For example, using an interactive scatter plot where the student draws lines and estimates correlation coefficients represents a personalized approach for understanding the nature and characteristics of simple linear regression. Internet-based learning systems have been shown to not only help students appreciate basic business statistical concepts but also to help develop real-world problem solving skills (Su, 2000). Some typical business statistics background requirements for MBA students include:

In the screening process outlined in this paper, once a student has been identified as requiring additional business statistics learning assistance, the student is directed to the Harvard Quantitative Analysis: An Introductory Course. This Internet-based course is taught through a case study format. The student is assigned the role of a consultant working on a project for a resort hotel in Hawaii. Specific topics include: data description, sampling & estimation, hypothesis testing, simple regression, multiple regression, and decision analysis (decision trees, sensitivity analysis, the value of information, and risk analysis). The course incorporates Excel analysis and takes about 30 hours to complete. Approximately 25 percent of the incoming class was assigned to the preparatory course prior to the beginning of the fall term. The results show that there was no difference in final grades in subsequent core statistical and quantitative courses between students who took the required preparatory class and students who did not (p=0.35, N = 161). This, of course, is an encouraging outcome.

Providing graduate level business statistics know-how in an online environment offers many challenges, including a robust learning context and ongoing assessment (Grandzol, 2004). Nevertheless, many students who studied business statistics online evaluate this learning venue as at least as effective as traditional learning (Dutton, 2005; Suanpang, 2005). The data also show that student persistence, and not the instructional format, is the key to learning success in business statistics (McLaren, 2004). Table 6 provides some examples of different business statistics content support formats and delivery options via the Internet. One of the learning requirements for MBA programs is for students to understand how spreadsheets can be used to support statistical analysis. Excel-based demonstrations offer a convenient approach to learn a variety of business statistics principles such as standard deviation and regression. These demonstrations can be implemented using a combination of PowerPoint slide show presentations, Excel templates and experiments (Johnson, 2003). In addition to these basic capabilities the web is replete with more extensive statistical learning venues such as interactive scatter plots, regression simulations and control charts (Mulekar, 2000).

Table 6 – Examples of Business Statistics Internet-Based Content Support

Topic                    Content Focus (1st level)            Example Delivery Mode
Statistics Overview      Role of Statistics in Business       Streaming Video
Descriptive Statistics   Graphical Presentations              Excel Templates
Database Acquisition     Database Design                      Expert System
Time Series              Forecasting                          Bureau of Labor Statistics
Variable Relationships   Scatter Diagrams/Regression Line     Web-based Simulation

Business statistics simulation models also provide the student with the context to develop analytical thinking in a realistic problem-solving environment. Evidence has shown that students engaged in simulations retain considerably more material than through traditional instructional methods, such as lectures (Raffle, 2005; Johne, 2003). In a simulation learning environment, students are able to work with spreadsheets that are linked to the larger business decision-making process, allowing students to develop higher order cognitive skills. The content delivery mode components outlined in Table 6 are not mutually exclusive and can be used in combination.

One way to "manage" the process of content identification and delivery is through the use of intelligent tutors (Graesser, 2005). Expert system based tutors can be used to assist students in analyzing statistics-based cases. For example, in a case involving a simple regression analysis of vehicle mileage, this knowledge-based system would use criteria to determine the type of opinion that the student should render after conducting an analysis of the case. The consultation process consists of a series of interconnected questions, as illustrated with the following examples.

The expert system prompts the student to identify the target variable for the case. If the target variable is not identified the expert system provides feedback. Next the student would be asked to identify the predictor variable. Again, feedback is provided for an incorrect response. The consultation dialogue then turns to an interpretation of an attached scatter diagram. The student reviews a computer analysis of the database and is asked to interpret the results. Once the student has correctly completed the consultation the expert system provides an overall learning assessment and a PowerPoint slide show case review presentation. Figure 5 shows a sample case and consultation for the simple regression analysis learning system used in this business application. The student is guided through a series of prompts regarding the case and explanations or clues are provided for each prompt. The consultation can be taken more than once since, among other things, some of the prompts are randomized. The use of this expert system is not limited to entering students requiring preparatory work but can also be used as a refresher for students over the course of their studies. For example, a student in a finance course who is struggling with the notion of risk betas could be directed to a consultative system showing how simple regression is used in the computation of risk betas.
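The prompt-and-feedback flow described above can be sketched as follows. The prompts, correct answers and feedback strings are hypothetical stand-ins, not the actual system's content:

```python
# Minimal sketch of a consultation dialogue for a vehicle-pricing
# regression case; prompts, answers and feedback are illustrative only.
STEPS = [
    ("Identify the target variable.", "price",
     "The target is what the model predicts: vehicle price."),
    ("Identify the predictor variable.", "age",
     "The predictor is the input used to explain price: vehicle age."),
]

def consult(answers):
    """Walk through each prompt, recording feedback for wrong answers."""
    transcript = []
    for (prompt, correct, feedback), given in zip(STEPS, answers):
        if given.strip().lower() == correct:
            transcript.append((prompt, "correct"))
        else:
            transcript.append((prompt, feedback))
    return transcript

for prompt, result in consult(["price", "mileage"]):
    print(prompt, "->", result)
```

A production system would add randomized prompts, hints, and a closing assessment, as described in the text.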

Simple Regression Case

The Murphy Company, an independent consulting firm specializing in used car analysis, has been asked to evaluate trends in used compact vehicle pricing by the National Vehicle Association (NVA). The NVA is interested in calibrating marketplace pricing with specific vehicle factors, such as vehicle age and condition. Murphy’s project engineer for the upcoming analysis has identified the age of the vehicle as the primary variable for predicting price. Sample data on 13 vehicles equipped with front wheel drive was collected. The age of the vehicles ranged between one and four years. The project engineer wishes to discover whether vehicle age is linearly related to price at the 5 percent significance level. The engineer also wishes to know how much of the variability in price is explained by vehicle age.

Figure 5 - Sample Expert System Case and Consultation (Regression)
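The analysis the Murphy case calls for can be sketched as follows. The 13 observations are made-up illustrative data (age in years, price in thousands of dollars), since the case's actual sample is not reproduced here:

```python
import math

# Hypothetical sample of 13 used vehicles: age (years) and price ($1,000s).
ages = [1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4]
prices = [14.2, 13.8, 12.5, 12.9, 12.1, 11.0, 10.8, 11.4,
          10.5, 9.6, 9.9, 9.2, 9.4]

n = len(ages)
mean_x, mean_y = sum(ages) / n, sum(prices) / n
sxx = sum((x - mean_x) ** 2 for x in ages)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, prices))
slope = sxy / sxx                       # least-squares slope
intercept = mean_y - slope * mean_x

# R^2: share of price variability explained by vehicle age.
sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(ages, prices))
sst = sum((y - mean_y) ** 2 for y in prices)
r_squared = 1 - sse / sst

# t statistic for the slope; compare with the critical value for
# n - 2 = 11 degrees of freedom at the 5 percent level (about 2.20).
se_slope = math.sqrt((sse / (n - 2)) / sxx)
t_stat = slope / se_slope
print(round(slope, 2), round(r_squared, 2), round(t_stat, 1))
```

With these illustrative data the slope is negative (price falls with age), and |t| well above 2.20 would lead the student to conclude that age and price are linearly related at the 5 percent level.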

In addition to the use of expert systems for business statistics, as outlined above, this type of learning assessment approach has also been used successfully in the field of accounting (McDuffie, 2006). Specifically, an auditing expert system was constructed to assist students in understanding and applying GAAP (generally accepted accounting principles). The results showed that students who used this system performed better on course examinations. Expert Systems have also been employed to assist students in mastering database design (Post, 2005). Specifically, the system provides the student with the capability to design databases and receive feedback in real time. Another agent, AutoTutor, engages in conversation with the student using three-dimensional interactive simulations (Graesser, 2005). This system has demonstrated a nearly one letter grade improvement in the learning process.

The screening process outlined in this paper is being implemented to assist students who require remedial statistical preparation in MBA programs. The current screening factors include GMAT scores and undergraduate GPA. The GMAT quantitative score, in particular, provides insight into a student’s statistical reasoning ability. Entering students who are identified as requiring additional statistical preparation via the screening process are assigned to a one-week intensive statistics boot camp prior to the start of the fall term. Approximately 25 percent of the students in the entering class are assigned to the boot camp. These students are exposed to a number of the learning support systems, including the expert system tutorials. The screening and resource support process is also used in the subsequent required core statistics class as well as classes that use statistical analysis (e.g., finance). In these applications student "weaknesses" are identified through the screening model as well as through testing and self-assessment. Specific statistical content material is then made available via the Internet to help correct specific student deficiencies (e.g., regression). In summary, the holistic approach of identifying and assisting marginal students consists of the following three elements: 1) identifying marginal students before they enter the program, 2) determining a student’s specific statistical analysis weakness, and 3) providing customized content to address these issues. Presently, a database is being developed for capturing student performance to further demonstrate the efficacy of this screening paradigm.

7. Conclusions

Many students entering residential MBA programs do not have an undergraduate degree in business or have been away from the classroom for some time. These conditions suggest the need for specialized learning systems to assist the student over the course of the program. The purpose of this paper is threefold: 1) to estimate student performance in a residential MBA program; 2) to outline a process for identifying specific learning support resources based on student backgrounds and capabilities; and 3) to illustrate the screening process in providing business statistics support content to students requiring additional preparation. An analysis of 412 alumni records for the period 2001 to 2004 shows that GMAT scores, incoming undergraduate GPA and work experience are useful in helping identify potentially marginal students entering residential MBA programs. The developed neural net screening model yielded an overall classification accuracy of 89 percent. The capability to identify marginal students opens up the opportunity to provide specialized learning support material. Additional work is planned to upgrade both the screening process and the database to enhance the delivery capability.

Web-centric learning support systems, like the ones outlined in this paper, are particularly attractive for assisting non-business majors in residential MBA programs. Specifically, enhanced interactive simulations and spreadsheet demonstrations offer the student a customized learning experience and a gateway for effectively applying statistical analysis throughout the MBA curriculum. Intelligent agents are one promising technology to support the customization of the learning process in management education. These tutorial systems offer learning content based on student test performance and background characteristics. Similar systems are being used throughout industry to improve both productivity and effectiveness and can play a role in enhancing student learning. The screening process outlined in this paper was used to identify entering students who required additional preparation in business statistics principles. The results show that there was no difference in final grades in subsequent core statistical and quantitative courses between students who took the required preparatory class and students who did not. The expert consultation system, introduced above, was used in both the preparatory statistics class as well as with students needing a refresher in business statistics for subsequent courses (e.g., decision sciences and finance).

In summary, a mastery of basic statistics is essential for success in MBA programs. To improve students' success in such programs, it is important to identify those who are likely to be marginal performers, determine the specific deficiencies in their undergraduate training, and address these deficiencies at the beginning of their graduate careers. Providing customized content via the Internet is one way to correct student deficiencies in statistics. The process of identifying students with deficiencies, and the methods used to rectify them, should be of interest to statistics educators.

References

Atwater, J.; Pittman, P. 2006. Facilitating systemic thinking in business classes, Decision Sciences Journal of Innovative Education, 4(2), 273.

Bennis, W.; O’Toole, J. 2005. How business schools lost their way. Harvard Business Review, 83(5), 96.

Bharath, R.; Drosen, J. 1994. Neural Net Computing, McGraw-Hill.

Camuffo, A.; Gerli, F. 2004. An integrated competency-based approach to management education, International Journal of Training & Development, 8(4), 240.

Cheung, B.; Hui, L.; Zhang, J.; Yiu, S. 2003. SmartTutor: An intelligent tutoring system in web-based adult education, Journal of Systems & Software, 68(1), 11.

Choi, Y.; Yoo, S., 2001. Text database discovery on the web: Neural net based approach, Journal of Intelligent Information Systems, 16(1), 5.

Clayton, G.; Cate, Y. 2004. Predicting MBA no-shows and graduation success with discriminate analysis, International Advances in Economic Research, 10(3), 235.

DeLone, W.; McLean, E., 2003. The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9.

Driver, M. 2002. Investigating the benefits of web-centric instruction for student learning: An exploratory study of an MBA course. Journal of Education for Business, 77(4), 238.

Dutton, J., Dutton, M. 2005. Characteristics and performance of students in an online section of business statistics. Journal of Statistics Education, 13(3). http://www.amstat.org/publications/jse/v13n3/dutton.html

Ghaoui, C.; Janvier, W. 2003. Interactive E-Learning. International Journal of Distance Education Technologies, 2(3), 26.

Gonzalez, J. 2002. Artificial neural networks: A new approach to predicting application behavior. Research in Higher Education, 43(2), 235.

Grandzol, J. 2004. Teaching MBA Statistics Online: A Pedagogically Sound Process Approach. Journal of Education for Business, 79(4), 237.

Graesser, A. 2005. Auto Tutor: An intelligent tutoring system with mixed-initiative dialogue. IEEE Transactions on Education, 48(4), 612.

Graves, W. 1999. The instructional management system cooperative: converting random acts of progress into global progress. Educom Review, 34(6), 32.

Hancock, T. 1999. The gender difference: Validity of standardized admission tests in predicting MBA performance. Journal of Education for Business, 75(2), 91.

Hardgrave, B.; Wilson, R.; Kent, K. 1994. Predicting graduate student success: A comparison of neural networks and traditional techniques. Computers & Operations Research, 21, 249.

Hedlund, J.; Wilt, J.; Nebel, K.; Ashford, S.; Sternberg, R. 2006. Assessing practical intelligence in business school admissions: A supplement to the graduate management admissions test. Learning & Individual Differences, 16(2), 101.

Hoefer, P.; Gould, J. 2000. Assessment of admission criteria for predicting students’ academic performance in graduate business programs. Journal of Education for Business, 75(4), 225.

Holsapple, C.; Lee-Post, A. 2006. Defining, assessing and promoting e-learning success: An information systems perspective. Decision Sciences Journal of Innovative Education, 4(1), 67.

Huang, H. 2002. Student perceptions in an online mediated environment, International Journal of Instructional Media, 29(4), 408.

Hwang, A.; Arbaugh, J. 2006. Virtual and traditional feedback-seeking behaviors: Underlying competitive attitudes and consequent grade performance, Decision Sciences Journal of Innovative Education, 4(1), 1.

Johne, M. 2003. Virtual environments, CMA Management, 76 (10), 28.

Johnson, D.G.; John, J.A. 2003. Use of Demonstration and Experiments in Teaching Business Statistics. Journal of Applied Mathematics & Decision Sciences, 7(2), 93.

Kathawala, Y.; Abdou, K.; Elmuti, D. 2002. The global MBA: a comparative assessment for its future. Journal of European Industrial Training, 26(1), 14.

Kezim, B.; Pariseau, S.; Quinn, F. 2005. Is Grade Inflation Related to Faculty Status? Journal of Education for Business, 80(6), 358.

Kleiman, L. 2007. Giving MBA programs the third degree. Journal of Management Education, 31(1), 81.

Lawrence, J.; Singhania, R. 2004. A study of teaching and testing strategies for a required statistics course for undergraduate business students. Journal of Education for Business, 79(6), 333.

Lundgren, T.; Nantz, K., 2003. Student attitudes towards Internet courses: A longitudinal study. Journal of Computer Information Systems, 43(3), 61.

Mangan, K., 2003. The new MBA: Business school professors shift programs to emphasize relevance and flexibility. The Chronicle of Higher Education, 12.

McAlevey, L. 2001. Evolution in Business Statistics Curricula: Learning from the "Making Statistics More Effective in Schools of Business" conferences. Journal of the Royal Statistical Society, 50(3), 321.

McDuffie, R.; Smith, S.; Murphy, L. 2006. Impact of an audit reporting expert system on learning performance: a teaching note. Accounting Education, 15(1), 189.

McLaren, C. 2004. A comparison of student persistence and performance in online and classroom business statistics experiences. Decision Sciences Journal of Innovative Education, 2(1), 10.

Mulekar, M. 2000. Internet Resources for AP Statistics Teachers. Journal of Statistics Education, 8(2). http://www.amstat.org/publications/jse/v8n2/mulekar.html

Naik, B. 2004. Using neural networks to predict MBA student success. College Student Journal, 38(1), 143.

Neo, T.; Neo, M. 2004. Classroom innovation: engaging students in interactive multimedia learning. Campus-Wide Information Systems, 21(3), 118.

Nguyen, N.; Cripps, A. 2001. Predicting housing value: A comparison of multiple regression analysis and artificial neural networks. The Journal of Real Estate Research, 22(3), 313.

Post, G. 2005. An Expert System Helps Students Learn Database Design. Decision Sciences Journal of Innovative Education, 3(2), 273.

Raffle, H.; Brooks, G. 2003. Using Monte Carlo software to teach abstract statistical concepts: A case study. Teaching of Psychology, 32(3), 193.

Richards-Wilson, S. 2006. What every business school needs to know about its master of business administration. Journal of Education for Business, 82(2), 95.

Shih, Y. 2003. A survey of distance education challenges and technologies. International Journal of Distance Education Technologies, 1(1), 1.

Sireci, S. 2006. Evaluating the predictive validity of graduate management admission test scores, Educational & Psychological Measurement, 65(2), 305.

Su, Y.T.; Liang, C.L. 2000. Using multivariate rank sum tests to evaluate effectiveness of computer applications in teaching business statistics. Journal of Applied Statistics, 27(3), 337.

Suanpang, P.; Petocz, P.; Kalceff, W. 2005. Student attitudes to learning business statistics: Comparison of online and traditional methods. Educational Technology & Society, 7(3), 9.

Wilson, R.; Hardgrave, B. 1995. Predicting graduate student success in an MBA program: Regression versus classification, Educational & Psychological Measurement, 55(2), 186.

Yang, B.; Lu, D. 2001. Predicting academic performance in management education: An empirical investigation of MBA success. Journal of Education for Business, 77(1), 15.


Owen P. Hall, P.E., Ph.D.
Graziadio School of Business and Management
Pepperdine University
6100 Center Drive
Los Angeles, CA 90045
ohall@pepperdine.edu

Ken Ko, Ph.D.
Graziadio School of Business and Management
Pepperdine University
6100 Center Drive
Los Angeles, CA 90045
Kenneth.Ko@pepperdine.edu

