Exclusive Interview with Dr. Thomas C. Reeves, Author of “Interactive Learning Systems Evaluation”

September 30, 2003

Dr. Saba: What prompted you to write a book about evaluation?

Dr. Reeves: While I was in the Ph.D. program in the Area of Instructional Technology at Syracuse University back in the 1970s, I was very fortunate to study evaluation with Dr. Edward F. Kelly, who was the Director of Evaluation for the Center for Instructional Development (CID) at that time. As my mentor and dissertation advisor, Dr. Kelly helped me develop a passion for evaluation, a topic that still seems to turn most people off. He was a wonderful teacher and role model, and I owe him far more than I can ever express.

After completing my Ph.D. in 1979, I undertook several consultancies and short-term jobs as an evaluator at places such as the New York State Office of Mental Health and the University of Maryland University College before joining the faculty at The University of Georgia (UGA) in 1982. During those years, I started collecting “war stories” from the evaluation “trenches,” and later I began to share those stories at evaluation workshops at various professional conferences.

In the early 1990s, I was very fortunate to start doing some work in Australia where I reestablished contact with Professor John Hedberg, with whom I had gone to graduate school at Syracuse University. John had established an excellent reputation as a multimedia designer and evaluator “down under” and elsewhere. Soon, we began to offer workshops together, and in the process, developed quite an elaborate set of evaluation tools, templates, guidelines, and so forth. At the same time, both John and I were teaching graduate courses in evaluation at our respective institutions, the University of Wollongong and UGA. We shared our frustration that there was no book available that adequately captured the nature of the evaluation process within the context of instructional design and technology. Meanwhile, our students, workshop attendees, and others kept asking us, “Why don’t you write an evaluation book?”

We actually began writing early drafts of the book in 1993, ten years before it was published, but we didn’t get heavily engaged in it until about four years ago. As we wrote the book, we tested iterations with classes of graduate students in Australia and the USA, and over time, we hope that we have honed it into a worthwhile volume. The testimonials we have received from reviewers such as Allison Rossett from San Diego State, Stanley Varnhagen from the University of Alberta, Carmel McNaught from the Chinese University of Hong Kong, Ron Oliver from Edith Cowan University, Joe Henderson from Dartmouth Medical School, and Curt Bonk from Indiana University have been very encouraging.

Dr. Saba: You collaborated with Dr. Hedberg in writing this book. Did this collaboration take place primarily online?

Dr. Reeves: In the early days, we wrote pieces separately, and edited them together whenever I would visit Australia or John would visit the States. I have made about ten trips “down under” since 1990, and John comes to North America at least once a year. But during the last two years of writing, we collaborated online for the most part, sending chapter versions back and forth. This introduced some interesting challenges involving differences in document formats (A4 versus US Letter) and spelling checkers (“organise” versus “organize”). We wrote everything on Apple Macintosh computers, wonderful machines for this type of collaborative work.

We had so much invaluable help along the way. Dr. Jan Herrington, formerly at Edith Cowan University, and now at the University of Wollongong, helped a tremendous amount with formatting as well as content. My wonderful wife, Dr. Trisha Reeves, carried the lion’s share of the proofreading. She has a keen eye for detail like no other. Additional colleagues who helped included Bill Aggen, Shirley Alexander, Christine Brown, Kent Gustafson, Barry Harper, Joe Henderson, Jim King, Geraldine Lefoe, Mary Marlino, Carmel McNaught, Mary L. Miller, Mary R. Miller, Jim Okey, Ron Oliver, Geoff Ring, Murray Tillman, and Stanley Varnhagen. I am sure I am forgetting several important contributors. Of course, we were helped enormously by our graduate students and workshop attendees who compelled us to write this book in the first place and who provided us with invaluable feedback along the way.

Dr. Saba: Who is the primary audience for the book?

Dr. Reeves: John and I share a belief that evaluation activities are critical to the effective development of interactive learning systems such as e-learning and other online learning approaches. We believe that evaluation is often overlooked or shortchanged in the haste to generate interactive products and programs and deliver them on time. The few evaluations that are done are rarely reported in time to influence critical decisions about design, production, and implementation issues. We think that evaluation should guide the creative development process by providing timely and insightful feedback about e-learning designs and the quality of their implementation.

The book is structured around a model of six facets of evaluation linked to specific stages in the design and development of interactive educational products such as multimedia DVDs, Web-based training, electronic performance support systems, and e-learning solutions. We have sought to link specific design activities to evaluation procedures and tools that will help the novice (as well as experienced) evaluator plan, conduct, and report better evaluations. Many of the tools can be used for multiple functions. A Web site associated with this book (http://www.evaluateitnow.com) provides downloadable tools and other links related to evaluation.

The book is already being used as a textbook in evaluation courses in the USA, Canada, Australia, and elsewhere. We have had offers to translate it into languages other than English, a task that we hope to pursue soon. We think that people in business and industrial training will find much of value in this book as well as anyone doing instructional design and development in academe. Distance or flexible learning managers should find useful guidance for contracting external evaluators in this book. We don’t have any notions of making any serious profit from this book, but we sincerely (and humbly) hope to improve the quality of evaluations wherever online learning is developed or used.

Dr. Saba: What are some of the highlights of the book? What will the reader learn from the book?

Dr. Reeves: The first chapter of the book introduces our rationale for evaluation: to inform decision-making at all stages of instructional product development.

Chapter Two clarifies the evaluation process by describing its theoretical roots and providing an overview of its historical development.

The third chapter links the roles that evaluation can play with each stage of the interactive product development process.

Chapter Four is all about planning and managing evaluations.

Chapter Five describes review, the first stage in our six-stage model. Review helps refine the rationale for why an interactive product should be produced in the first place.

Chapter Six describes needs assessment, a process that helps clarify the project objectives and design parameters that guide the instructional development process.

Chapter Seven focuses on formative evaluation — evaluation intended to enhance a product as it is being developed. We pay a lot of attention to usability, that is, the cognitive demands of the product interface.

Chapter Eight deals with effectiveness evaluation, including strategies for evaluating the interactive learning system in action to determine how it is working in the intended context.

Chapter Nine is all about impact evaluation. This function examines the integration of an interactive learning system within an organization’s structure and reveals how a product supports the organization’s goals.

Chapter Ten covers maintenance evaluation, a neglected area that should be a critical part of any product’s renewal cycle.

Chapter Eleven focuses on several issues about how evaluation studies should be reported to clients and other stakeholders.

Finally, Chapter Twelve seeks to explore how evaluators might contribute to the overall advancement of interactive learning systems design in substantial ways beyond the immediate context of any given project.

Each chapter includes a comprehensive list of references, and there are many tools and links to external resources embedded in the pages.

Dr. Saba: Are there major differences in evaluating online learners and those who are present in a classroom?

Dr. Reeves: Evaluating online learning has both advantages and disadvantages compared with evaluating traditional classroom instruction. But first, let me clarify how we use the terms assessment and evaluation in this book. The two terms are often used synonymously, which leads to a great deal of confusion, so we use them to mean two very different things. Both evaluation and assessment involve the collection of information to make decisions. However, evaluation focuses on things, e.g., programs, products, and projects (their effectiveness, impact, etc.), whereas assessment focuses on people, e.g., their aptitudes, attitudes, or achievement. Our book is primarily about evaluation, but assessment is an activity often used within an evaluation.

Within the context of online learning, a great deal of the data collection required for both assessment and evaluation can be automated. You can record learner paths through an interactive program, their choices among any options presented, their quiz and test scores, etc. Most learning management systems (LMS) include audit trail functions, and you can even make your online data collection proactive by displaying questions on the screen that inquire about the learner’s reactions to various aspects of the online learning program at pre-specified intervals or points.
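To make the idea of automated audit trails concrete, here is a minimal sketch in Python. All names and structure are my own invention for illustration; real LMS audit trails differ widely and the book does not prescribe any particular implementation. The recorder timestamps each learner event, whether a navigation choice, a quiz score, or an answer to one of the proactive on-screen reaction prompts mentioned above:

```python
import json
import time


class AuditTrail:
    """Hypothetical event logger for an interactive learning program.

    Sketches the kind of automated data collection an LMS audit trail
    performs; field names here are assumptions, not a standard.
    """

    def __init__(self, learner_id):
        self.learner_id = learner_id
        self.events = []

    def record(self, event_type, detail):
        # Timestamp every navigation choice, quiz score, or reaction prompt.
        self.events.append({
            "learner": self.learner_id,
            "type": event_type,   # e.g. "page_view", "quiz_score", "reaction"
            "detail": detail,
            "time": time.time(),
        })

    def export(self):
        # Serialize the trail for later evaluation analysis.
        return json.dumps(self.events)


trail = AuditTrail("learner-42")
trail.record("page_view", {"page": "module-1/intro"})
trail.record("quiz_score", {"quiz": "module-1", "score": 8, "max": 10})
trail.record("reaction", {"prompt": "How useful was this module?", "answer": 4})
print(len(trail.events))  # 3 recorded events
```

The same `record` call could be triggered at pre-specified points in the program to collect learner reactions proactively, as described above.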

That said, the analysis of audit trail data within complex interactive programs is challenging, and may sometimes require more inference than classroom observation does. When learners can go wherever they want in any sequence, identifying interpretable paths without any direct input from the learners themselves is highly inferential. This problem is even more complex when the World Wide Web is used for the delivery of interactive learning. It is technically quite easy to track wherever a user goes on the Web, but interpreting such data involves a great deal of subjectivity. If data collection is not carefully planned and managed, the evaluator can drown in data from which it is difficult to extract meaningful information to guide decision-making.
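A small sketch illustrates both the ease of mechanical path analysis and its interpretive limits. Assuming (my assumption, not the book's) that each learner's trail has been reduced to an ordered list of pages visited, counting the most frequent page sequences is trivial; deciding what a frequent sequence *means* still requires human judgment:

```python
from collections import Counter


def common_paths(trails, length=3):
    """Count the most frequent page sequences of a given length.

    `trails` maps learner id -> ordered list of pages visited.
    A hypothetical sketch: real audit-trail data is far messier,
    and frequency counts alone do not explain learner intent.
    """
    counts = Counter()
    for pages in trails.values():
        # Slide a window of `length` pages over each learner's path.
        for i in range(len(pages) - length + 1):
            counts[tuple(pages[i:i + length])] += 1
    return counts.most_common()


trails = {
    "a": ["intro", "lesson1", "quiz1", "lesson2"],
    "b": ["intro", "lesson1", "quiz1"],
    "c": ["intro", "lesson2", "quiz1"],
}
print(common_paths(trails, length=2)[0])
# -> (('intro', 'lesson1'), 2)
```

The mechanics are easy; the subjectivity Dr. Reeves describes enters when the evaluator must decide whether a frequent sequence reflects good design, confusion, or mere habit.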

Dr. Saba: Who is the publisher and where can we get more information about this book?

Dr. Reeves: The publisher is Larry Lipsitz and his associates at Educational Technology Publications, Inc. John and I appreciate Larry’s investment in making our book available. You can find out more about ordering the book at their Website at: http://bookstoread.com/etp/. A PDF flyer about the book can be downloaded at: http://bookstoread.com/etp/interactive.pdf, and the official support site for the book is: http://www.evaluateitnow.com.

Dr. Thomas C. Reeves is professor of instructional technology at The University of Georgia where he teaches program evaluation, multimedia design, and research courses. Since receiving his Ph.D. at Syracuse University in 1979, he has developed and evaluated numerous interactive multimedia programs for both education and training. In addition to numerous presentations and workshops in the USA, he has been an invited speaker in other countries including Australia, Brazil, Bulgaria, Canada, China, England, Finland, Malaysia, New Zealand, Peru, Portugal, Russia, Singapore, South Africa, Sweden, Switzerland, and Taiwan. He is a past president of the Association for the Development of Computer-based Instructional Systems (ADCIS) and a former Fulbright Lecturer. In 1995, he was selected as one of the “Top 100” people in multimedia by Multimedia Producer magazine, and from 1997 to 2000, he was the editor of the Journal of Interactive Learning Research. In 2003, he was the first person to receive the AACE Fellowship Award from the Association for the Advancement of Computing in Education.