What is “Optimal Learning”?

Optimal Efficiency Threshold

Optimal learning is a scientific method for arriving at the best adaptive instructional system: a computer learning system that adjusts to student responses. Since developing this methodology and theory in 2004 with my advisor John Anderson at Carnegie Mellon University, I and the Optimal Learning Lab have been hard at work trying to generalize optimal learning methods across multiple content areas, such as facts about anatomy and physiology, mathematics, foreign language learning, and categorizing objects, people, and ideas. Conveniently, the methods are easy to describe because there are three steps.

 

Step one, create a student model. Suppose a college student is learning anatomy (an actual subject area that we are studying in the lab). Since optimal learning methods examine all the possible practice choices for the student and end up choosing one, we begin by looking at the student’s correct and incorrect answers to questions about anatomy that the student would have been expected to learn from a textbook or course. These data are used to create a statistical model of the effect of prior experience and practice. During future practice, we use this statistical model to compute the expected probability that the student will answer each item correctly, update it on every practice attempt, and use this measure of current levels of learning to make pedagogical choices.
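
To make this concrete, here is a minimal sketch of a student model in Python: a simple logistic regression that predicts the probability of a correct answer from counts of an item’s prior successes and failures. The data and fitted coefficients are purely illustrative, and the lab’s actual models are considerably more detailed.

```python
# Minimal sketch of a student model: a logistic regression that predicts
# the probability of a correct answer from counts of prior successes and
# failures on an item. Data here are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical practice history: one row per attempt.
# Columns: prior successes on this item, prior failures on this item.
X = np.array([[0, 0], [1, 0], [1, 1], [2, 1], [3, 1], [0, 1], [2, 2], [4, 1]])
# Observed outcomes for those attempts (1 = correct, 0 = incorrect).
y = np.array([0, 1, 0, 1, 1, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# During later practice, the model returns an up-to-date probability of a
# correct response for an item, given that item's accumulated history.
p_correct = model.predict_proba([[2, 1]])[0, 1]
print(f"Predicted probability correct after 2 successes, 1 failure: {p_correct:.2f}")
```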

 

Not only are we trying to optimize learning for the student, but we are also, at the same time, using the information to optimize the learning system itself. Before students ever see the system, we also use this model to compute the best probability at which to present items (computed in step two) and build that threshold into the system. The student model is highly detailed, as detailed as a prior dataset can support without “overfitting.” An analogy to overfitting in everyday life might be buying an outfit that looks great in the mirror, as long as one doesn’t move around, bend over, or gain any weight. It may fit perfectly from one angle but cannot flexibly adapt to future circumstances. A statistical model that is too tightly fitted to the current data may not fit well with future data that have somewhat different properties. Thus, guarding against overfitting is an important aspect of this kind of research.
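
As an illustration of one common safeguard (not necessarily the lab’s exact procedure), the sketch below fits the same illustrative model on part of the data and checks its predictions on a held-out part it never saw; a large gap between the two losses is the statistical version of the outfit that only fits in the mirror.

```python
# Sketch of a common overfitting check: fit on part of the prior data and
# evaluate predictions on data the model has never seen. Data are the same
# illustrative values as in the sketch above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

X = np.array([[0, 0], [1, 0], [1, 1], [2, 1], [3, 1], [0, 1], [2, 2], [4, 1]])
y = np.array([0, 1, 0, 1, 1, 0, 1, 1])

X_fit, X_out, y_fit, y_out = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_fit, y_fit)

# A model that predicts its own fitting data far better than held-out data
# is too tightly tuned to the sample it was built from.
fit_loss = log_loss(y_fit, model.predict_proba(X_fit)[:, 1], labels=[0, 1])
out_loss = log_loss(y_out, model.predict_proba(X_out)[:, 1], labels=[0, 1])
print(f"fitting-data log loss: {fit_loss:.2f}, held-out log loss: {out_loss:.2f}")
```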

 

In step two, we analyze the model. Once we have this model, we use methods adapted from microeconomics to extract the best difficulty of practice by balancing costs and gains. In the early work with the method, the model balanced the benefits of distributed practice (which favor selecting items with a low probability of recall that have not been seen lately, avoiding massed practice in which repetitions are close together in time) against the benefits of high probability (which make practice quicker and thus less costly in terms of time). More recently, in work with Meng Cao, we have generalized the method to predict that learning is an inverted-U-shaped function of the difficulty of practice: some middle range of difficulty (not too hard and not too easy, sometimes referred to as the Goldilocks principle) is predicted to be optimal for learning. Simply put, easier (higher-probability) items cost less time because students proceed through them quickly with fewer errors, while harder (lower-probability) items tend to produce larger learning gains when practiced; balancing these costs and gains yields an estimated optimal efficiency threshold (OET).
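
The toy calculation below illustrates the idea. It treats efficiency as expected learning gain per unit of practice time and scans across success probabilities; the particular gain and time-cost functions are assumptions chosen only to show the inverted-U shape, not the fitted functions from our models.

```python
# Sketch of the step-two analysis: efficiency = expected learning gain per
# trial divided by expected time per trial, evaluated across success
# probabilities. Gain and time functions are illustrative assumptions.
import numpy as np

p = np.linspace(0.05, 0.95, 181)   # candidate probabilities of a correct answer

# Assumed gain: learning accrues mainly on successful retrievals (prob. p),
# and a harder retrieval (lower p) strengthens memory more when it succeeds.
gain = p * (1.0 - p)

# Assumed time cost (seconds per trial): errors and corrective feedback are slow.
time_cost = 3.0 + 5.0 * (1.0 - p)

efficiency = gain / time_cost       # learning per second of practice
oet = p[np.argmax(efficiency)]      # the optimal efficiency threshold (OET)
print(f"Efficiency peaks at p ≈ {oet:.2f} (an inverted-U, the Goldilocks zone)")
```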

 

Step three, the final step, is to incorporate this model and the optimal efficiency threshold (OET) into an adaptive system as the control module for presenting events and lessons for learning. While the student is learning, this control module uses the model and each student’s actual history of performance to make real-time inferences about their current knowledge of all the content in the adaptive system. These real-time estimates of how much the student has learned, their current probabilities of recall, are compared to the OET to inform the algorithm that optimizes the delivery of the next item for practice (that is, the item closest to the OET at that time). This process repeats, choosing whichever item, above or below the OET, is nearest the threshold in order to maximize learning. As old items become well learned, never-seen items are chosen because the model includes an estimate of the initial difficulty of each item.
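
A schematic version of this control rule, with made-up anatomy items and probability estimates, might look like the following; the OET value and the initial-difficulty estimate for the unseen item are illustrative, not outputs of the lab’s actual system.

```python
# Sketch of the step-three control rule: on each trial, present the item
# whose current predicted probability of recall is closest to the OET.
OET = 0.62   # threshold from the illustrative step-two analysis above

# Current model estimates: practiced items have updated probabilities;
# the never-seen item starts at the model's initial-difficulty estimate.
predicted_recall = {
    "scapula":   0.91,   # well learned, so probably skipped for now
    "clavicle":  0.55,   # slightly below the threshold
    "sternum":   0.70,   # slightly above the threshold
    "manubrium": 0.20,   # never seen: initial-difficulty estimate
}

def next_item(predictions: dict[str, float], threshold: float) -> str:
    """Return the item whose predicted recall probability is nearest the threshold."""
    return min(predictions, key=lambda item: abs(predictions[item] - threshold))

choice = next_item(predicted_recall, OET)
print(f"Next item to practice: {choice}")   # "clavicle" (|0.55 - 0.62| = 0.07)
```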

 

While this process is not perfect, it reflects a highly principled and general approach that can be expected to be accurate to the extent that the model itself is accurate and analyzed correctly. Recent work builds on this foundation to improve the accuracy of the model and of the methods for analyzing what is optimal, and thus to improve learning for students. A recent experiment clearly demonstrated the existence of the OET and matched the model predictions well. This research continues to help us make progress in creating ever more optimized adaptive practice systems, helping students learn content more efficiently and robustly, fulfilling the mission of the Optimal Learning Lab.

 


Philip I. Pavlik, Jr. received a B.A. degree in economics from the University of Michigan, Ann Arbor, MI, USA, in 1992, and a Ph.D. degree in cognitive psychology from Carnegie Mellon University (CMU), Pittsburgh, PA, USA, in 2005. From 2006 to 2011, he was a Postdoctoral Fellow and Systems Scientist at CMU's Human-Computer Interaction Institute and a Member of the Pittsburgh Science of Learning Center, Pittsburgh. Since 2011, he has been with the University of Memphis, Memphis, TN, USA, where he was first an Assistant Professor with the Institute for Intelligent Systems and Department of Psychology before being promoted to Associate Professor in 2017. He has published more than 50 articles. His research interests include memory, learning, practice, forgetting, spacing effects, adaptive personalization, mathematical models of learning, and classroom applications of learning technology. Dr. Pavlik has been an Associate Editor for the IEEE Transactions on Learning Technology. He is a Member of the International Society of Educational Data Mining, the International Society of Artificial Intelligence in Education, and the Psychonomic Society.
