
Vincent Aleven
Professor of Human-Computer Interaction
Carnegie Mellon University
This workshop is a companion to the keynote. In it, I present a case study of how the Cognitive Tutor Authoring Tools (CTAT) were used to create a tutoring system for middle-school mathematics and then to improve it using insights from student log data. In particular, I illustrate how analysis of log data can help improve the tutor’s mastery learning mechanism, so that individual learners get just the right amount of practice. These analyses are facilitated by the fact that CTAT is part of a larger infrastructure for R&D in adaptive learning technologies, such as intelligent tutoring systems and educational games. This infrastructure supports not only easy authoring of learning technologies but also their data-driven iterative improvement. Besides CTAT, the infrastructure includes Tutorshop, a learning management system for CTAT-built tutoring systems, and DataShop, a large open repository of log data from intelligent tutoring systems and other educational technologies. The process of data-driven improvement illustrated in the workshop is a central tenet of the emerging “learning engineering” discipline.

Roger Azevedo
Professor
Department of Learning Sciences and Educational Research
University of Central Florida
Multimodal trace data collected during students’ real-time interactions with advanced learning technologies (ALTs), such as intelligent tutoring systems, simulations, hypermedia, multimedia, serious games, collaborative systems, and immersive virtual learning environments, are transforming our understanding of self-regulated learning (SRL) and the design of future learning technologies. This workshop focuses on the measurement, detection, modeling, analysis, inference, and understanding of complex cognitive, affective, metacognitive, and motivational self-regulatory processes. By presenting and discussing different kinds of multimodal trace data of SRL during learning, reasoning, and problem solving across different tasks and ALTs, the workshop addresses the following questions: (1) What do multimodal trace data (from log files, eye tracking, human-machine interactions, facial expressions of emotion, physiological sensors, discourse between multiple human and artificial agents, verbal protocols, and screen recordings of human-machine interactions) reveal about the nature of the underlying cognitive, affective, metacognitive, and motivational self-regulatory processes across tasks, domains, contexts, and ALTs? (2) What challenges do these data pose for current conceptual, theoretical, methodological, and analytical issues in SRL research? (3) How can we use the answers to (1) and (2) to design future ALTs capable of supporting and fostering students’ SRL across contexts and ALTs by providing individualized, intelligent scaffolding and feedback?