Tuesday 12pm, 28 November 2017
Improving Mobile MOOC Learning via Implicit Physiological Signal Sensing
PhD - University of Pittsburgh
Massive Open Online Courses (MOOCs) are becoming a promising solution for delivering high-quality education on a large scale at low cost. Despite their great potential, today’s MOOCs also suffer from challenges such as low student engagement, lack of personalization, and most importantly, lack of direct, immediate feedback channels from students to instructors. My research explores the use of physiological signals implicitly collected via a "sensorless" approach as a rich feedback channel to understand, model, and improve learning in mobile MOOC contexts.
I first demonstrate AttentiveLearner, a mobile MOOC system which captures learners' physiological signals implicitly during learning on unmodified mobile phones. AttentiveLearner uses on-lens finger gestures for video control and monitors learners’ photoplethysmography (PPG) signals based on the fingertip transparency change captured by the back camera. Through a series of usability studies and follow-up analyses, I show that the tangible video control interface of AttentiveLearner is intuitive to use and easy to operate, and the PPG signals implicitly captured by AttentiveLearner can be used to infer both learners’ cognitive states (boredom and confusion levels) and divided attention (multitasking and external auditory distractions).
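The talk does not describe the signal-processing pipeline in detail, but the core idea of camera-based PPG sensing can be sketched as follows: with a fingertip covering the back camera lens, each video frame is reduced to a single brightness sample, and the heart-rate-driven oscillation in that brightness trace is recovered in the frequency domain. The function names, the red-channel choice, and the 0.7–3.0 Hz band are illustrative assumptions, not the system's published implementation.

```python
import numpy as np

def ppg_from_frames(frames):
    """Reduce each camera frame (H x W x 3 array) to one PPG sample:
    the mean red-channel intensity of the fingertip covering the lens.
    (Channel choice is an assumption; blood volume changes modulate
    how much light passes through the fingertip.)"""
    return np.array([frame[:, :, 0].mean() for frame in frames])

def estimate_heart_rate(ppg, fps):
    """Estimate heart rate in bpm as the dominant frequency of the
    PPG trace within a plausible 0.7-3.0 Hz (42-180 bpm) band."""
    ppg = ppg - ppg.mean()                      # remove DC offset
    spectrum = np.abs(np.fft.rfft(ppg))         # magnitude spectrum
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)      # physiological range
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0
```

In practice a real pipeline would also need motion-artifact rejection and windowed estimation; this sketch only shows why no extra sensor is required, which is the basis of the "sensorless" claim.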
Building on AttentiveLearner, I design, implement, and evaluate a novel intervention technology, Context and Cognitive State triggered Feed-Forward (C2F2), which infers and responds to learners' boredom and disengagement events in real time via a combination of PPG-based cognitive state inference and learning topic importance monitoring. C2F2 proactively reminds a student of important upcoming content (feed-forward interventions) when disengagement is detected. A 48-participant user study shows that C2F2 improves learning gains by 20.2% on average compared with a non-interactive baseline system, and that it is especially effective for bottom performers.
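The triggering rule that combines the two signals can be illustrated with a minimal sketch. The thresholds, score ranges, and function name below are hypothetical; the talk only states that interventions fire when inferred disengagement coincides with important upcoming content.

```python
def should_feed_forward(engagement, upcoming_importance,
                        engagement_threshold=0.4, importance_threshold=0.7):
    """Fire a feed-forward reminder only when the learner appears
    disengaged (low PPG-inferred engagement score) AND the next
    segment of the lecture is marked as important.
    All scores are assumed to be normalized to [0, 1]; the threshold
    values are illustrative, not from the published system."""
    return (engagement < engagement_threshold
            and upcoming_importance >= importance_threshold)
```

Gating on both conditions is what makes the intervention "feed-forward" rather than a generic attention alert: a disengaged learner is interrupted only when something worth re-engaging for is about to appear.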
Xiang Xiao recently received his Ph.D. in Computer Science from the University of Pittsburgh under the supervision of Dr. Jingtao Wang. His research interests are Human-Computer Interaction, especially Mobile Interfaces, Intelligent Interactive Systems, and Learning Technologies. His PhD research focused on using physiological signals implicitly collected via the built-in camera of mobile phones to understand, model, and improve learning in mobile Massive Open Online Courses (MOOCs). He recently joined the Google Accessibility team, working on mobile accessibility for blind and motor-impaired users.