Multimodal data as a means to understand the learning experience
Institution:1. Department of Computer and Information Science, Norwegian University of Science and Technology (NTNU), Sem Sælands vei 7-9, 7491, Trondheim, Norway;2. School of Computing & Information Systems, University of Melbourne, Room 9.03, Doug McDonell Building 168, VIC-3010 Parkville, Australia;3. University of Agder (UiA), Kristiansand, Norway
Abstract:Most work in the design of learning technology uses click-streams as its primary data source for modelling and predicting learning behaviour. In this paper we set out to quantify what advantages, if any, physiological sensing techniques provide for the design of learning technologies. We conducted a lab study with 251 game sessions and 17 users focusing on skill development (i.e., a user's ability to master complex tasks). We collected click-stream data, as well as eye-tracking, electroencephalography (EEG), video, and wristband data during the experiment. Our analysis shows that traditional click-stream models achieve a 39% error rate in predicting learning performance (18% when we perform feature selection), while for fused multimodal data the error drops to 6%. Our work highlights the limitations of standalone click-stream models and quantifies the expected benefits of using a variety of multimodal data coming from physiological sensing. Our findings help shape the future of learning technology research by pointing out the substantial benefits of physiological sensing.
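The fusion strategy the abstract alludes to can be illustrated with a minimal sketch of early (feature-level) fusion: per-session feature blocks from each modality are concatenated before fitting a single predictor. All names, feature dimensions, and the synthetic data below are illustrative assumptions; the paper's actual features and models are not specified in this abstract.

```python
# Hypothetical sketch of early multimodal fusion: concatenating click-stream
# features with physiological features (EEG, eye-tracking, wristband) per
# session before fitting one predictor. Data and dimensions are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_sessions = 251  # matches the number of game sessions in the study

# Synthetic per-session feature blocks (dimensions are assumptions)
clickstream  = rng.normal(size=(n_sessions, 12))
eeg          = rng.normal(size=(n_sessions, 8))
eye_tracking = rng.normal(size=(n_sessions, 6))
wristband    = rng.normal(size=(n_sessions, 4))

# Early fusion: concatenate each modality's features for every session
fused = np.hstack([clickstream, eeg, eye_tracking, wristband])

def error_rate(X, y, w):
    """Misclassification rate of a linear scorer sign(Xw) against labels y."""
    pred = (X @ w > 0).astype(int)
    return float(np.mean(pred != y))

y = rng.integers(0, 2, size=n_sessions)  # toy binary skill label per session
# Least-squares fit of a linear scorer on the fused feature matrix
w_fused = np.linalg.lstsq(fused, y - 0.5, rcond=None)[0]
err = error_rate(fused, y, w_fused)
print(fused.shape, round(err, 3))
```

In practice one would compare the error of a model trained on `clickstream` alone against the same model trained on `fused` (with cross-validation over sessions), which is the comparison the reported 39%/18% vs. 6% figures describe.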
Keywords: Human learning; Multimodal learning analytics; User-generated data; Skill acquisition; Multimodal data; Machine learning
This article has been indexed by ScienceDirect and other databases.