Automated Scoring of Constructed‐Response Science Items: Prospects and Obstacles
Authors: Ou Lydia Liu, Chris Brew, John Blackmore, Libby Gerard, Jacquie Madhok, Marcia C. Linn
Institution: 1. Educational Testing Service; 2. Nuance; 3. University of California, Berkeley
Abstract: Content‐based automated scoring has been applied in a variety of science domains. However, many prior applications relied on simplified scoring rubrics that did not represent multiple levels of understanding. This study tested c‐rater™, a concept‐based tool for content‐based scoring, on four science items with rubrics designed to differentiate among multiple levels of understanding. The items showed moderate to good agreement with human scores. The findings suggest that automated scoring has the potential to score constructed‐response items with complex scoring rubrics, but in its current design it cannot replace human raters. This article discusses sources of disagreement and factors that could potentially improve the accuracy of concept‐based automated scoring.
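As a side note on how the human–machine agreement reported above is typically quantified: automated-scoring studies commonly report a weighted kappa statistic between human and machine scores. The short Python sketch below illustrates the idea using quadratic weighted kappa via scikit-learn; the score vectors are hypothetical placeholders, not data from this study.

    # Illustrative only: measuring human-machine score agreement with
    # quadratic weighted kappa. All scores below are hypothetical.
    from sklearn.metrics import cohen_kappa_score

    human_scores   = [0, 1, 2, 2, 3, 1, 0, 3, 2, 1]   # hypothetical human ratings
    machine_scores = [0, 1, 2, 3, 3, 1, 1, 3, 2, 2]   # hypothetical automated scores

    # weights="quadratic" penalizes large disagreements more heavily,
    # which suits ordinal rubrics with multiple levels of understanding.
    qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
    print(f"Quadratic weighted kappa: {qwk:.2f}")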
Keywords: automated scoring; constructed‐response items; c‐rater™; science assessment