Similar Documents
20 similar documents retrieved.
1.
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and classification consistency and accuracy under three item response theory (IRT) frameworks: unidimensional IRT (UIRT), simple structure multidimensional IRT (SS-MIRT), and bifactor multidimensional IRT (BF-MIRT) models. Illustrative examples are presented using data from three mixed-format exams with various levels of format effects. In general, the two MIRT models produced similar results, while the UIRT model resulted in consistently lower estimates of reliability and classification consistency/accuracy indices compared to the MIRT models.
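For orientation, the conditional standard error of measurement and marginal reliability referenced above are, under a generic unidimensional IRT model, commonly expressed through the test information function. The sketch below is illustrative only and does not reproduce the article's estimators for composite raw and scale scores.

```latex
% Generic IRT-based conditional SEM and marginal reliability (illustrative)
\[
\mathrm{CSEM}(\theta) = \frac{1}{\sqrt{I(\theta)}}, \qquad
I(\theta) = \sum_{j=1}^{J} I_j(\theta), \qquad
\rho \approx \frac{\sigma^{2}_{\theta}}{\sigma^{2}_{\theta} + \mathrm{E}\!\left[\mathrm{CSEM}^{2}(\theta)\right]}
\]
```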

2.
When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA and CC nonparametrically by replacing the role of the parametric IRT model in Lee's classification indices with a modified version of Ramsay's kernel-smoothed item response functions. The performance of the nonparametric CA and CC indices is tested in simulation studies under various conditions with different generating IRT models, test lengths, and ability distributions. The nonparametric approach to CA often outperforms Lee's method and Livingston and Lewis's method, showing robustness to nonnormality in the simulated ability. The nonparametric CC index performs similarly to Lee's method and outperforms Livingston and Lewis's method when the ability distributions are nonnormal.
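As a rough illustration of the kernel-smoothing idea, the sketch below implements a generic Nadaraya–Watson (Gaussian-kernel) estimate of an item response function against provisional ability estimates. It is not the authors' modified version of Ramsay's smoother; the function name, bandwidth, and inputs are assumptions for illustration.

```python
import numpy as np

def kernel_smoothed_irf(theta_grid, theta_hat, item_responses, bandwidth=0.3):
    """Kernel estimate of P(X_j = 1 | theta) on a grid of evaluation points.

    theta_hat: provisional ability estimates (e.g., normalized ranks of total scores).
    item_responses: 0/1 responses to one item, same length as theta_hat.
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    item_responses = np.asarray(item_responses, dtype=float)
    probs = np.empty(len(theta_grid))
    for g, t in enumerate(theta_grid):
        # Gaussian kernel weights centered at the evaluation point t
        w = np.exp(-0.5 * ((theta_hat - t) / bandwidth) ** 2)
        probs[g] = np.sum(w * item_responses) / np.sum(w)
    return probs
```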

3.
The recently developed multilevel IRT models (Fox & Glas, 2001, 2003) have been shown to be very useful for analyzing relationships between observed variables on different levels that contain measurement error. Model parameter estimates and their standard deviations are estimated concurrently, taking account of measurement error in the observed variables. The multilevel IRT models are particularly useful in the analysis of school effectiveness research data, since hierarchically structured educational data are subject to error. By re-examining some school effectiveness studies, the basic aspects of this new model and the consequences of measurement error are shown.

4.
In educational assessment, overall scores obtained by simply averaging a number of domain scores are sometimes reported. However, simply averaging the domain scores ignores the fact that different domains have different score points, that scores from those domains are related, and that at different score points the relationship between overall score and domain score may be different. To report reliable and valid overall scores and domain scores, I investigated the performance of four methods using both real and simulation data: (a) the unidimensional IRT model; (b) the higher-order IRT model, which simultaneously estimates the overall ability and domain abilities; (c) the multidimensional IRT (MIRT) model, which estimates domain abilities and uses the maximum information method to obtain the overall ability; and (d) the bifactor general model. My findings suggest that the MIRT model not only provides reliable domain scores, but also produces reliable overall scores. The overall score from the MIRT maximum information method has the smallest standard error of measurement. In addition, unlike the other models, there is no linear relationship assumed between overall score and domain scores.
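To make the contrast with simple averaging concrete, one generic alternative is a precision-weighted composite of domain score estimates, sketched below. This is an illustrative scheme only, not necessarily equivalent to the MIRT maximum information method compared in the study; the weights and standard errors are notation introduced here.

```latex
% Simple average versus a generic precision-weighted composite (illustrative)
\[
\bar{\theta}_{\text{avg}} = \frac{1}{K}\sum_{k=1}^{K} \hat{\theta}_k, \qquad
\hat{\theta}_{\text{w}} = \frac{\sum_{k=1}^{K} w_k \hat{\theta}_k}{\sum_{k=1}^{K} w_k},
\quad w_k = \frac{1}{\mathrm{SE}^{2}(\hat{\theta}_k)}
\]
```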

5.
The posterior predictive model checking method is a flexible Bayesian model-checking tool and has recently been used to assess fit of dichotomous IRT models. This paper extended previous research to polytomous IRT models. A simulation study was conducted to explore the performance of posterior predictive model checking in evaluating different aspects of fit for unidimensional graded response models. A variety of discrepancy measures (test-level, item-level, and pair-wise measures) that reflected different threats to applications of graded IRT models to performance assessments were considered. Results showed that posterior predictive model checking exhibited adequate power in detecting different aspects of misfit for graded IRT models when appropriate discrepancy measures were used. Pair-wise measures were found more powerful in detecting violations of the unidimensionality and local independence assumptions.
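A minimal sketch of the generic posterior predictive checking loop is given below, assuming user-supplied functions for simulating replicated data and computing a discrepancy measure. It does not reproduce the particular test-, item-, and pair-level measures studied in the article.

```python
import numpy as np

def posterior_predictive_pvalue(observed_data, posterior_draws, simulate_data, discrepancy):
    """Generic posterior predictive check: proportion of posterior draws whose
    replicated data are at least as discrepant as the observed data.

    posterior_draws: iterable of parameter draws from the fitted (e.g., graded) IRT model.
    simulate_data(draw): returns a replicated data set under the model at that draw.
    discrepancy(data, draw): scalar discrepancy measure (e.g., an item-pair odds ratio).
    """
    exceed, total = 0, 0
    for draw in posterior_draws:
        replicated = simulate_data(draw)
        if discrepancy(replicated, draw) >= discrepancy(observed_data, draw):
            exceed += 1
        total += 1
    return exceed / total  # PPP-values near 0 or 1 flag misfit
```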

6.
In the present paper we discuss the merits of a general multilevel modelling approach to the analysis of large-scale school effectiveness studies.

We briefly review previous research frameworks that have been used in school effectiveness research and examine critically the assumptions on which they were based and the methodological implications of these assumptions. The general interaction models presented in this paper provide an answer to many of these critiques. Findings from different types of modeling supported the claim that the modeling of school effectiveness studies must be both multilevel and interactive. Achievement is found to be dependent, in a very sensitive, non-additive way, on the particular combination of the pupil's home background, his or her general ability, teaching style and other teacher characteristics, and the class and school context in which the pupil learns. Comparisons of parsimonious main-effect models with parsimonious interactive ones show that although the explanatory power of interaction models might be slightly smaller, the use of interaction models causes changes in the specification of the models that cannot be ignored.

7.
A Monte Carlo simulation technique for generating dichotomous item scores is presented that implements (a) a psychometric model with different explicit assumptions than traditional parametric item response theory (IRT) models, and (b) item characteristic curves without restrictive assumptions concerning mathematical form. The four-parameter beta compound-binomial (4PBCB) strong true score model (with two-term approximation to the compound binomial) is used to estimate and generate the true score distribution. The nonparametric item-true score step functions are estimated by classical item difficulties conditional on proportion-correct total score. The technique performed very well in replicating inter-item correlations, item statistics (point-biserial correlation coefficients and item proportion-correct difficulties), first four moments of total score distribution, and coefficient alpha of three real data sets consisting of educational achievement test scores. The technique replicated real data (including subsamples of differing proficiency) as well as the three-parameter logistic (3PL) IRT model (and much better than the 1PL model) and is therefore a promising alternative simulation technique. This 4PBCB technique may be particularly useful as a more neutral simulation procedure for comparing methods that use different IRT models.
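The sketch below illustrates, in simplified form, the two-stage generation logic described above: draw a true proportion-correct score from a four-parameter beta distribution, then draw dichotomous item scores from item step functions conditional on that true score. It omits the two-term compound-binomial approximation and the estimation of the step functions from real data, and all parameter names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2024)

def sample_true_scores(n_persons, alpha, beta, lower, upper):
    """True proportion-correct scores from a four-parameter beta distribution."""
    return lower + (upper - lower) * rng.beta(alpha, beta, size=n_persons)

def generate_item_scores(true_scores, item_step_functions, score_grid):
    """Dichotomous item scores: each item has a step function giving
    P(correct) at each point of the true-score grid."""
    score_grid = np.asarray(score_grid, dtype=float)
    n_persons, n_items = len(true_scores), len(item_step_functions)
    data = np.zeros((n_persons, n_items), dtype=int)
    for p, tau in enumerate(true_scores):
        g = np.argmin(np.abs(score_grid - tau))        # nearest grid point to the true score
        for j, step in enumerate(item_step_functions):
            data[p, j] = rng.binomial(1, step[g])      # Bernoulli draw at that step value
    return data
```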

8.
In educational environments, monitoring persons' progress over time may help teachers to evaluate the effectiveness of their teaching procedures. Electronic learning environments are increasingly being used as part of formal education, and the resulting datasets can be used to understand and to improve the environment. This study presents longitudinal models based on item response theory (IRT) for measuring persons' ability within and between study sessions in data from web-based learning environments. Two empirical examples are used to illustrate the presented models. Results show that by incorporating time spent within and between study sessions into an IRT model, one is able to track changes in the ability of a population of persons, or of groups of persons, at any point in the learning process.

9.
Multilevel models allow the analysis of data which are hierarchical in nature; in particular, data which have been collected on pupils grouped into schools. Some of the associated variables may be measured at the pupil level, and others at the school level. The use of multilevel models produces estimates of variances between schools and pupils, as well as the effects of background variables in reducing or explaining these variances. One data set which has been analysed relates to the national surveys of mathematics carried out in England, Wales and Northern Ireland. In this case the basic unit of analysis was a pupil's performance in a group of items within one of 12 sub-categories of maths. Each pupil tackled two such item groups (or sub-tests) and thus a three-level model was required, with the levels representing sub-tests, pupils and schools. A number of background variables at both pupil and school levels were also measured, and interesting results were obtained when a multilevel model was fitted. The program used was a version of one developed by Professor H. Goldstein. A quite different data set related to pupils' responses to a questionnaire survey about their reactions to their current course of study. The dependent variable was a measure of pupils' satisfaction with the course derived from their responses, and other pupil level variables were also derived, relating to their school experiences and personal attributes. School level variables such as size and type of school were obtained from a schools data base. The program Hierarchical Linear Model (HLM) was used to model these data, using only two levels. The two multilevel programs used have different strengths and capabilities, but are related in terms of the kinds of models that can be fitted. Such models can lead to greater insights into the relationships between school and pupil level variables, and their influence on pupil results or attitudes.
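As a point of reference, a basic two-level (pupils-within-schools) variance-components model of the kind these analyses generalize can be written as follows; the notation is illustrative and not the article's exact specification.

```latex
% Two-level model: pupil i in school j, with pupil- and school-level covariates (illustrative)
\[
y_{ij} = \beta_0 + \beta_1 x_{ij} + \beta_2 z_{j} + u_{j} + e_{ij},
\qquad u_{j} \sim N(0,\sigma^{2}_{u}), \quad e_{ij} \sim N(0,\sigma^{2}_{e}),
\]
% where sigma^2_u and sigma^2_e are the between-school and between-pupil variances.
```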

10.
Several studies have indicated that bifactor models fit a broad range of psychometric data better than alternative multidimensional models such as second-order models (e.g., Carnivez, 2016; Gignac, 2016; Rodriguez, Reise, & Haviland, 2016). Murray and Johnson (2013) and Gignac (2016) argued that this phenomenon is partially due to unmodeled complexities (e.g., unmodeled cross-factor loadings) that induce a bias in standard statistical measures that favors bifactor models over second-order models. We extend the Murray and Johnson simulation studies to show how the ability to distinguish second-order and bifactor models diminishes as the amount of unmodeled complexity increases. By using theorems about rank constraints on the covariance matrix to find submodels of measurement models that have less unmodeled complexity, we are able to reduce the statistical bias in favor of bifactor models; this allows researchers to reliably distinguish between bifactor and second-order models.

11.
Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be compared. When applied to a binary data set, our experience suggests that IRT and FA models yield similar fits. However, when the data are polytomous ordinal, IRT models yield a better fit because they involve a higher number of parameters. But when fit is assessed using the root mean square error of approximation (RMSEA), similar fits are obtained again. We explain why. These test statistics have little power to distinguish between FA and IRT models; they are unable to detect that linear FA is misspecified when applied to ordinal data generated under an IRT model.
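For reference, the RMSEA mentioned above is conventionally computed from a chi-square fit statistic and its degrees of freedom, roughly as below; software packages differ in details such as using N versus N − 1 in the denominator.

```latex
% Conventional RMSEA from a chi-square statistic (illustrative)
\[
\mathrm{RMSEA} = \sqrt{\max\!\left(0,\; \frac{\chi^{2} - df}{df\,(N-1)}\right)}
\]
```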

12.
This study examines the use of cross-classified random effects models (CCrem) and cross-classified multiple membership random effects models (CCMMrem) to model rater bias and estimate teacher effectiveness. Effect estimates are compared using classical test theory (CTT) versus item response theory (IRT) scaling methods and three models (i.e., conventional multilevel model, CCrem, CCMMrem). Results indicate that ignoring rater bias can lead to teachers being misclassified within an evaluation system. The best estimates of teacher effectiveness are produced using CCrems regardless of scaling method. Use of CCMMrems to model rater bias cannot be recommended based on the results of this study; combining the use of CCMMrems with an IRT scaling method produced especially unstable results.

13.
Appropriate model specification is fundamental to unbiased parameter estimates and accurate model interpretations in structural equation modeling. Thus, detecting potential model misspecification has drawn the attention of many researchers. This simulation study evaluates the efficacy of the Bayesian approach (the posterior predictive checking, or PPC, procedure) under multilevel bifactor model misspecification (i.e., ignoring a specific factor at the within level). The impact of model misspecification on structural coefficients was also examined in terms of bias and power. Results showed that the PPC procedure performed better in detecting multilevel bifactor model misspecification when the misspecification became more severe and the sample size was larger. Structural coefficients were increasingly negatively biased at the within level as model misspecification became more severe. Model misspecification at the within level affected the between-level structural coefficient estimates more when data dependency was lower and the number of clusters was smaller. Implications for researchers are discussed.

14.
Given the relationships of item response theory (IRT) models to confirmatory factor analysis (CFA) models, IRT model misspecifications might be detectable through model fit indexes commonly used in categorical CFA. The purpose of this study is to investigate the sensitivity of the root mean square error of approximation, comparative fit index, and Tucker–Lewis Index model fit indexes, based on mean- and variance-adjusted weighted least squares (WLSMV) estimation, to IRT models that are misspecified due to local dependence (LD). It was found that WLSMV-based fit indexes have some functional relationships to parameter estimate bias in 2-parameter logistic models caused by LD. Continued exploration into these functional relationships and development of LD-detection methods based on such relationships could hold much promise for providing IRT practitioners with global information on violations of local independence.

15.
A polytomous item is one for which the responses are scored according to three or more categories. Given the increasing use of polytomous items in assessment practices, item response theory (IRT) models specialized for polytomous items are becoming increasingly common. The purpose of this ITEMS module is to provide an accessible overview of polytomous IRT models. The module presents commonly encountered polytomous IRT models, describes their properties, and contrasts their defining principles and assumptions. After completing this module, the reader should have a sound understanding of what a polytomous IRT model is, the manner in which the equations of the models are generated from the model's underlying step functions, how widely used polytomous IRT models differ with respect to their definitional properties, and how to interpret the parameters of polytomous IRT models.
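As one concrete example of the construction the module describes, Samejima's graded response model builds category response functions as differences of cumulative boundary (step) functions; the notation below is illustrative.

```latex
% Graded response model: boundary curves and category probabilities (illustrative)
\[
P^{*}_{jk}(\theta) = \frac{1}{1+\exp\!\left[-a_j(\theta-b_{jk})\right]}, \qquad
P_{jk}(\theta) = P^{*}_{jk}(\theta) - P^{*}_{j,k+1}(\theta),
\]
% with categories k = 0, ..., m_j, P*_{j0}(theta) = 1 and P*_{j,m_j+1}(theta) = 0.
```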

16.
Mokken scale analysis (MSA) is a probabilistic-nonparametric approach to item response theory (IRT) that can be used to evaluate fundamental measurement properties with less strict assumptions than parametric IRT models. This instructional module provides an introduction to MSA as a probabilistic-nonparametric framework in which to explore measurement quality, with an emphasis on its application in the context of educational assessment. The module describes both dichotomous and polytomous formulations of the MSA model. Examples of the application of MSA to educational assessment are provided using data from a multiple-choice physical science assessment and a rater-mediated writing assessment.
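Central to MSA is Loevinger's scalability coefficient, which compares an observed item-pair covariance with the maximum attainable given the item marginals; the item-pair version is sketched below. Item-level and scale-level coefficients aggregate over pairs, and H of at least 0.3 is the conventional minimum for a Mokken scale.

```latex
% Item-pair scalability coefficient (illustrative)
\[
H_{ij} = \frac{\mathrm{Cov}(X_i, X_j)}{\mathrm{Cov}_{\max}(X_i, X_j)}
\]
```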

17.
A practical concern for many existing tests is that subscore test lengths are too short to provide reliable and meaningful measurement. A possible method of improving the subscale reliability and validity would be to make use of collateral information provided by items from other subscales of the same test. To this end, the purpose of this article is to compare two different formulations of an alternative item response theory (IRT) model developed to parameterize unidimensional projections of multidimensional test items: analytical and empirical formulations. Two real data applications are provided to illustrate how the projection IRT model can be used in practice, as well as to further examine how ability estimates from the projection IRT model compare to external examinee measures. The results suggest that collateral information extracted by a projection IRT model can be used to improve the reliability and validity of subscale scores, which in turn can be used to provide diagnostic information about the strengths and weaknesses of examinees, helping stakeholders link instruction or curriculum to assessment results.

18.
In this digital ITEMS module, Dr. Brian Leventhal and Dr. Allison Ames provide an overview of Monte Carlo simulation studies (MCSS) in item response theory (IRT). MCSS are utilized for a variety of reasons, one of the most compelling being that they can be used when analytic solutions are impractical or nonexistent because they allow researchers to specify and manipulate an array of parameter values and experimental conditions (e.g., sample size, test length, and test characteristics). Dr. Leventhal and Dr. Ames review the conceptual foundation of MCSS in IRT and walk through the processes of simulating total scores as well as item responses using the two-parameter logistic, graded response, and bifactor models. They provide guidance for how to implement MCSS using other item response models and best practices for efficient syntax and executing an MCSS. The digital module contains sample SAS code, diagnostic quiz questions, activities, curated resources, and a glossary.
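The module's worked examples use SAS; as a language-neutral illustration of the same idea, the following is a minimal sketch of simulating dichotomous responses under a two-parameter logistic model. All distributional choices for the generating parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_2pl(n_persons=1000, n_items=20):
    """Simulate a persons-by-items matrix of dichotomous responses under the 2PL model."""
    theta = rng.normal(0.0, 1.0, size=n_persons)            # person abilities
    a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)    # item discriminations
    b = rng.normal(0.0, 1.0, size=n_items)                  # item difficulties
    logits = a * (theta[:, None] - b)                        # persons x items logits
    prob = 1.0 / (1.0 + np.exp(-logits))                     # response probabilities
    responses = (rng.uniform(size=prob.shape) < prob).astype(int)
    return responses, theta, a, b

responses, theta, a, b = simulate_2pl()
```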

19.
This article presents findings of an attempt to test Creemers' model of educational effectiveness by using data derived from an evaluation study in Mathematics in which 30 schools, 56 classes and 1,051 pupils in the last year of primary school in Cyprus participated. More specifically, we examine whether the pupil, classroom and school variables show the expected effects on pupils' achievement in Mathematics. Research data concerned with pupils' achievement in Mathematics were collected by using two different forms of assessment (external assessment and teacher's assessment). Questionnaires were administered to pupils and teachers in order to collect data about most of the variables included in Creemers' model. The findings support the main assumptions of the model. The influences on pupil achievement are multilevel, and the net effect of classrooms was larger than that of schools. Implications for the development of research on school effectiveness are drawn.

20.
Functional form misfit is frequently a concern in item response theory (IRT), although the practical implications of misfit are often difficult to evaluate. In this article, we illustrate how seemingly negligible amounts of functional form misfit, when systematic, can be associated with significant distortions of the score metric in vertical scaling contexts. Our analysis uses two- and three-parameter versions of Samejima's logistic positive exponent model (LPE) as data-generating models. Consistent with prior work, we find LPEs generally provide a better comparative fit to real item response data than traditional IRT models (2PL, 3PL). Further, our simulation results illustrate how 2PL- or 3PL-based vertical scaling in the presence of LPE-induced misspecification leads to an artificial growth deceleration across grades, consistent with that commonly seen in vertical scaling studies. The results raise further concerns about the use of standard IRT models in measuring growth, even apart from the frequently cited concerns of construct shift/multidimensionality across grades.
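For reference, the two-parameter version of Samejima's logistic positive exponent model augments the 2PL with an acceleration (exponent) parameter roughly as follows; the notation is illustrative and may differ from the article's.

```latex
% Two-parameter logistic positive exponent (LPE) item response function (illustrative)
\[
P_j(\theta) = \left[\frac{1}{1+\exp\!\left\{-a_j(\theta-b_j)\right\}}\right]^{\xi_j},
\qquad \xi_j > 0,
\]
% xi_j = 1 reduces to the 2PL; xi_j != 1 makes the item characteristic curve asymmetric.
```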
