Similar Documents

20 similar documents found.
1.
This study investigated the Type I error rate and power of four copying indices, K-index (Holland, 1996), Scrutiny! (Assessment Systems Corporation, 1993), g2 (Frary, Tideman, & Watts, 1977), and ω (Wollack, 1997) using real test data from 20,000 examinees over a 2-year period. The data were divided into three different test lengths (20, 40, and 80 items) and nine different sample sizes (ranging from 50 to 20,000). Four different amounts of answer copying were simulated (10%, 20%, 30%, and 40% of the items) within each condition. The ω index demonstrated the best Type I error control and power in all conditions and at all α levels. Scrutiny! and the K-index were uniformly conservative, and both had poor power to detect true copiers at the small α levels typically used in answer copying detection, whereas g2 was generally too liberal, particularly at small α levels. Some comments on the proper uses of copying indices are provided.
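
As a sketch of how such Type I error studies work: an index is applied to many pairs of independently responding simulees, and the flag rate at a given α is the empirical Type I error. The function below is a naive binomial match test standing in for K, g2, or ω, whose actual definitions differ; all names are illustrative.

    # Estimate a copying index's empirical Type I error by applying it to
    # pairs of simulees who respond independently (so any flag is false).
    # `copy_index_pvalue` is a hypothetical stand-in, NOT the K, g2, or
    # omega formula; it is a naive binomial test on matching answers.
    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(0)

    def copy_index_pvalue(copier, source, n_options=4):
        matches = int(np.sum(copier == source))
        return binom.sf(matches - 1, len(copier), 1 / n_options)  # P(M >= matches)

    n_items, n_pairs, alpha = 40, 5000, 0.01
    flags = sum(
        copy_index_pvalue(rng.integers(0, 4, n_items),
                          rng.integers(0, 4, n_items)) <= alpha
        for _ in range(n_pairs)
    )
    print("empirical Type I error:", flags / n_pairs)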

2.
Two new indices, S1 and S2, were proposed to detect answer copying on a multiple-choice test. The S1 index is similar to the K index (Holland, 1996) and the K2 index (Sotaridona & Meijer, 2002), but the distribution of the number of matching incorrect answers between the source and the copier is modeled by the Poisson distribution instead of the binomial distribution to improve the detection rates of K and K2. The S2 index was proposed to overcome a limitation of the K and K2 indices, namely their insensitivity to copying of correct answers: S2 incorporates the matching correct answers in addition to the matching incorrect answers. A simulation study was conducted to investigate the usefulness of S1 and S2 for 40- and 80-item tests, sample sizes of 100 and 500, and 10%, 20%, 30%, and 40% answer copying. The Type I error rates and detection rates of S1 and S2 were compared with those of K2 and the ω copying index (Wollack, 1997). Results showed that all four indices were able to maintain their Type I error rates, with S1 and K2 being slightly conservative compared to S2 and ω. Furthermore, S1 had higher detection rates than K2, and S2 showed a significant improvement in detection rate compared to K and K2.
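
The core S1 idea, schematically: if λ is the expected number of matching incorrect answers for a (copier, source) pair (the paper estimates it with a loglinear model; here it is simply an input), the p-value is the upper Poisson tail. A minimal sketch under that assumption:

    # Poisson-tail p-value in the spirit of S1: lam is the expected number
    # of matching incorrect answers (assumed estimated elsewhere, e.g., by
    # the paper's loglinear regression); m_obs is the observed matches.
    from scipy.stats import poisson

    def s1_style_pvalue(m_obs, lam):
        return poisson.sf(m_obs - 1, lam)  # P(M >= m_obs)

    print(s1_style_pvalue(m_obs=9, lam=3.2))  # a small p suggests copying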

3.
The K-Index Method for Detecting Answer Copying
陆敏 《考试研究》2009,(1):57-69
At present, in addition to proctoring and other examination-administration measures, test-security bodies also screen examinees' responses after the test for unusual agreement in order to determine whether copying occurred. Several statistical methods exist for identifying copying; the K index is one of the methods used by ETS in the United States for multiple-choice tests. This paper introduces the principle and procedure of the K index, discusses results obtained from applying it, and argues that the K index gives a conservative estimate of the probability that examinees' answers agree by chance.
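
Schematically, the K index is an upper binomial tail on the number of matching incorrect answers; the operational ETS version estimates the match probability from examinees with similar number-wrong scores. In rough form (a sketch, not the exact operational formula):

    K = P(M \ge m) = \sum_{j=m}^{w_c} \binom{w_c}{j} \hat{p}^{\,j} (1 - \hat{p})^{\,w_c - j}

where w_c is the copier's number of incorrect answers, m the observed number of incorrect answers matching the source, and p̂ the estimated chance-match probability. A small K is evidence against chance agreement, which is why a conservative p̂ makes the index conservative overall.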

4.
This article used the Wald test to evaluate the item-level fit of a saturated cognitive diagnosis model (CDM) relative to the fits of the reduced models it subsumes. A simulation study was carried out to examine the Type I error and power of the Wald test in the context of the G-DINA model. Results show that when the sample size is small and a larger number of attributes are required, the Type I error rate of the Wald test for the DINA and DINO models can be higher than the nominal significance levels, while the Type I error rate of the A-CDM is closer to the nominal significance levels. However, with larger sample sizes, the Type I error rates for the three models are closer to the nominal significance levels. In addition, the Wald test has excellent statistical power to detect when the true underlying model is none of the reduced models examined, even for relatively small sample sizes. The performance of the Wald test was also examined with real data. With an increasing number of CDMs from which to choose, this article provides an important contribution toward advancing the use of CDMs in practical educational settings.
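
In generic form (the G-DINA specifics aside), the Wald statistic tests whether a constraint matrix R reduces the saturated item parameters β̂ to a reduced model:

    W = (\mathbf{R}\hat{\boldsymbol{\beta}})^{\top}
        \left(\mathbf{R}\,\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}})\,\mathbf{R}^{\top}\right)^{-1}
        (\mathbf{R}\hat{\boldsymbol{\beta}}) \;\sim\; \chi^2_{\mathrm{rank}(\mathbf{R})}

Rejecting H0: Rβ = 0 indicates that the reduced model (e.g., DINA, DINO, or A-CDM) fits the item worse than the saturated G-DINA model.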

5.
6.
When the assumption of multivariate normality is violated and the sample sizes are relatively small, existing test statistics such as the likelihood ratio statistic and Satorra–Bentler's rescaled and adjusted statistics often fail to provide reliable assessment of overall model fit. This article proposes four new corrected statistics, aiming for better model evaluation with nonnormally distributed data at small sample sizes. A Monte Carlo study is conducted to compare the performances of the four corrected statistics against those of existing statistics with regard to Type I error rate. Results show that the performances of the four new statistics are relatively stable compared with those of existing statistics. In particular, the Type I error rates of one of the new statistics are close to the nominal level across all sample sizes under a condition of asymptotic robustness. The other new statistics also exhibit improved Type I error control, especially with nonnormally distributed data at small sample sizes.
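
As context for such corrections (a standard form, not the article's four new statistics): Satorra-Bentler rescaling divides the ML statistic by a scaling factor reflecting multivariate kurtosis,

    T_{\mathrm{SB}} = T_{\mathrm{ML}} / \hat{c},

where ĉ is estimated from fourth-order moments of the data; the adjusted version additionally corrects the degrees of freedom. The article's new statistics pursue the same goal with better small-sample behavior.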

7.
The purpose of this study was to examine the performance of differential item functioning (DIF) assessment in the presence of a multilevel structure that often underlies data from large-scale testing programs. Analyses were conducted using logistic regression (LR), a popular, flexible, and effective tool for DIF detection. Data were simulated using a hierarchical framework, such as might be seen when examinees are clustered in schools, for example. Both standard and hierarchical LR (accounting for multilevel data) approaches to DIF detection were employed. Results highlight how DIF detection rates differ depending on whether the analytic strategy matches the data structure. Specifically, when the grouping variable was within clusters, LR and HLR performed similarly in terms of Type I error control and power. However, when the grouping variable was between clusters, LR failed to maintain the nominal Type I error rate of .05, whereas HLR was able to maintain it. Power for HLR nevertheless tended to be low under many conditions in the between-cluster case.
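
A minimal sketch of the standard (single-level) LR DIF screen for one item, with simulated placeholder data: fit a matching-score-only model and a model adding group and group-by-score terms, then test the two added terms jointly.

    # Standard logistic-regression DIF screen for one item: compare a
    # matching-score-only model with one adding group and group-by-score
    # terms. Data below are simulated placeholders with uniform DIF.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(1)
    n = 1000
    score = rng.normal(size=n)                    # matching variable
    group = rng.integers(0, 2, n)                 # focal vs. reference
    logit = -0.2 + 1.1 * score + 0.4 * group      # uniform DIF built in
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X0 = sm.add_constant(np.column_stack([score]))
    X1 = sm.add_constant(np.column_stack([score, group, score * group]))
    m0 = sm.Logit(y, X0).fit(disp=0)
    m1 = sm.Logit(y, X1).fit(disp=0)

    lr = 2 * (m1.llf - m0.llf)   # joint test of uniform + nonuniform DIF
    print("LR =", lr, "p =", chi2.sf(lr, df=2))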

8.
SIBTEST is a differential item functioning (DIF) detection method that is accurate and effective with small samples, in the presence of group mean differences, and for assessing both uniform and nonuniform DIF. DIF detection with multilevel data has received increased attention, because ignoring such structure can inflate Type I error. This simulation study examines the performance of newly developed multilevel adaptations of SIBTEST. Data were simulated in a multilevel framework, and both uniform and nonuniform DIF were assessed. Results demonstrated that naïve SIBTEST and Crossing SIBTEST, which ignore the multilevel data structure, yielded inflated Type I error rates, while certain multilevel extensions provided better error and accuracy control.

9.
The standardized log-likelihood of a response vector (lz) is a popular IRT-based person-fit statistic for identifying model-misfitting response patterns. Traditional use of lz is overly conservative in detecting aberrance because of an incorrect assumption about its theoretical null distribution. This study proposes a method for improving the accuracy of person-fit analysis with lz that takes test unreliability into account when estimating ability and constructs the null distribution of each lz through resampling. The Type I error and power (detection rate) of the proposed method were examined at different test lengths, ability levels, and nominal α levels alongside other methods, and power to detect three types of aberrance (cheating, lack of motivation, and speeding) was considered. Results indicate that the proposed method is a viable and promising approach: it has Type I error rates close to the nominal value for most ability levels and reasonably good power.
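
For reference, lz under a 2PL model, evaluated at a given θ (in practice an estimate, which is exactly what the proposed correction addresses); a minimal sketch:

    # lz = (l0 - E[l0]) / sqrt(Var[l0]) under a 2PL model, with u the 0/1
    # response vector and a, b the item discriminations and difficulties.
    import numpy as np

    def lz(u, a, b, theta):
        p = 1 / (1 + np.exp(-a * (theta - b)))
        q = 1 - p
        l0 = np.sum(u * np.log(p) + (1 - u) * np.log(q))
        e = np.sum(p * np.log(p) + q * np.log(q))
        v = np.sum(p * q * np.log(p / q) ** 2)
        return (l0 - e) / np.sqrt(v)

    u = np.array([1, 1, 0, 1, 0, 1, 1, 0])
    a = np.full(8, 1.2); b = np.linspace(-1.5, 1.5, 8)
    print(lz(u, a, b, theta=0.3))  # large negative values suggest misfit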

10.
In this study, we investigate the logistic regression (LR), Mantel-Haenszel (MH), and Breslow-Day (BD) procedures for the simultaneous detection of both uniform and nonuniform differential item functioning (DIF). A simulation study was used to assess the Type I error rate and power of a combined decision rule (CDR), which flags DIF by combining the decisions of BD and MH, and to compare them with those of LR. The results revealed that while the Type I error rate of CDR was consistently below the nominal alpha level, the Type I error rate of LR was high in conditions with unequal ability distributions. In addition, the power of CDR was consistently higher than that of LR across all forms of DIF.
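
A sketch of the CDR logic using statsmodels' stratified 2x2 machinery (the tables are illustrative placeholders; the article's exact implementation may differ): flag an item if either the Mantel-Haenszel test of the common odds ratio (uniform DIF) or the Breslow-Day homogeneity test (nonuniform DIF) rejects.

    # Combined decision rule: flag DIF if MH or BD rejects. Each 2x2
    # table is one matched-score stratum (rows = group, cols = correct/
    # incorrect); the counts here are made up for illustration.
    import numpy as np
    from statsmodels.stats.contingency_tables import StratifiedTable

    tables = [np.array([[30, 10], [22, 18]]),
              np.array([[25, 15], [15, 25]]),
              np.array([[18, 22], [10, 30]])]
    st = StratifiedTable(tables)

    mh = st.test_null_odds(correction=True)   # Cochran-Mantel-Haenszel
    bd = st.test_equal_odds(adjust=False)     # Breslow-Day homogeneity

    alpha = 0.05
    flag = (mh.pvalue < alpha) or (bd.pvalue < alpha)
    print("MH p =", mh.pvalue, "BD p =", bd.pvalue, "flag DIF:", flag)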

11.
孔祥 《考试研究》2013,(6):54-58,53
By optimizing the 4×4 matrix algorithm for ordinal scales in the Kappa cheating-detection method, this paper studies how changes in matrix dimensionality affect the Kappa method's detection rate and Type I error rate. It proposes a KX coefficient for comparing detection performance, as well as a Kappa-X cheating-detection method based on matrix-dimension selection.
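
For reference, Cohen's kappa computed from a k x k agreement matrix between two examinees' option choices (here k = 4, matching the 4×4 ordinal-scale matrix mentioned above); the KX coefficient and the matrix-dimension selection rule are the paper's own extensions and are not reproduced here:

    # Cohen's kappa from a k x k agreement matrix m, where m[i, j] counts
    # items on which examinee A chose option i and examinee B chose j.
    import numpy as np

    def cohen_kappa(m):
        m = np.asarray(m, dtype=float)
        n = m.sum()
        po = np.trace(m) / n                  # observed agreement
        pe = (m.sum(0) @ m.sum(1)) / n ** 2   # chance agreement
        return (po - pe) / (1 - pe)

    m = np.array([[12, 2, 1, 0],
                  [3, 9, 2, 1],
                  [1, 2, 8, 3],
                  [0, 1, 2, 13]])
    print(cohen_kappa(m))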

12.
The purpose of this study was to investigate the power and Type I error rate of the likelihood ratio goodness-of-fit (LR) statistic in detecting differential item functioning (DIF) under Samejima's (1969, 1972) graded response model. A multiple-replication Monte Carlo study was utilized in which DIF was modeled in simulated data sets that were then calibrated with MULTILOG (Thissen, 1991) using hierarchically nested item response models. In addition, the power and Type I error rate of the Mantel (1963) approach for detecting DIF in ordered response categories were investigated using the same simulated data, for comparative purposes. The power of both the Mantel and LR procedures was affected by sample size, as expected. The LR procedure lacked the power to consistently detect DIF when it existed in reference/focal groups with sample sizes as small as 500/500. The Mantel procedure maintained control of its Type I error rate and was more powerful than the LR procedure when the comparison group ability distributions were identical and there was a constant DIF pattern. On the other hand, the Mantel procedure lost control of its Type I error rate, whereas the LR procedure did not, when the comparison groups differed in mean ability; and the LR procedure demonstrated a profound power advantage over the Mantel procedure under conditions of balanced DIF in which the comparison group ability distributions were identical. The choice and subsequent use of any procedure require a thorough understanding of its power and Type I error rates under varying conditions of DIF pattern, comparison group ability distributions (or, as a surrogate, observed score distributions), and item characteristics.

13.
When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate (α). Despite the fact that many significance tests are often conducted in SEM, rarely is multiplicity control applied. Cribbie (2000, 2007) demonstrated that without some form of adjustment, the familywise Type I error rate can become severely inflated. Cribbie also confirmed that the popular Bonferroni method was overly conservative due to the correlations among the parameters in the model. The purpose of this study was to compare the Type I error rates and per-parameter power of traditional multiplicity strategies with those of adjusted Bonferroni procedures that incorporate not only the number of tests in a family, but also the degree of correlation between parameters. The adjusted Bonferroni procedures were found to produce per-parameter power rates higher than the original Bonferroni procedure without inflating the familywise error rate.
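
One common way to fold parameter correlations into a Bonferroni-type bound is an "effective number of tests" computed from the eigenvalues of the parameter-estimate correlation matrix (a Nyholt/Cheverud-style adjustment, offered as an illustration; not necessarily the article's exact procedure):

    # Correlation-adjusted Bonferroni threshold via an effective number
    # of tests: highly correlated parameters shrink m_eff below m, giving
    # a less conservative per-test alpha. The matrix is a placeholder.
    import numpy as np

    def effective_tests(corr):
        eig = np.linalg.eigvalsh(corr)
        m = len(eig)
        return 1 + (m - 1) * (1 - np.var(eig, ddof=1) / m)

    corr = np.array([[1.0, 0.6, 0.5],
                     [0.6, 1.0, 0.4],
                     [0.5, 0.4, 1.0]])
    m_eff = effective_tests(corr)
    print("m_eff =", m_eff, "per-test alpha =", 0.05 / m_eff)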

14.
《教育实用测度》2013,26(4):265-288
Many of the currently available statistical indices for detecting answer copying lack sufficient power at small α levels or when the amount of copying is relatively small. Furthermore, no single index is uniformly best: depending on the type or amount of copying, certain indices outperform others. The purpose of this article was to explore the utility of simultaneously using multiple copying indices to detect different types and amounts of answer copying. This study compared eight copying indices: S1 and S2 (Sotaridona & Meijer, 2003), K2 (Sotaridona & Meijer, 2002), ω (Wollack, 1997), B and H (Angoff, 1974), and the new indices Runs and MaxStrings, plus all possible pairs and triplets of the eight indices, using multiple comparison procedures (Dunn, 1961) to adjust the critical α level for each index in a pair or triplet. Empirical Type I error rates and power of all indices, pairs, and triplets were examined in a real-data simulation (i.e., actual examinee responses to items, rather than generated item response vectors, were changed to match the actual responses of randomly selected source examinees) for 2 test lengths, 9 sample sizes, 3 types of copying, 4 α levels, and 4 percentages of items copied. The study found that using both ω and H* (i.e., H with empirically derived critical values) can help improve power in the most realistic types of copying situations (strings and mixed copying). The ω-H* paired index improved power most for small percentages of items copied and small amounts of copying, two conditions under which copying indices tend to be underpowered.
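
The Dunn (Bonferroni) adjustment used for the pairs and triplets is simple: each index in a set of size s is tested at α/s, so the familywise rate of the combined flag is bounded by the union bound,

    P(\text{false flag}) \le \sum_{i=1}^{s} \frac{\alpha}{s} = \alpha .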

15.

Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power, and true model rates of familywise and false discovery rate controlling procedures were compared with the rates obtained when no multiplicity control was imposed. The results indicate that Type I error rates become severely inflated with no multiplicity control, but also that familywise error controlling procedures were extremely conservative and had very little power for detecting true relations. False discovery rate controlling procedures provided a compromise between no multiplicity control and strict familywise error control, and with large sample sizes they provided a high probability of making correct inferences regarding all the parameters in the model.
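
For reference, the classic false discovery rate controlling procedure of this kind is the Benjamini-Hochberg step-up rule (a sketch; the specific FDR variants compared in the study are not detailed in this abstract):

    # Benjamini-Hochberg step-up: find the largest k with p_(k) <= q*k/m
    # and reject the k smallest p-values, controlling the FDR at level q.
    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        p = np.asarray(pvals)
        order = np.argsort(p)
        m = len(p)
        thresh = q * np.arange(1, m + 1) / m
        passed = p[order] <= thresh
        k = np.max(np.where(passed)[0]) + 1 if passed.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True
        return reject

    print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27, 0.6]))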

16.
Using a Monte Carlo simulation, the authors compared the Type I error rate and the power of analysis of covariance (ANCOVA) and randomized block (RB) designs to detect differences in slopes and additive treatment effects. For testing differences in slopes, 3 methods were compared: the test of slopes from ANCOVA, the omnibus Block × Treatment interaction, and the linear component of the Block × Treatment interaction of RB. In the test for adjusted means, 2 variations of both ANCOVA and RB were used. The power of the omnibus test of the interaction decreased dramatically as the number of blocks increased and was always considerably smaller than that of the specific test of differences in slopes in ANCOVA. Tests for means when there were concomitant differences in slopes showed that only ANCOVA uniformly controlled Type I error under all configurations of design variables. The most powerful option in almost all simulations, for tests of both slopes and means, was ANCOVA.
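
In its standard form, the ANCOVA model and the slope question look like this (a generic statement of the models compared, not the authors' notation):

    Y_{ij} = \mu + \tau_j + \beta\,(X_{ij} - \bar{X}) + \varepsilon_{ij},

with the test of differences in slopes obtained by allowing group-specific slopes β_j and testing H0: β_1 = ... = β_J via the group-by-covariate interaction.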

17.
This study examined the effect of model size on the chi-square test statistics obtained from ordinal factor analysis models. The performance of six robust chi-square test statistics was compared across various conditions, including the number of observed variables (p), number of factors, sample size, model (mis)specification, number of categories, and threshold distribution. Results showed that the unweighted least squares (ULS) robust chi-square statistics generally outperform the diagonally weighted least squares (DWLS) robust chi-square statistics, with the ULSM estimator performing best overall. However, when fitting ordinal factor analysis models with a large number of observed variables and a small sample size, the ULSM-based chi-square tests may yield empirical variances noticeably larger than the theoretical values and inflated Type I error rates. On the other hand, when the number of observed variables is very large, the mean- and variance-corrected chi-square test statistics (e.g., based on ULSMV and WLSMV) can produce empirical variances conspicuously smaller than the theoretical values, Type I error rates below the nominal level, and lower power to reject misspecified models. Recommendations for applied researchers and future empirical studies involving large models are provided.

18.
This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error of approximation (RMSEA), comparative fit index (CFI), and Tucker–Lewis Index (TLI) to reject misspecified models with varying degrees of misspecification. With a sample size of 20, RMSEA, CFI, and TLI have high Type I and Type II error rates, whereas the LRT has a high Type II error rate. With a sample size of 100, these indexes generally perform satisfactorily, but CFI and TLI are affected by a confounding effect of their baseline model. Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC) have high success rates in identifying the true model when the sample size is 100. A comparison with the mixed model approach indicates that separately modeling the means and covariance structures in structural equation modeling dramatically improves the success rates of AIC and BIC.
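
For reference, the information criteria that performed well here are the standard ones (with L the maximized likelihood, k the number of free parameters, and N the sample size; smaller is better):

    \mathrm{AIC} = -2\ln L + 2k, \qquad \mathrm{BIC} = -2\ln L + k\ln N .

Under the first-order autoregressive structure, for example, the model-implied covariances take the form Cov(y_t, y_{t+h}) = σ² ρ^{|h|}.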

19.
When factorial invariance is violated, a possible first step in locating the source of the violation(s) is to pursue partial factorial invariance (PFI). Two commonly used methods for PFI are the sequential use of modification indices (backward MI method) and the factor-ratio test. In this study, we propose a simple forward method using confidence intervals (forward CI method) and compare the performance of the three methods under various simulated PFI conditions. Results indicate that the forward CI method using 99% CIs has the highest perfect recovery rates and the lowest Type I error rates. The backward MI method with the more conservative criterion (MI = 6.635) performed competitively. The factor-ratio test consistently delivered the poorest performance, regardless of the chosen confidence level. The work's contribution, implications, and limitations are also discussed.

20.
Confirmatory factor analytic procedures are routinely implemented to provide evidence of measurement invariance. Current lines of research focus on the accuracy of the analytic steps commonly used in confirmatory factor analysis for invariance testing. However, the few studies that have examined this procedure have done so with perfectly or near-perfectly fitting models. In the present study, the authors examined procedures for detecting simulated test structure differences across groups under model misspecification conditions. In particular, they manipulated sample size, number of factors, number of indicators per factor, percentage of parameters lacking invariance, and model misspecification, with misspecification introduced at the factor loading level. They evaluated three criteria for detecting a lack of invariance: the chi-square difference test, the difference in comparative fit index values, and the combination of the two. Results indicate that misspecification was associated with elevated Type I error rates in measurement invariance testing.
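
The two detection criteria are conventionally computed as follows (the −.01 ΔCFI cutoff is the commonly cited one, e.g., Cheung & Rensvold, 2002; the study's exact cutoffs are not stated in this abstract):

    \Delta\chi^2 = \chi^2_{\text{constrained}} - \chi^2_{\text{unconstrained}},
    \qquad \Delta df = df_{\text{c}} - df_{\text{u}},

with Δχ² referred to a chi-square distribution on Δdf degrees of freedom, and a lack of invariance flagged when ΔCFI = CFI_constrained − CFI_unconstrained ≤ −.01.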
