Similar Literature (20 records found)
1.
This simulation study examines the efficacy of multilevel factor mixture modeling (ML FMM) for measurement invariance testing across unobserved groups when the groups are at the between level of multilevel data. To this end, latent classes are generated with class-specific item parameters (i.e., factor loadings and intercepts) across the between-level classes. The efficacy of ML FMM is evaluated in terms of class enumeration, class assignment, and the detection of noninvariance. Various classification criteria, such as Akaike’s information criterion, the Bayesian information criterion, and the bootstrap likelihood ratio test, are examined for the correct enumeration of between-level latent classes. For the detection of measurement noninvariance, free and constrained baseline approaches are compared with respect to true positive and false positive rates. The study provides evidence for the adequacy of ML FMM; however, its performance depends heavily on simulation factors such as the classification criterion, sample size, and the magnitude of noninvariance. Practical guidelines for applied researchers are provided.

2.
We present a multigroup multilevel confirmatory factor analysis (CFA) model and a procedure for testing multilevel factorial invariance in n-level structural equation modeling (nSEM). Multigroup multilevel CFA introduces a complication when group membership at the lower level intersects the clustered structure, because observations in different groups but in the same cluster are not independent of one another. nSEM provides a framework in which the multigroup multilevel data structure is represented with the dependency between groups at the lower level properly taken into account. The procedure for testing multilevel factorial invariance is illustrated with an empirical example using the R package xxm2.

3.
Confirmatory factor analytic procedures are routinely implemented to provide evidence of measurement invariance. Current lines of research focus on the accuracy of the analytic steps commonly used in confirmatory factor analysis for invariance testing. However, the few studies that have examined this procedure have done so with perfectly or near perfectly fitting models. In the present study, the authors examined procedures for detecting simulated test structure differences across groups under model misspecification conditions. In particular, they manipulated sample size, number of factors, number of indicators per factor, percentage of noninvariant parameters, and model misspecification. Model misspecification was introduced at the factor loading level. They evaluated three criteria for detecting a lack of invariance: the chi-square difference test, the difference in comparative fit index values, and the combination of the two. Results indicate that misspecification was associated with elevated Type I error rates in measurement invariance testing.
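As an illustration of the comparison criteria examined here, the following sketch fits a configural and a metric (equal loadings) model and computes both the chi-square difference test and the change in CFI. It uses the R package lavaan and its built-in HolzingerSwineford1939 data purely as stand-ins; the study itself used simulated data, so the model, grouping variable, and cutoff are illustrative assumptions.

library(lavaan)

# Two-group CFA of the built-in Holzinger & Swineford data (illustrative only)
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'

# Configural model: same factor structure, all parameters free in each group
fit_config <- cfa(model, data = HolzingerSwineford1939, group = "school")

# Metric model: factor loadings constrained equal across groups
fit_metric <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  group.equal = "loadings")

# Criterion 1: chi-square difference (likelihood ratio) test
lavTestLRT(fit_config, fit_metric)

# Criterion 2: change in CFI (a drop larger than about .01 is often taken
# to signal noninvariance)
fitMeasures(fit_config, "cfi") - fitMeasures(fit_metric, "cfi")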

4.
This article presents a new method for multiple-group confirmatory factor analysis (CFA), referred to as the alignment method. The alignment method can be used to estimate group-specific factor means and variances without requiring exact measurement invariance. A strength of the method is the ability to conveniently estimate models for many groups. The method is a valuable alternative to the currently used multiple-group CFA methods for studying measurement invariance that require multiple manual model adjustments guided by modification indexes. Multiple-group CFA is not practical with many groups due to poor model fit of the scalar model and too many large modification indexes. In contrast, the alignment method is based on the configural model and essentially automates and greatly simplifies measurement invariance analysis. The method also provides a detailed account of parameter invariance for every model parameter in every group.
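The alignment method itself is implemented in Mplus; purely as an illustration of the workflow, the sketch below extracts group-specific loadings and intercepts from a configural model fitted with lavaan and passes them to invariance.alignment() from the R package sirt. The data set, the one-factor model, and the use of sirt (rather than the Mplus implementation described here) are assumptions.

library(lavaan)
library(sirt)

# Configural one-factor model for two groups (illustrative data)
model <- 'f =~ x1 + x2 + x3 + x4'
fit_config <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  meanstructure = TRUE)

# Collect group-wise loadings (lambda) and intercepts (nu) as group-by-item matrices
est    <- lavInspect(fit_config, "est")
lambda <- t(sapply(est, function(g) g$lambda[, 1]))
nu     <- t(sapply(est, function(g) g$nu[, 1]))

# Alignment: estimate group factor means and variances while minimizing
# the total amount of measurement noninvariance
aligned <- invariance.alignment(lambda = lambda, nu = nu)
str(aligned)   # inspect aligned group means/variances and item parameters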

5.
Multigroup exploratory factor analysis (EFA) has gained popularity for addressing measurement invariance, for two reasons. First, repeatedly respecifying confirmatory factor analysis (CFA) models strongly capitalizes on chance, and using EFA as a precursor works better. Second, the fixed zero loadings of CFA are often too restrictive. In multigroup EFA, factor loading invariance is rejected if the fit decreases significantly when the loadings are fixed to be equal across groups. To locate the precise factor loading non-invariances by means of hypothesis testing, the factors’ rotational freedom needs to be resolved per group. In the literature, a solution exists for identifying optimal rotations for one group or for invariant loadings across groups. Building on this, we present multigroup factor rotation (MGFR) for identifying loading non-invariances. Specifically, MGFR rotates group-specific loadings both to simple structure and to between-group agreement, while disentangling loading differences from differences in the structural model (i.e., factor (co)variances).

6.
We illustrate testing measurement invariance in a second-order factor model using a quality of life dataset (n = 924). Measurement invariance was tested across 2 groups at a set of hierarchically structured levels: (a) configural invariance, (b) first-order factor loadings, (c) second-order factor loadings, (d) intercepts of measured variables, (e) intercepts of first-order factors, (f) disturbances of first-order factors, and (g) residual variances of observed variables. Given that measurement invariance at the factor loading and intercept levels was achieved, the latent mean difference on the higher-order factor between the groups was also estimated. The analyses were performed on the mean and covariance structures within the framework of confirmatory factor analysis using LISREL 8.51. Implications of second-order factor models and measurement invariance in psychological research are discussed.
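A minimal sketch of the first few invariance steps for a second-order model, using lavaan and its built-in HolzingerSwineford1939 data instead of the original LISREL 8.51 setup and quality-of-life data; the factor names and grouping variable are therefore assumptions. Finer steps, such as separating first-order from second-order loadings or constraining the intercepts of the first-order factors, require explicit parameter labels (or lavaan's "means" option) and are omitted here.

library(lavaan)

# Second-order model: a general factor underlying three first-order factors
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
  general =~ visual + textual + speed
'

# (a) Configural invariance: same structure, all parameters free per group
fit_config <- cfa(model, data = HolzingerSwineford1939, group = "school")

# (b) + (c) All factor loadings (first- and second-order) constrained equal
fit_loadings <- cfa(model, data = HolzingerSwineford1939, group = "school",
                    group.equal = "loadings")

# (d) Additionally constrain the intercepts of the measured variables
fit_intercepts <- cfa(model, data = HolzingerSwineford1939, group = "school",
                      group.equal = c("loadings", "intercepts"))

# Compare adjacent models in the hierarchy
lavTestLRT(fit_config, fit_loadings, fit_intercepts)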

7.
Several structural equation modeling (SEM) strategies have been developed for assessing measurement invariance (MI) across groups, relaxing the assumption of strict MI to partial, approximate, and partial approximate MI. Nonetheless, applied researchers still do not know whether, and under what conditions, these strategies provide results that allow for valid comparisons across groups in large-scale comparative surveys. We perform a comprehensive Monte Carlo simulation study to assess the conditions under which various SEM methods are appropriate for estimating latent means and path coefficients and their differences across groups. We find that while SEM path coefficients are relatively robust to violations of full MI and can be recovered rather effectively, recovering latent means and their group rankings can be difficult. Our results suggest that, contrary to some previous recommendations, partial invariance may recover both path coefficients and latent means rather effectively even when the majority of items are noninvariant. Although it is more difficult to recover latent means using approximate and partial approximate MI methods, it is possible under specific conditions and with appropriate models. These models also have the advantage of providing accurate standard errors. Alignment is recommended for recovering latent means in cases where there are only a few noninvariant parameters across groups.

8.
The purposes of this study were to (a) test the hypothesized factor structure of the Student-Teacher Relationship Scale (STRS; Pianta, 2001) for 308 African American (AA) and European American (EA) children using confirmatory factor analysis (CFA) and (b) examine the measurement invariance of the factor structure across AA and EA children. CFA of the hypothesized three-factor model with correlated latent factors did not yield an optimal model fit. Parameter estimates obtained from the CFA identified items with low factor loadings and R² values, suggesting that content revision is required for those items on the STRS. Deletion of two items from the scale yielded a good model fit, suggesting that the remaining 26 items reliably and validly measure the constructs for the whole sample. Tests for configural invariance, however, revealed that the underlying constructs may differ for the AA and EA groups. Subsequent exploratory factor analyses (EFAs) for AA and EA children were carried out to investigate the comparability of the measurement model of the STRS across the groups. The results of the EFAs provided evidence of differential factor models of the STRS across the AA and EA groups. This study has implications for construct validity research and for substantive research using the STRS, given that the STRS is extensively used in intervention and research in early childhood education.

9.
We present a test for cluster bias, which can be used to detect violations of measurement invariance across clusters in 2-level data. We show how measurement invariance assumptions across clusters imply measurement invariance across levels in a 2-level factor model. Cluster bias is investigated by testing whether the within-level factor loadings are equal to the between-level factor loadings, and whether the between-level residual variances are zero. The test is illustrated with an example from school research. In a simulation study, we show that the cluster bias test has sufficient power, and the proportions of false positives are close to the chosen levels of significance.
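The cluster bias test can be expressed as a comparison between an unrestricted two-level factor model and a model with cross-level equality constraints on the loadings and zero between-level residual variances. The sketch below uses lavaan's two-level syntax with its built-in Demo.twolevel data as a stand-in for the school data in the article; the indicator set and cluster variable are assumptions.

library(lavaan)

# Unrestricted two-level model: separate within- and between-level loadings,
# free between-level residual variances
model_free <- '
  level: 1
    fw =~ y1 + y2 + y3
  level: 2
    fb =~ y1 + y2 + y3
'

# No-cluster-bias model: loadings equal across levels (shared labels) and
# between-level residual variances fixed to zero
model_nobias <- '
  level: 1
    fw =~ l1*y1 + l2*y2 + l3*y3
  level: 2
    fb =~ l1*y1 + l2*y2 + l3*y3
    y1 ~~ 0*y1
    y2 ~~ 0*y2
    y3 ~~ 0*y3
'

fit_free   <- sem(model_free,   data = Demo.twolevel, cluster = "cluster")
fit_nobias <- sem(model_nobias, data = Demo.twolevel, cluster = "cluster")

# A significant deterioration in fit of the constrained model indicates cluster bias
lavTestLRT(fit_nobias, fit_free)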

10.

The aim of the study is to investigate the measurement invariance of mathematics self-concept and self-efficacy across 40 countries that participated in the Programme for International Student Assessment (PISA) 2003 and 2012 cycles. The sample consists of 271,760 students in PISA 2003 and 333,804 students in PISA 2012. First, traditional measurement invariance testing was applied in a multiple-group confirmatory factor analysis (MGCFA). Then, alignment analyses were performed, which estimate all of the parameters while keeping the total amount of non-invariance to a minimum. Results from the MGCFA indicate that mathematics self-concept and self-efficacy hold metric invariance across the 80 groups (cycle by country). The alignment results suggest that a large proportion of non-invariance exists in both the mathematics self-concept and self-efficacy factors, and that the factor means cannot be compared across all participating countries. Results of a Monte Carlo simulation show that the alignment results are trustworthy. Implications and limitations are discussed, and some recommendations for future research are proposed.

11.
Multigroup confirmatory factor analysis (MCFA) is a popular method for examining measurement invariance and, specifically, factor invariance. Recent research has begun to focus on using MCFA to detect invariance for test items. MCFA requires certain parameters (e.g., factor loadings) to be constrained for model identification; these parameters are assumed to be invariant across groups and act as referent variables. When this invariance assumption is violated, locating the parameters that actually differ across groups becomes difficult. The factor ratio test and the stepwise partitioning procedure, used in combination, have been suggested as methods to locate invariant referents, and they appear to perform favorably with real data examples. However, the procedures have not been evaluated through simulations in which the extent and magnitude of the lack of invariance are known. This simulation study examines these methods in terms of their accuracy (i.e., true positive and false positive rates) in identifying invariant referent variables.

12.
Measurement bias can be detected using structural equation modeling (SEM) by testing measurement invariance with multigroup factor analysis (Jöreskog, 1971; Meredith, 1993; Sörbom, 1974), MIMIC modeling (Muthén, 1989), or restricted factor analysis (Oort, 1992, 1998). In educational research, data often have a nested, multilevel structure, for example when data are collected from children in classrooms. Multilevel structures might complicate measurement bias research. In 2-level data, the potentially “biasing trait” or “violator” can be a Level 1 variable (e.g., pupil sex) or a Level 2 variable (e.g., teacher sex). One can also test measurement invariance with respect to the clustering variable (e.g., classroom). This article provides a stepwise approach for the detection of measurement bias with respect to these 3 types of violators. The approach works from Level 1 upward, so the final model accounts for all bias and substantive findings at both levels. The 5 proposed steps are illustrated with data on teacher–child relationships.
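As a minimal single-level illustration of the restricted factor analysis step for a Level 1 violator (here, pupil sex), the sketch below adds the violator to a lavaan model and tests a direct effect on one item. The data set, items, and the choice of lavaan are stand-ins; the article's full multilevel, stepwise procedure is not reproduced.

library(lavaan)

# Restricted factor analysis (RFA): the potential violator is added as an
# exogenous variable; a direct effect on an item, over and above its effect
# through the common factor, indicates uniform measurement bias.
model_rfa <- '
  visual =~ x1 + x2 + x3
  visual ~ sex     # trait difference with respect to the violator (allowed)
  x3 ~ sex         # direct effect: uniform bias in x3 with respect to sex
'
fit_rfa <- sem(model_rfa, data = HolzingerSwineford1939)

# Model without the direct effect; a significant difference flags bias in x3
model_nobias <- '
  visual =~ x1 + x2 + x3
  visual ~ sex
'
fit_nobias <- sem(model_nobias, data = HolzingerSwineford1939)
lavTestLRT(fit_nobias, fit_rfa)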

13.
Treating Likert rating scale data as continuous outcomes in confirmatory factor analysis violates the assumption of multivariate normality. Given certain requirements pertaining to the number of categories, skewness, size of the factor loadings, and so forth, it seems nevertheless possible to recover true parameter values if the data stem from a single homogeneous population. It is shown that, in a multigroup context, an analysis of Likert data under the assumption of multivariate normality may distort the factor structure differently across groups. In that case, investigations of measurement invariance (MI), which are necessary for meaningful group comparisons, are problematic. Analyzing subscale scores computed from Likert items does not seem to solve the problem.
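The contrast described here, treating Likert ratings as continuous versus as ordered categorical, can be sketched in lavaan as below. Because the study's simulated Likert data are not available, built-in continuous indicators are discretized into four categories purely for demonstration; the data, items, and cut points are assumptions.

library(lavaan)

# Discretize three continuous indicators into 4-point "Likert" items (demo only)
dat <- HolzingerSwineford1939
for (v in c("x1", "x2", "x3")) {
  dat[[paste0(v, "c")]] <- cut(dat[[v]], breaks = 4, labels = FALSE)
}

model <- 'visual =~ x1c + x2c + x3c'

# (a) Ratings treated as continuous: normal-theory ML estimation per group
fit_cont <- cfa(model, data = dat, group = "school")

# (b) Ratings treated as ordered categorical: thresholds and WLSMV estimation
fit_ord <- cfa(model, data = dat, group = "school",
               ordered = c("x1c", "x2c", "x3c"))

# Compare the group-specific loadings under the two treatments
summary(fit_cont, standardized = TRUE)
summary(fit_ord, standardized = TRUE)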

14.
Social-emotional health influences youth developmental trajectories, and there is growing interest among educators in measuring the social-emotional health of the students they serve. This study replicated the psychometric characteristics of the Social Emotional Health Survey (SEHS) with a diverse sample of high school students (Grades 9–12; N = 14,171) and determined whether the factor structure was invariant across sociocultural and gender groups. A confirmatory factor analysis (CFA) tested the fit of the previously reported factor structure, and structural equation modeling was then used to test invariance across sociocultural and gender groups through multigroup CFAs. Results supported the SEHS measurement model, with full invariance of the SEHS higher-order structure for all five sociocultural groups. There were no group differences of moderate or larger effect size on the overall index for sociocultural or gender groups, which lends support to the eventual development of common norms and universal interpretation guidelines.

15.
Confirmatory factor analytic tests of measurement invariance (MI) require a referent indicator (RI) for model identification. Although the assumption that the RI is perfectly invariant across groups is acknowledged as problematic, the literature provides relatively little guidance for researchers to identify the conditions under which the practice is appropriate. Using simulated data, this study examined the effects of RI selection on both scale- and item-level MI tests. Results indicated that while inappropriate RI selection has little effect on the accuracy of conclusions drawn from scale-level tests of metric invariance, poor RI choice can produce very misleading results for item-level tests. As a result, group comparisons under conditions of partial invariance are highly susceptible to problems associated with poor RI choice.
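In software, the referent indicator is usually chosen implicitly. The sketch below makes that choice explicit in lavaan, which by default fixes the first indicator's loading to 1 in every group; the data, items, and grouping variable are stand-ins for the simulated conditions of the study.

library(lavaan)

# Default identification: lavaan fixes the loading of the FIRST indicator (x1)
# to 1 in every group, implicitly treating x1 as the invariant referent.
model_ref_x1 <- 'visual =~ x1 + x2 + x3'

# Choosing a different referent (x2): free x1 and fix x2 to 1 explicitly
model_ref_x2 <- 'visual =~ NA*x1 + 1*x2 + x3'

fit_ref_x1 <- cfa(model_ref_x1, data = HolzingerSwineford1939,
                  group = "school", group.equal = "loadings")
fit_ref_x2 <- cfa(model_ref_x2, data = HolzingerSwineford1939,
                  group = "school", group.equal = "loadings")

# Overall (scale-level) metric invariance tests are typically unaffected,
# but item-level follow-up tests can change when the referent is noninvariant
fitMeasures(fit_ref_x1, c("chisq", "df", "cfi"))
fitMeasures(fit_ref_x2, c("chisq", "df", "cfi"))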

16.
In this research, the authors raised the issue that prior studies had failed to address the nested structure of data in examining the construct validity of an instrument measuring students' behavioral and emotional participation in academic activities in the classroom. To address this question, the authors illustrated the utility of the multilevel confirmatory factor analysis (MCFA) approach to reexamine the construct validity of this instrument. The sample consisted of 2,041 students in 5th grade from 67 classes in Hong Kong. First, the results justified the requirement of MCFA and indicated that the 4-factor model tested with MCFA provided better fit to the data than that tested with a single-level confirmatory factor analysis (CFA). Second, the study also provided adequate support for a multilevel second-order two-factor model that distinguished engagement from disaffection. Third, the factor structure was invariant across the student level and the classroom level for both the 4-factor model and the second-order two-factor model. Fourth, the results highlighted the presence of ambiguity in differentiating between the dimensions at the classroom level and supported the unidimensionality of the classroom-level construct. Fifth, student engagement was significantly and positively correlated with mathematics test scores, teachers' classroom-management practices, teacher support, and student order in the classroom. Finally, the authors discuss the implications of the study and its limitations and offer suggestions for model selection and explorations for future research.

17.
When modeling latent variables at multiple levels, it is important to consider the meaning of the latent variables at the different levels. If a higher-level common factor represents the aggregated version of a lower-level factor, the associated factor loadings will be equal across levels. However, many researchers do not consider cross-level invariance constraints in their research. Not applying these constraints when in fact they are appropriate leads to overparameterized models, and associated convergence and estimation problems. This simulation study used a two-level mediation model on common factors to show that when factor loadings are equal in the population, not applying cross-level invariance constraints leads to more estimation problems and smaller true positive rates. Some directions for future research on cross-level invariance in MLSEM are discussed.

18.
This study explored the validity of the Utrecht Work Engagement Scale in a sample of 853 practicing teachers from Australia, Canada, China (Hong Kong), Indonesia, and Oman. The authors used multigroup confirmatory factor analysis to test the factor structure and measurement invariance across settings, after which they examined the relationships between work engagement, workplace well-being (job satisfaction and quitting intention), and contextual variables (socioeconomic status, experience, and gender). The 1-factor version of the Utrecht Work Engagement Scale was deemed preferable to the 3-factor version and showed acceptable fit to the cross-national data. The 1-factor Utrecht Work Engagement Scale showed good internal consistency and similar relationships with workplace well-being and contextual variables across settings. The Utrecht Work Engagement Scale was invariant within broadly construed Western and non-Western groups but not across Western and non-Western groups. The authors concluded that the Utrecht Work Engagement Scale needs further development before its use can be supported in further cross-cultural research.

19.
Information fit indexes such as the Akaike Information Criterion, the Consistent Akaike Information Criterion, the Bayesian Information Criterion, and the expected cross-validation index can be valuable in assessing the relative fit of structural equation models that differ in restrictiveness. When models without mean restrictions (i.e., with a saturated mean structure) are compared to models with restricted (i.e., modeled) means, one should take account of the presence of the means, even if a model is saturated with respect to the means. Failure to do so can result in an incorrect rank order of models in terms of the information fit indexes. We demonstrate this point with an analysis of measurement invariance in a multigroup confirmatory factor model.
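The practical implication is that, when information criteria will be compared, the less restricted model should also be estimated with a mean structure so that both models are fitted to the same set of sample statistics. A lavaan sketch with built-in data (the multigroup model and data are stand-ins for those in the article):

library(lavaan)

model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
'

# Metric model: loadings equal, means left saturated, but the mean structure
# is explicitly included so that AIC/BIC use the same sample statistics
fit_metric <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  group.equal = "loadings", meanstructure = TRUE)

# Scalar model: loadings and intercepts equal (means are now modeled)
fit_scalar <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  group.equal = c("loadings", "intercepts"))

# Because both fits include the mean structure, their information criteria
# can be rank-ordered meaningfully
fitMeasures(fit_metric, c("aic", "bic"))
fitMeasures(fit_scalar, c("aic", "bic"))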

20.
Measurement invariance with respect to groups is an essential aspect of the fair use of scores of intelligence tests and other psychological measurements. It is widely believed that equal factor loadings are sufficient to establish measurement invariance in confirmatory factor analysis. Here, it is shown why establishing measurement invariance with confirmatory factor analysis requires a statistical test of the equality over groups of measurement intercepts. Without this essential test, measurement bias may be overlooked. A re-analysis of a study by Te Nijenhuis, Tolboom, Resing, and Bleichrodt (2004) on ethnic differences on the RAKIT IQ test illustrates that ignoring intercept differences may lead to the conclusion that bias of IQ tests with respect to minorities is small, while in reality bias is quite severe.
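The essential test argued for here, equality of measurement intercepts over groups in addition to equal loadings, corresponds to the step from metric to scalar invariance. A lavaan sketch with built-in data as a stand-in for the RAKIT analyses (model, items, and groups are assumptions):

library(lavaan)

model <- '
  verbal =~ x4 + x5 + x6
  speed  =~ x7 + x8 + x9
'

# Equal loadings only (metric invariance): often mistaken for sufficient
fit_loadings <- cfa(model, data = HolzingerSwineford1939, group = "school",
                    group.equal = "loadings")

# Equal loadings AND measurement intercepts (scalar invariance): the
# additional test without which intercept bias is overlooked
fit_intercepts <- cfa(model, data = HolzingerSwineford1939, group = "school",
                      group.equal = c("loadings", "intercepts"))

# Omnibus test of the intercept constraints
lavTestLRT(fit_loadings, fit_intercepts)

# Univariate score tests point to the specific intercepts driving any misfit
lavTestScore(fit_intercepts)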
