Similar Documents
20 similar documents found.
1.
When conducting longitudinal research, the investigation of between-individual differences in patterns of within-individual change can provide important insights. In this article, we use simulation methods to investigate the performance of a model-based exploratory data mining technique, structural equation model trees (SEM trees; Brandmaier, von Oertzen, McArdle, & Lindenberger, 2013), as a tool for detecting population heterogeneity. We use a latent change score model as the data generation model and manipulate the precision of the information a covariate provides about the true latent profile, as well as other factors, including sample size, under possible model misspecification. Simulation results show that, compared with latent growth curve mixture models, SEM trees can be very sensitive to model misspecification when estimating the number of classes. This can be attributed to lower statistical power to identify classes, resulting from smaller between-class differences in the parameters prescribed by the template model.
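As a concrete illustration of this design, the sketch below simulates two latent classes from a simple proportional-change latent change score process and constructs a covariate whose precision about class membership can be tuned. This is a minimal sketch; the parameter values, the `noise_sd` knob, and the `simulate_lcs` helper are illustrative assumptions, not the article's actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical two-class latent change score data generator (values illustrative).
def simulate_lcs(n, beta, T=5, mu0=10.0, sd0=1.0, sd_e=0.5):
    # beta is the proportional-change (self-feedback) parameter that differs by class.
    eta = np.zeros((n, T))
    eta[:, 0] = rng.normal(mu0, sd0, n)
    for t in range(1, T):
        delta = beta * eta[:, t - 1]          # latent change depends on prior status
        eta[:, t] = eta[:, t - 1] + delta
    return eta + rng.normal(scale=sd_e, size=(n, T))   # add measurement error

n_per_class = 150
y = np.vstack([simulate_lcs(n_per_class, beta=0.05),    # class 1: slow growth
               simulate_lcs(n_per_class, beta=0.15)])   # class 2: faster growth
cls = np.repeat([0, 1], n_per_class)

# A covariate whose precision about true class membership can be manipulated:
noise_sd = 0.5   # smaller = more informative covariate
covariate = cls + rng.normal(scale=noise_sd, size=2 * n_per_class)
print("covariate-class correlation:", np.corrcoef(covariate, cls)[0, 1].round(2))
```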

2.
We propose a maximum likelihood framework for estimating finite mixtures of multivariate regression and simultaneous equation models with multiple endogenous variables. The proposed "semi-parametric" approach posits that the sample of endogenous observations arises from a finite mixture of components (or latent classes) of unknown proportions with multiple structural relations implied by the specified model for each latent class. We devise an Expectation-Maximization algorithm in a maximum likelihood framework to simultaneously estimate the class proportions, the class-specific structural parameters, and posterior probabilities of membership of each observation into each latent class. The appropriate number of classes can be chosen using various information-theoretic heuristics. A data set entailing cross-sectional observations for a diverse sample of businesses is used to illustrate the proposed approach.
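The E- and M-steps described here are standard for finite mixtures of regressions. Below is a minimal sketch for a single-equation special case (one endogenous variable, one regressor) rather than the full simultaneous-equation system; the data, starting values, and two-class setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two latent classes with different regression lines (hypothetical data).
n = 400
z = rng.integers(0, 2, n)                      # true class labels
x = rng.normal(size=n)
beta = np.array([[1.0, 2.0], [4.0, -1.0]])     # per-class intercept and slope
y = beta[z, 0] + beta[z, 1] * x + rng.normal(scale=0.5, size=n)

K = 2
pi = np.full(K, 1.0 / K)                       # class proportions
b = np.array([[0.0, 1.0], [1.0, 0.0]])         # initial coefficients
sigma = np.ones(K)

X = np.column_stack([np.ones(n), x])
for _ in range(200):
    # E-step: posterior membership probabilities per observation and class.
    mu = X @ b.T                               # n x K class-specific means
    dens = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: proportion updates plus weighted least squares per class.
    pi = resp.mean(axis=0)
    for k in range(K):
        w = resp[:, k]
        Xw = X * w[:, None]
        b[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        sigma[k] = np.sqrt(np.sum(w * (y - X @ b[k]) ** 2) / w.sum())

print("estimated proportions:", pi)
print("estimated coefficients:", b)
```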

3.
In practice, models always have some misfit, and it is not well known in which situations methods that provide point estimates, standard errors (SEs), or confidence intervals (CIs) of standardized structural equation modeling (SEM) parameters are trustworthy. In this article we carried out simulations to evaluate the empirical performance of currently available methods. We studied maximum likelihood point estimates, as well as SE estimators based on the delta method, the nonparametric bootstrap (NP-B), and the semiparametric bootstrap (SP-B). For CIs we studied the Wald CI based on the delta method, and percentile and BCa intervals based on NP-B and SP-B. We conducted simulation studies using both confirmatory factor analysis and SEM models. Depending on (a) whether the point estimate, SE, or CI is of interest; (b) the amount of model misfit; (c) sample size; and (d) model complexity, different methods render the best performance. Based on the simulation results, we discuss how to choose appropriate methods in practice.
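For intuition, the sketch below shows the nonparametric bootstrap workflow for a standardized parameter in the simplest possible setting, a single standardized regression slope, with a bootstrap SE and a percentile CI. It is a toy stand-in, assuming a one-predictor model instead of a full SEM; the BCa and semiparametric variants are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: one predictor, one outcome.
n = 200
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(scale=0.9, size=n)

def std_coef(x, y):
    # Standardized slope equals r(x, y) in the single-predictor case.
    return np.corrcoef(x, y)[0, 1]

B = 2000
boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, n)               # resample cases with replacement
    boot[i] = std_coef(x[idx], y[idx])

est = std_coef(x, y)
lo, hi = np.percentile(boot, [2.5, 97.5])     # percentile CI
se = boot.std(ddof=1)                         # bootstrap SE
print(f"estimate={est:.3f}, SE={se:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```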

4.
Little research has compared estimation methods within a measurement invariance (MI) framework or determined whether research conclusions based on normal-theory maximum likelihood (ML) generalize to the robust ML (MLR) and weighted least squares means and variance adjusted (WLSMV) estimators. Using ordered categorical data, this simulation study addressed these questions by investigating 342 conditions. When testing for metric and scalar invariance, Δχ2 results revealed that Type I error rates varied across estimators (ML, MLR, and WLSMV) with symmetric and asymmetric data. The power of the Δχ2 test varied substantially based on the estimator selected, the type of noninvariant indicator, the number of noninvariant indicators, and sample size. Although some of the changes in approximate fit indexes (ΔAFI) are relatively independent of sample size, researchers who use the ΔAFI with WLSMV should use caution, as these statistics do not perform well with misspecified models. As a supplemental analysis, we evaluate and suggest cutoff values based on previous research.
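A minimal sketch of the Δχ2 computation for nested invariance models follows. The fit statistics are made-up placeholder numbers, and this unscaled difference applies to normal-theory ML; for MLR, a scaled (e.g., Satorra-Bentler type) difference test would be needed instead.

```python
from scipy.stats import chi2

# Hypothetical fit statistics from nested invariance models:
# configural (fewer constraints) vs. metric (loadings constrained equal).
chi2_configural, df_configural = 54.2, 48
chi2_metric, df_metric = 63.1, 54

delta_chi2 = chi2_metric - chi2_configural
delta_df = df_metric - df_configural
p = chi2.sf(delta_chi2, delta_df)   # survival function = upper-tail p-value
print(f"Delta-chi2({delta_df}) = {delta_chi2:.2f}, p = {p:.3f}")
```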

5.
Dynamic structural equation modeling (DSEM) is a novel framework for analyzing intensive longitudinal data (ILD). DSEM models intraindividual change over time at Level 1 and allows the parameters of these processes to vary across individuals at Level 2 using random effects. DSEM merges time series, structural equation, multilevel, and time-varying effects models. Although the properties of each of these analysis traditions are well known on their own, it is unclear how their sample size requirements and recommendations transfer to the DSEM framework. This article presents the results of a simulation study that examines the estimation quality of univariate 2-level autoregressive models of order 1, AR(1), using Bayesian analysis in Mplus Version 8. Three features are varied in the simulations: complexity of the model, number of subjects, and number of time points per subject. Samples with many subjects and few time points are shown to perform substantially better than samples with few subjects and many time points.
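The sketch below generates data from the kind of univariate 2-level AR(1) model studied here: person-specific means and autoregressive coefficients vary at Level 2, and a centered AR(1) process operates at Level 1. All population values and the N/T choices are illustrative assumptions, and estimation (Bayesian, as in Mplus) is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical design: N subjects, T time points each.
N, T = 100, 50
gamma_mu, tau_mu = 0.0, 1.0      # mean and SD of person means
gamma_phi, tau_phi = 0.3, 0.1    # mean and SD of person AR(1) coefficients
sigma_e = 1.0                    # within-person innovation SD

mu_i = rng.normal(gamma_mu, tau_mu, N)        # random intercepts (Level 2)
phi_i = rng.normal(gamma_phi, tau_phi, N)     # random AR coefficients (Level 2)

y = np.zeros((N, T))
for i in range(N):
    y[i, 0] = mu_i[i] + rng.normal(scale=sigma_e)
    for t in range(1, T):
        # Level 1: AR(1) process centered around the person's own mean.
        y[i, t] = mu_i[i] + phi_i[i] * (y[i, t - 1] - mu_i[i]) + rng.normal(scale=sigma_e)

print("sample of person-level AR estimates:",
      np.round([np.corrcoef(y[i, :-1], y[i, 1:])[0, 1] for i in range(3)], 2))
```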

6.
7.
Missing data are a common problem in a variety of measurement settings, including responses to items on both cognitive and affective assessments. Researchers have shown that such missing data may create problems for the estimation of item difficulty parameters in the item response theory (IRT) context, particularly if they are ignored. At the same time, a number of data imputation methods have been developed outside of the IRT framework and shown to be effective tools for dealing with missing data. The current study takes several of these methods that have been found useful in other contexts and investigates their performance with IRT data that contain missing values. Through a simulation study, it is shown that these methods exhibit varying degrees of effectiveness in producing imputed data that, in turn, yield accurate sample estimates of item difficulty and discrimination parameters.
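To make the setup concrete, the sketch below generates Rasch-type responses, injects missingness, and compares two simple strategies on a crude difficulty proxy (minus the logit of the item proportion correct). The proxy, the MCAR missingness mechanism, and person-mean imputation are simplifying assumptions for illustration; they are not the study's estimators or its full set of imputation methods.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Rasch-type data: 1000 persons, 20 items.
P, I = 1000, 20
theta = rng.normal(size=P)
b_true = rng.normal(size=I)                         # item difficulties
prob = 1 / (1 + np.exp(-(theta[:, None] - b_true)))
resp = (rng.random((P, I)) < prob).astype(float)

# Inject 20% missingness completely at random (MCAR, for simplicity).
resp_miss = resp.copy()
resp_miss[rng.random((P, I)) < 0.2] = np.nan

def crude_difficulty(r):
    # Crude difficulty proxy: minus the logit of the item proportion correct.
    p = np.nanmean(r, axis=0)
    return -np.log(p / (1 - p))

# Strategy 1: ignore missing cells (available cases per item).
d_ignore = crude_difficulty(resp_miss)

# Strategy 2: person-mean imputation (one simple non-IRT imputation method).
pm = np.nanmean(resp_miss, axis=1, keepdims=True)
imputed = np.where(np.isnan(resp_miss), pm, resp_miss)
d_impute = crude_difficulty(imputed)

d_full = crude_difficulty(resp)                     # complete-data reference
print("RMSE ignoring:", np.sqrt(np.mean((d_ignore - d_full) ** 2)))
print("RMSE person-mean:", np.sqrt(np.mean((d_impute - d_full) ** 2)))
```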

8.
This article examines the problem of specification error in 2 models for categorical latent variables: the latent class model and the latent Markov model. Specification error in the latent class model focuses on the impact of incorrectly specifying the number of latent classes of the categorical latent variable on measures of model adequacy as well as on sample reallocation to latent classes. The results show that the clarity of the remaining latent classes, as measured by the entropy statistic, depends on the number of observations in the omitted latent class, but this statistic is not reliable. Specification error in the latent Markov model focuses on the transition probabilities when a longitudinal Guttman process is incorrectly specified. The findings show that specifying a longitudinal Guttman process that does not hold in the population affects other transition probabilities through the covariance matrix of the logit parameters used to calculate those probabilities.

9.
Mediation is one concept that has shaped numerous theories. The list of problems associated with mediation models, however, has been growing. Mediation models based on cross-sectional data can produce unexpected estimates, so much so that making longitudinal or causal inferences is inadvisable. Even longitudinal mediation models have faults, as parameter estimates produced by these models are specific to the lag between observations, leading to much debate over appropriate lag selection. Using continuous time models (CTMs) rather than commonly employed discrete time models, one can estimate lag-independent parameters. We demonstrate methodology that allows for continuous time mediation analyses, with attention to concepts such as indirect and direct effects, partial mediation, the effect of lag, and the lags at which relations become maximal. A simulation compares common longitudinal mediation methods with CTMs. Reanalysis of a published covariance matrix demonstrates that CTMs can be fit to data used in longitudinal mediation studies.
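The lag-dependence argument can be made concrete with a small sketch: given a continuous-time drift matrix A for states (X, M, Y), the discrete-time effect matrix at any lag Δt is the matrix exponential exp(AΔt), so effects can be traced over lags and the lag at which the X-to-Y relation is maximal can be located. The drift values below are illustrative assumptions, not estimates from the reanalyzed data.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical drift matrix A for states (X, M, Y); values are illustrative.
# Off-diagonals a_MX and a_YM carry the mediation pathway; a_YX is the direct path.
A = np.array([[-0.5,  0.0,  0.0],
              [ 0.4, -0.6,  0.0],
              [ 0.1,  0.3, -0.7]])

lags = np.linspace(0.1, 10, 200)
effects = np.array([expm(A * dt) for dt in lags])   # discrete-time effects at lag dt

total_xy = effects[:, 2, 0]     # lag-specific total effect of X on Y
print("lag with maximal X -> Y effect:", lags[np.argmax(total_xy)].round(2))
```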

10.
INTRODUCTION: Many genetic models based on the approach of ANOVA (analysis of variance) were developed by Fisher (1925). Some of these models, e.g., NC design I and II (Comstock et al., 1952; Hallauer et al., 1981) and diallel models (Yates, 1947; Griffing, 1956; Gardner and Eberhart, 1966), are still widely used by pla…

11.
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process, the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions jointly, taking into account that test takers as well as items can contribute to the two types of omissions. The proposed framework covers several existing models for missing responses, and many IRTree models can be estimated using standard statistical software. Further, simulated data are used to show that ignoring missing responses is less robust than is often assumed. Finally, as an illustration of its applicability, the IRTree approach is applied to data from the 2009 PISA reading assessment.

12.
In psychological research, available data are often insufficient to estimate item factor analysis (IFA) models using traditional estimation methods, such as maximum likelihood (ML) or limited-information estimators. Bayesian estimation with common-sense, moderately informative priors can greatly improve the efficiency of parameter estimates and stabilize estimation. There are a variety of methods available to evaluate model fit in a Bayesian framework; however, past work investigating Bayesian model fit assessment for IFA models has assumed flat priors, which have no advantage over ML in limited data settings. In this paper, we evaluated the impact of moderately informative priors on the ability to detect model misfit for several candidate indices: posterior predictive checks based on the observed score distribution, leave-one-out cross-validation, and the widely applicable information criterion (WAIC). We found that although Bayesian estimation with moderately informative priors is an excellent aid for estimating challenging IFA models, methods for testing model fit in these circumstances are inadequate.
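Of the candidate indices, WAIC is easy to state in code: it needs only a matrix of pointwise log-likelihood values over posterior draws. The sketch below computes the deviance-scale WAIC with a numerically stable lppd term; the log-likelihood draws are simulated placeholders standing in for output from a fitted Bayesian IFA model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder posterior log-likelihood draws: S posterior samples x n observations.
# In practice these would come from the fitted Bayesian IFA model.
S, n = 1000, 200
log_lik = rng.normal(loc=-1.0, scale=0.3, size=(S, n))

def waic(log_lik):
    # lppd: log pointwise predictive density, computed stably per observation.
    m = log_lik.max(axis=0)
    lppd = np.sum(m + np.log(np.mean(np.exp(log_lik - m), axis=0)))
    # Effective number of parameters: posterior variance of the log-likelihood.
    p_waic = np.sum(log_lik.var(axis=0, ddof=1))
    return -2 * (lppd - p_waic)

print("WAIC:", waic(log_lik))
```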

13.
The changes in levels of mathematics anxiety among future teachers in two different mathematics materials and methods classes were investigated. The changes were a function of using (a) Bruner's framework of developing conceptual knowledge before procedural knowledge, and (b) manipulatives to make mathematics concepts more concrete. The sample included 87 preservice teachers enrolled in mathematics methods courses. Two strategies were used to gather data at both the beginning and end of each quarter. First, future teachers completed a 98-item, Likert-type questionnaire. Second, some of the factors that influence the levels of mathematics anxiety were determined through the use of questionnaire-guided narrative interviews. Multivariate analysis of variance was employed as the quantitative measure for comparing mathematics anxiety at the beginning and end of the quarter. Data revealed a statistically significant reduction in mathematics anxiety levels (p < .05). Tukey's HSD was used to determine that a significant difference in mathematics anxiety levels occurred between the classes in the fall and winter quarters. Results of the study have implications for teacher education programs concerning the measurement of mathematics anxiety levels among future teachers and the determination of specific contexts in which that anxiety can be interpreted and reduced.

14.
This simulation study examined the performance of the curve-of-factors model (COFM) when autocorrelation and growth processes were present in the first-level factor structure. In addition to the standard curve-of-factors growth model, 2 new models were examined: one COFM that included a first-order autoregressive autocorrelation parameter, and a second model that included first-order autoregressive and moving average autocorrelation parameters. The results indicated that the estimates of the overall trend in the data were accurate regardless of model specification across most conditions. Variance component estimates were biased across many conditions but improved as sample size and series length increased. In general, the two models that incorporated autocorrelation parameters performed well when sample size and series length were large. The COFM had the best overall performance.

15.
Growth curve modeling provides a general framework for analyzing longitudinal data from social, behavioral, and educational sciences. Bayesian methods have been used to estimate growth curve models, in which priors need to be specified for unknown parameters. For the covariance parameter matrix, the inverse Wishart prior is most commonly used due to its proper and conjugate properties. However, many researchers have pointed out that the inverse Wishart prior might not work as expected. The purpose of this study is to investigate the influence of the inverse Wishart prior and compare it with a class of separation-strategy priors on the parameter estimates of growth curve models. In this article, we illustrate the use of different types of priors with 2 real data analyses, and then conduct simulation studies to evaluate and compare these priors in estimating both linear and nonlinear growth curve models. For the linear model, the simulation study shows that both the inverse Wishart and the separation-strategy priors work well for the fixed effects parameters. For the Level 1 residual variance estimate, the separation-strategy prior performs better than the inverse Wishart prior. For the covariance matrix, the results are mixed. Overall, the inverse Wishart prior is suggested if the population correlation coefficient and at least 1 of the 2 marginal variances are large. Otherwise, the separation-strategy prior is preferred. For the nonlinear growth curve model, the separation-strategy priors work better than the inverse Wishart prior.
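The contrast between the two prior families can be sketched directly. Below, 2x2 covariance matrices are drawn from an inverse Wishart prior and from one separation-strategy variant that samples standard deviations and a correlation separately and assembles Sigma = D R D. The hyperparameters and the particular marginal priors are illustrative assumptions, since separation-strategy details vary by author.

```python
import numpy as np
from scipy.stats import invwishart, invgamma, beta

rng = np.random.default_rng(5)

# Draw 2x2 covariance matrices from an inverse Wishart prior.
iw_draws = invwishart.rvs(df=4, scale=np.eye(2), size=1000, random_state=rng)

# Separation-strategy prior: sample SDs and the correlation separately,
# then assemble Sigma = D R D (one common variant; details vary by author).
sd = np.sqrt(invgamma.rvs(a=2, scale=1, size=(1000, 2), random_state=rng))
rho = 2 * beta.rvs(2, 2, size=1000, random_state=rng) - 1   # correlation on (-1, 1)
sep_draws = np.array([np.diag(s) @ np.array([[1, r], [r, 1]]) @ np.diag(s)
                      for s, r in zip(sd, rho)])

# Compare implied priors: the inverse Wishart ties the correlation to the variances,
# whereas the separation strategy controls each component independently.
iw_corr = iw_draws[:, 0, 1] / np.sqrt(iw_draws[:, 0, 0] * iw_draws[:, 1, 1])
print("IW implied corr quartiles:", np.percentile(iw_corr, [25, 50, 75]).round(2))
print("separation corr quartiles:", np.percentile(rho, [25, 50, 75]).round(2))
print("separation var quartiles:", np.percentile(sep_draws[:, 0, 0], [25, 50, 75]).round(2))
```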

16.
New approaches based on general mixed linear models are presented for analyzing complex quantitative traits in animal models, seed models, and QTL (quantitative trait locus) mapping models. Variances and covariances can be appropriately estimated by MINQUE (minimum norm quadratic unbiased estimation) approaches. Random genetic effects can be predicted without bias by LUP (linear unbiased prediction) or AUP (adjusted unbiased prediction) methods. Mixed-model based composite interval mapping (MCIM) methods are suitable for efficiently searching for QTLs along the whole genome. Bayesian methods and Markov chain Monte Carlo (MCMC) methods can be applied to analyzing parameters of random effects as well as their variances. (Projects supported by NSFC grants 39670390 and 39893350 and by NIH grant GM32518.)

17.
Factor mixture models are designed for the analysis of multivariate data obtained from a population consisting of distinct latent classes. A common factor model is assumed to hold within each of the latent classes. Factor mixture modeling involves obtaining estimates of the model parameters, and may also be used to assign subjects to their most likely latent class. This simulation study investigates aspects of model performance such as parameter coverage and correct class membership assignment and focuses on covariate effects, model size, and class-specific versus class-invariant parameters. When fitting true models, parameter coverage is good for most parameters even for the smallest class separation investigated in this study (0.5 SD between 2 classes). The same holds for convergence rates. Correct class assignment is unsatisfactory for the small class separation without covariates, but improves dramatically with increasing separation, covariate effects, or both. Model performance is not influenced by the differences in model size investigated here. Class-specific parameters may improve some aspects of model performance but negatively affect other aspects.

18.
We analyzed fifty years of inflation-adjusted data on the Annual Giving program of Princeton University. Most of the variation in both average size of gifts and percentage of class giving can be explained with simple models having three factors: reunion number, class identity, and fiscal year. Besides providing insights into factors influencing donations, these models provide a way to unmask features that are not evident in the raw data, such as trends in giving behavior and exceptional performances by particular classes in particular years.

19.
Popular longitudinal models allow for prediction of growth trajectories in alternative ways. In latent class growth models (LCGMs), person-level covariates predict membership in discrete latent classes that each holistically define an entire trajectory of change (e.g., a high-stable class vs. a late-onset class vs. a moderate-desisting class). In random coefficient growth models (RCGMs, also known as latent curve models), however, person-level covariates separately predict continuously distributed latent growth factors (e.g., an intercept vs. a slope factor). This article first explains how complex and nonlinear interactions between predictors and time are recovered in different ways via LCGM versus RCGM specifications. A simulation comparison then illustrates that, aside from some modest efficiency differences, such predictor relationships can be recovered approximately equally well by either model, regardless of which model generated the data. Our results also provide an empirical rationale for integrating findings about the prediction of individual change across LCGMs and RCGMs in practice.

20.
Statistical theories of goodness-of-fit tests in structural equation modeling are based on asymptotic distributions of test statistics. When the model includes a large number of variables or the population is not multivariate normal, the asymptotic distributions do not approximate the distribution of the test statistics well at small sample sizes. A variety of methods have been developed to improve the accuracy of hypothesis testing at small sample sizes, but all of these methods have limitations, especially for nonnormally distributed data. We propose a Monte Carlo test that controls the Type I error rate more accurately than existing approaches for both normal and nonnormally distributed data at small sample sizes. Extensive simulation studies show that the suggested Monte Carlo test has a more accurate observed significance level than other tests, with reasonable power to reject misspecified models.
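The general recipe of such a Monte Carlo test can be sketched as follows: fit the model under the null, simulate many datasets from the fitted model, recompute the fit statistic on each, and take the exceedance proportion (with the +1 correction) as the p-value. The sketch below uses a diagonal-covariance null and a likelihood-ratio discrepancy purely as an illustrative stand-in for the article's SEM fit statistic.

```python
import numpy as np

rng = np.random.default_rng(6)

def fit_statistic(data):
    # Likelihood-ratio discrepancy comparing the sample covariance with a
    # hypothesized diagonal covariance structure (an illustrative H0).
    n, p = data.shape
    S = np.cov(data, rowvar=False)
    Sigma0 = np.diag(np.diag(S))                 # model-implied covariance under H0
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdetS = np.linalg.slogdet(S)
    return n * (logdet0 - logdetS + np.trace(np.linalg.solve(Sigma0, S)) - p)

n, p = 40, 6
observed = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
T_obs = fit_statistic(observed)

# Parametric Monte Carlo: draw datasets from the model fitted under H0.
Sigma_hat = np.diag(np.diag(np.cov(observed, rowvar=False)))
T_null = np.array([fit_statistic(rng.multivariate_normal(np.zeros(p), Sigma_hat, size=n))
                   for _ in range(999)])

# Monte Carlo p-value with the +1 correction for exactness.
p_value = (1 + np.sum(T_null >= T_obs)) / (1 + len(T_null))
print(f"T = {T_obs:.2f}, Monte Carlo p = {p_value:.3f}")
```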
