Similar Literature
20 similar documents found (search time: 31 ms)
1.
The aim of this study was to present a method for developing a path analytic network model using data acquired from positron emission tomography. Regions of interest within the human brain were identified through quantitative activation likelihood estimation meta-analysis. Using this information, a “true” or population path model was then developed using Bayesian structural equation modeling. To evaluate the impact of sample size on parameter estimation bias, proportion of parameter replication coverage, and statistical power, a 2 (group: clinical/control) × 6 (sample size: N = 10, 15, 20, 25, 50, 100) Markov chain Monte Carlo study was conducted. Results indicate that using a sample size of less than N = 15 per group will produce parameter estimates exhibiting bias greater than 5% and statistical power below .80.
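The 5% bias criterion used in this abstract can be made concrete with a short sketch. The replication estimates below are hypothetical, not values from the study:

```python
def relative_bias(estimates, true_value):
    """Percent bias of the mean estimate relative to the population value."""
    mean_est = sum(estimates) / len(estimates)
    return 100.0 * (mean_est - true_value) / true_value

# Hypothetical replication estimates of a path coefficient whose
# population value is 0.30
estimates = [0.33, 0.29, 0.35, 0.31, 0.34]
print(f"relative bias: {relative_bias(estimates, 0.30):+.1f}%")
```

Under the study's criterion, a coefficient with relative bias above 5% would be flagged as inadequately estimated at that sample size.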

2.
To infer longitudinal relationships among latent factors, traditional analyses assume that the measurement model is invariant across measurement occasions. As an alternative to placing cross-occasion equality constraints on parameters, approximate measurement invariance (MI) can be analyzed by specifying informative priors on parameter differences between occasions. This study evaluated the estimation of structural coefficients in multiple-indicator autoregressive cross-lagged models under various conditions of approximate MI using Bayesian structural equation modeling. Design factors included factor structures, conditions of non-invariance, sizes of structural coefficients, and sample sizes. Models were analyzed using two sets of small-variance priors on select model parameters. Results showed that autoregressive coefficient estimates were more accurate for the mixed pattern than for the decreasing pattern of non-invariance. When a model included cross-loadings, an interaction was found between the cross-lagged estimates and the non-invariance conditions. Implications of the findings and future research directions are discussed.

3.
Conventionally, moderated mediation analysis is conducted by adding relevant interaction terms to a mediation model of interest. In this study, we illustrate how to conduct moderated mediation analysis by directly modeling the relation between the indirect effect components (the a and b paths) and the moderators, to permit easier specification and interpretation of moderated mediation. With this idea, we introduce a general moderated mediation model that can be used to model many different moderated mediation scenarios, including the scenarios described in Preacher, Rucker, and Hayes (2007). We then discuss how to estimate and test the conditional indirect effects and how to test whether a mediation effect is moderated using Bayesian approaches. How to implement the estimation in both BUGS and Mplus is also discussed. Performance of Bayesian methods is evaluated and compared, via a simulation study, to that of frequentist methods including maximum likelihood (ML) with 1st-order and 2nd-order delta method standard errors and ML with bootstrap (percentile or bias-corrected) confidence intervals. The results show that Bayesian methods with diffuse (vague) priors implemented in both BUGS and Mplus yielded unbiased estimates, higher power than the ML methods with delta method standard errors and the ML method with bootstrap percentile confidence intervals, and power comparable to the ML method with bootstrap bias-corrected confidence intervals. We also illustrate the application of these methods with the real data example used in Preacher et al. (2007). Advantages and limitations of applying Bayesian methods to moderated mediation analysis are also discussed.
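The core idea of modeling the a and b paths as functions of a moderator can be sketched in a few lines. All coefficients below are hypothetical illustrations, not estimates from Preacher et al. (2007) or this article:

```python
# Hypothetical coefficients for illustration:
#   mediator model:  M = a0 + (a1 + a3*W)*X + a2*W + e_M
#   outcome model:   Y = b0 + b1*M + b2*X + e_Y
a1, a3 = 0.40, 0.25   # effect of X on M, and its moderation by W
b1 = 0.50             # effect of M on Y

def conditional_indirect(w):
    """Indirect effect of X on Y through M at moderator value w."""
    return (a1 + a3 * w) * b1

for w in (-1.0, 0.0, 1.0):
    print(f"W = {w:+.1f}: indirect effect = {conditional_indirect(w):.3f}")
```

Testing whether the indirect effect is moderated then amounts to testing whether the product a3*b1 differs from zero, which is where the Bayesian and bootstrap interval methods compared in the abstract come in.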

4.
As Bayesian methods continue to grow in accessibility and popularity, more empirical studies are turning to Bayesian methods to model small sample data. Bayesian methods do not rely on asymptotics, a property that can be a hindrance when employing frequentist methods in small sample contexts. Although Bayesian methods are better equipped to model data with small sample sizes, estimates are highly sensitive to the specification of the prior distribution. If this aspect is not heeded, Bayesian estimates can actually be worse than frequentist methods, especially if frequentist small sample corrections are utilized. We show with illustrative simulations and applied examples that relying on software defaults or diffuse priors with small samples can yield more biased estimates than frequentist methods. We discuss conditions that need to be met if researchers want to responsibly harness the advantages that Bayesian methods offer for small sample problems, as well as leading small sample frequentist methods.

5.
Dynamic structural equation modeling (DSEM) is a novel, intensive longitudinal data (ILD) analysis framework. DSEM models intraindividual changes over time on Level 1 and allows the parameters of these processes to vary across individuals on Level 2 using random effects. DSEM merges time series, structural equation, multilevel, and time-varying effects models. Despite the well-known properties of these analysis areas by themselves, it is unclear how their sample size requirements and recommendations transfer to the DSEM framework. This article presents the results of a simulation study that examines the estimation quality of univariate 2-level autoregressive models of order 1, AR(1), using Bayesian analysis in Mplus Version 8. Three features are varied in the simulations: complexity of the model, number of subjects, and number of time points per subject. Samples with many subjects and few time points are shown to perform substantially better than samples with few subjects and many time points.
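The data structure behind this simulation, many subjects each contributing an AR(1) series with a person-specific autoregressive coefficient, can be sketched in plain Python. This uses a naive lag-1 least-squares estimator rather than the Bayesian estimator in Mplus, and all population values (mean phi of 0.4, SD of 0.1) are illustrative assumptions:

```python
import random
import statistics

random.seed(1)

def simulate_ar1(phi, n_time, sigma=1.0):
    """One subject's AR(1) series: y_t = phi * y_{t-1} + e_t."""
    y = [random.gauss(0, sigma)]
    for _ in range(n_time - 1):
        y.append(phi * y[-1] + random.gauss(0, sigma))
    return y

def estimate_phi(y):
    """Naive lag-1 least-squares estimate of the AR coefficient."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(v * v for v in y[:-1])
    return num / den

# Level 2: subject-specific phi values drawn around a population mean of 0.4
phis = [estimate_phi(simulate_ar1(random.gauss(0.4, 0.1), n_time=200))
        for _ in range(50)]
print(f"mean estimated phi across subjects: {statistics.mean(phis):.2f}")
```

The trade-off the article studies is visible in this setup: the per-subject estimate improves with the number of time points, while the Level 2 distribution of phi is pinned down by the number of subjects.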

6.
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary T and N by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time series analysis (T large and N = 1) and conventional SEM (N large and T = 1 or small) by integrating both approaches. The resulting combined model offers a variety of new modeling options including a direct test of the ergodicity hypothesis, according to which the factorial structure of an individual observed at many time points is identical to the factorial structure of a group of individuals observed at a single point in time. Third, we illustrate the flexibility of SEM time series modeling by extending the approach to account for complex error structures. We end with a discussion of current limitations and future applications of SEM-based time series modeling for arbitrary T and N.

7.
This simulation study examined the performance of the curve-of-factors model (COFM) when autocorrelation and growth processes were present in the first-level factor structure. In addition to the standard curve-of-factors growth model, 2 new models were examined: one COFM that included a first-order autoregressive autocorrelation parameter, and a second model that included first-order autoregressive and moving average autocorrelation parameters. The results indicated that the estimates of the overall trend in the data were accurate regardless of model specification across most conditions. Variance components estimates were biased across many conditions but improved as sample size and series length increased. In general, the two models that incorporated autocorrelation parameters performed well when sample size and series length were large. The COFM had the best overall performance.

8.
Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different priors on covariate balance are evaluated and the differences between frequentist and Bayesian covariate balance are discussed. Results of the case study reveal that both the Bayesian and frequentist propensity score approaches achieve good covariate balance. The frequentist propensity score approach performs slightly better on covariate balance for stratification and weighting methods, whereas the two-step Bayesian approach offers slightly better covariate balance in the optimal full matching method. Results of a comprehensive simulation study reveal that accuracy and precision of prior information on propensity score model parameters do not greatly influence balance performance. Results of the simulation study also show that overall, the optimal full matching method provides the best covariate balance and treatment effect estimates compared to the stratification and weighting methods. A unique feature of covariate balance within Bayesian propensity score analysis is that we can obtain a distribution of balance indices in addition to the point estimates, so that the variation in balance indices can be naturally captured to assist in covariate balance checking.

9.
Multilevel structural equation models are most often estimated within a frequentist framework via maximum likelihood. However, as shown in this article, frequentist results are not always accurate. Alternatively, one can apply a Bayesian approach using Markov chain Monte Carlo estimation methods. This simulation study compared estimation quality using Bayesian and frequentist approaches in the context of a multilevel latent covariate model. Continuous and dichotomous variables were examined because it is not yet known how different types of outcomes—most notably categorical—affect parameter recovery in this modeling context. Within the Bayesian estimation framework, the impact of diffuse, weakly informative, and informative prior distributions was compared. Findings indicated that Bayesian estimation may be used to overcome convergence problems and improve parameter estimate bias. Results highlight the differences in estimation quality between dichotomous and continuous variable models and the importance of prior distribution choice for cluster-level random effects.

10.
Research in regularization, as applied to structural equation modeling (SEM), remains in its infancy. Specifically, very little work has compared regularization approaches across both frequentist and Bayesian estimation. The purpose of this study was to address just that, demonstrating both similarity and distinction across estimation frameworks, while specifically highlighting more recent developments in Bayesian regularization. This is accomplished through the use of two empirical examples that demonstrate both ridge and lasso approaches across both frequentist and Bayesian estimation, along with detail regarding software implementation. We conclude with a discussion of future research, advocating for increased evaluation and synthesis across both Bayesian and frequentist frameworks.

11.
Biclustering is a method of grouping objects and attributes simultaneously in order to find multiple hidden patterns. When dealing with a long time series, meaningful clusters spanning the whole sequence are unlikely to be found; however, more significant clusters containing partial subsequences may be found by applying a biclustering method. This paper proposes a new biclustering algorithm for time series data following an autoregressive moving average (ARMA) model. We assumed the plaid model but modified the algorithm to incorporate the sequential nature of time series data. The maximum likelihood estimation (MLE) method was used to estimate the ARMA coefficients in each bicluster. We applied the proposed method to several synthetic data sets generated from different ARMA orders. Results from the experiments showed that the proposed method compares favorably with other biclustering methods for time series data.

12.
In this article, we discuss the benefits of Bayesian statistics and how to utilize them in studies of moral education. To demonstrate concrete examples of the applications of Bayesian statistics to studies of moral education, we reanalyzed two previously collected data sets: one small data set collected from a moral educational intervention experiment, and one big data set from a large-scale Defining Issues Test-2 (DIT-2) survey. The results suggest that Bayesian analysis of data sets collected from moral educational studies can provide additional useful statistical information, particularly that associated with the strength of evidence supporting alternative hypotheses, which has not been provided by the classical frequentist approach focusing on p-values. Finally, we introduce several practical guidelines pertaining to how to utilize Bayesian statistics, including the utilization of newly developed free statistical software, Jeffreys's Amazing Statistics Program (JASP), and thresholding based on Bayes factors (BF), to scholars in the field of moral education.
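The Bayes factor thresholding mentioned here can be illustrated with the common BIC approximation. This is an assumption of the sketch, not necessarily the computation JASP performs internally, and the BIC values are made up:

```python
import math

def bf10_from_bic(bic_null, bic_alt):
    """Approximate Bayes factor BF10 from a BIC difference:
    BF10 ~ exp((BIC_null - BIC_alt) / 2)."""
    return math.exp((bic_null - bic_alt) / 2.0)

# Hypothetical BIC values for a null and an alternative model
bf10 = bf10_from_bic(bic_null=1510.0, bic_alt=1500.0)
print(f"BF10 ~ {bf10:.1f}")
```

Unlike a p-value, the resulting BF10 quantifies evidence in either direction: values well above 1 favor the alternative, and values well below 1 favor the null.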

13.
This study investigates gender differences in basic numerical skills that are predictive of math achievement. Previous research in this area is inconsistent and has relied upon traditional hypothesis testing, which does not allow for assertive conclusions to be made regarding nonsignificant findings. This study is the first to compare male and female performance (N = 1,391; ages 6–13) on many basic numerical tasks using both Bayesian and frequentist analyses. The results provide strong evidence of gender similarities on the majority of basic numerical tasks measured, suggesting that a male advantage in foundational numerical skills is the exception rather than the rule.

14.
This article examines Bayesian model averaging as a means of addressing predictive performance in Bayesian structural equation models. The current approach to addressing the problem of model uncertainty lies in the method of Bayesian model averaging. We expand the work of Madigan and his colleagues by considering a structural equation model as a special case of a directed acyclic graph. We then provide an algorithm that searches the model space for submodels and obtains a weighted average of the submodels using posterior model probabilities as weights. Our simulation study provides a frequentist evaluation of our Bayesian model averaging approach and indicates that when the true model is known, Bayesian model averaging does not necessarily yield better predictive performance compared to nonaveraged models. However, our case study using data from an international large-scale assessment reveals that the model-averaged submodels provide better posterior predictive performance compared to the initially specified model.
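The weighted-average step described above can be sketched directly. The submodel predictions and posterior model probabilities below are made up for illustration:

```python
def model_average(predictions, posterior_probs):
    """Bayesian model average of submodel predictions, weighted by
    posterior model probabilities (assumed to sum to 1)."""
    return sum(p * w for p, w in zip(predictions, posterior_probs))

# Hypothetical predictions from three submodels and their posterior probabilities
avg = model_average([1.8, 2.1, 2.4], [0.5, 0.3, 0.2])
print(f"model-averaged prediction: {avg:.2f}")
```

The algorithm in the article does the hard part, searching the model space and computing the posterior model probabilities; once those weights exist, averaging is this one line.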

15.
Traditional studies on integrated statistical process control and engineering process control (SPC-EPC) are based on linear autoregressive integrated moving average (ARIMA) time series models to describe the dynamic noise of the system. However, linear models are sometimes unable to model complex nonlinear autocorrelation. To solve this problem, this paper presents an integrated SPC-EPC method based on a smooth transition autoregressive (STAR) time series model, and builds a minimum mean squared error (MMSE) controller as well as an integrated SPC-EPC control system. The performance of this method for detecting trends and sustained shifts is analyzed. The simulation results indicate that this integrated SPC-EPC control method based on the STAR model is effective in controlling complex nonlinear systems.
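What distinguishes a STAR model from a linear AR model is a smooth logistic weight that blends two autoregressive regimes. A minimal sketch, with illustrative values for the smoothness parameter gamma and threshold c:

```python
import math

def star_weight(z, gamma=2.0, c=0.0):
    """Logistic transition function of a STAR model:
    G(z) = 1 / (1 + exp(-gamma * (z - c)))."""
    return 1.0 / (1.0 + math.exp(-gamma * (z - c)))

# The model blends two AR(1) regimes with weight G(z_t):
#   y_t = (1 - G(z_t)) * phi1 * y_{t-1} + G(z_t) * phi2 * y_{t-1} + e_t
# At the threshold c the two regimes are weighted equally.
print(f"G(c) = {star_weight(0.0):.2f}")
```

As gamma grows, G(z) approaches a hard step and the STAR model approaches a threshold (regime-switching) AR model; small gamma gives the smooth nonlinear autocorrelation the abstract refers to.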

16.
The aprioristic (classical, naïve and symmetric) and frequentist interpretations of probability are commonly known. Bayesian or subjective interpretation of probability is receiving increasing attention. This paper describes an activity to help students differentiate between the three types of probability interpretations.

17.
Bayesian estimation of noninformative variance and covariance components in nonlinear models
Starting from no prior information, a Bayesian approach is used to obtain estimates of the variance and covariance components (including the correlation coefficient) in nonlinear models. A worked example shows that the estimated variance and covariance components deviate little from the theoretical value of ρ = -0.5, indicating that the method is feasible when no prior information is available.

18.
Despite its importance to structural equation modeling, model evaluation remains underdeveloped in the Bayesian SEM framework. Posterior predictive p-values (PPP) and deviance information criteria (DIC) are now available in popular software for Bayesian model evaluation, but they remain underutilized. This is largely due to the lack of recommendations for their use. To address this problem, PPP and DIC were evaluated in a series of Monte Carlo simulation studies. The results show that both PPP and DIC are influenced by severity of model misspecification, sample size, model size, and choice of prior. The cutoffs PPP < 0.10 and ΔDIC > 7 work best in the conditions and models tested here to maintain low false detection rates and misspecified model selection rates, respectively. The recommendations provided in this study will help researchers evaluate their models in a Bayesian SEM analysis and set the stage for future development and evaluation of Bayesian SEM fit indices.
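The recommended cutoffs translate into a simple decision rule. This sketch just encodes the thresholds reported in the abstract; the fit values passed in are hypothetical:

```python
def flag_model(ppp, delta_dic, ppp_cut=0.10, dic_cut=7.0):
    """Apply the cutoffs reported above: PPP < .10 flags misfit, and a DIC
    difference greater than 7 favors the lower-DIC model."""
    return {"ppp_flags_misfit": ppp < ppp_cut,
            "dic_prefers_smaller": delta_dic > dic_cut}

# Hypothetical fit results for one candidate model
print(flag_model(ppp=0.04, delta_dic=12.3))
```

Note that the abstract qualifies these cutoffs as working best "in the conditions and models tested here," so they are starting points rather than universal thresholds.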

19.
Results of the TONI, WISC-R, and WRAT were compared for a sample of 66 learning disabled children: 51 males (32 white, 19 black) and 15 females (9 white, 6 black) whose mean age was 9–5 (SD = 1–10). The mean score of the TONI was significantly different from the Performance IQ. Nonsignificant differences were found between the TONI and Full Scale IQ and between the TONI and Verbal IQ. Correlation coefficients between the TONI and WISC-R ranged from a low of .35 for the Verbal IQ to .44 for both the Full Scale and Performance IQs. The correlation coefficients between the TONI and standard scores of the WRAT were .38, .27, and .23, for Reading, Spelling, and Arithmetic, respectively. Implications of these findings are discussed.

20.
McNeil NM. Child Development, 2008, 79(5): 1524–1537
Do typical arithmetic problems hinder learning of mathematical equivalence? Second and third graders (7–9 years old; N = 80) received lessons on mathematical equivalence either with or without typical arithmetic problems (e.g., 15 + 13 = 28 vs. 28 = 28, respectively). Children then solved math equivalence problems (e.g., 3 + 9 + 5 = 6 + __), switched lesson conditions, and solved math equivalence problems again. Correct solutions were less common following instruction with typical arithmetic problems. In a supplemental experiment, fifth graders (10–11 years old; N = 19) gave fewer correct solutions after a brief intervention on mathematical equivalence that included typical arithmetic problems. Results suggest that learning is hindered when lessons activate inappropriate existing knowledge.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号