Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
We discuss a weighted least-squares method for estimating the parameters of fuzzy linear regression models in which the inputs, outputs, and regression coefficients are all LR-type fuzzy numbers. The method assigns different weights to the observations according to the decision-maker's confidence in the training data, yielding a predictive model that effectively resists interference from outliers.
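The weighting idea in the item above (trusting some observations more than others) reduces, in the crisp non-fuzzy case, to classical weighted least squares. The sketch below is a minimal illustration with made-up data, not the article's LR-fuzzy-number estimator:

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve argmin_b sum_i w_i * (y_i - x_i @ b)**2 in closed form."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])   # intercept + slope design
y = np.array([0.0, 1.0, 2.0, 30.0])         # last observation is an outlier

b_equal = weighted_least_squares(X, y, np.ones(4))
b_down = weighted_least_squares(X, y, np.array([1.0, 1.0, 1.0, 0.001]))
# Downweighting the suspect point pulls the fitted slope back toward 1.
```

With equal weights the outlier drags the slope to 9.1; with the last weight near zero the fit essentially interpolates the three trusted points.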

2.
Two conventional scores and a weighted score on a group test of general intelligence were compared for reliability and predictive validity. One conventional score consisted of the number of correct answers an examinee gave in responding to 69 multiple-choice questions; the other was the formula score obtained by subtracting from the number of correct answers a fraction of the number of wrong answers. A weighted score was obtained by assigning weights to all the response alternatives of all the questions and adding the weights associated with the responses, both correct and incorrect, made by the examinee. The weights were derived from degree-of-correctness judgments of the set of response alternatives to each question. Reliability was estimated using a split-half procedure; predictive validity was estimated from the correlation between test scores and mean school achievement. Both conventional scores were found to be significantly less reliable but significantly more valid than the weighted scores. (The formula scores were neither significantly less reliable nor significantly more valid than number-correct scores.)
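The two conventional scores and the split-half step above are simple enough to state in code. A sketch with hypothetical toy responses (the original test had 69 items; the 5 answer alternatives here are an assumption for the guessing correction):

```python
def number_correct(responses, key):
    """Count answers that match the scoring key."""
    return sum(r == k for r, k in zip(responses, key))

def formula_score(n_correct, n_wrong, n_choices):
    """Correction-for-guessing formula score: R - W / (k - 1)."""
    return n_correct - n_wrong / (n_choices - 1)

def spearman_brown(r_half):
    """Step a half-test correlation up to full-test reliability."""
    return 2 * r_half / (1 + r_half)

nc = number_correct("ABBA", "ABCA")   # 3 of 4 toy items correct
fs = formula_score(40, 20, 5)         # 40 right, 20 wrong, 5 choices: 35.0
rel = spearman_brown(0.6)             # split-half r = .6 -> .75 full-test
```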

3.
Competence data from low‐stakes educational large‐scale assessment studies allow for evaluating relationships between competencies and other variables. The impact of item‐level nonresponse has not been investigated with regard to statistics that determine the size of these relationships (e.g., correlations, regression coefficients). Classical approaches such as ignoring missing values or treating them as incorrect are currently applied in many large‐scale studies, while recent model‐based approaches that can account for nonignorable nonresponse have been developed. Estimates of item and person parameters have been demonstrated to be biased for classical approaches when missing data are missing not at random (MNAR). In our study, we focus on parameter estimates of the structural model (i.e., the true regression coefficient when regressing competence on an explanatory variable), simulating data according to various missing data mechanisms. We found that model‐based approaches and ignoring missing values performed well in retrieving regression coefficients even when we induced missing data that were MNAR. Treating missing values as incorrect responses can lead to substantial bias. We demonstrate the validity of our approach empirically and discuss the relevance of our results.

4.
Under a given weighted regression model, we compare the performance of the least-squares estimator, the optimal weighted least-squares estimator, and the linear unbiased minimum-variance estimator. We show that when the covariance matrix of the random errors is invertible, an explicit expression can be derived for the difference between the error covariance matrices of the optimal weighted least-squares estimator and the linear unbiased minimum-variance estimator, and that under certain conditions the two estimators coincide.

5.
Ordinal variables are common in many empirical investigations in the social and behavioral sciences. Researchers often apply the maximum likelihood method to fit structural equation models to ordinal data. This assumes that the observed measures have normal distributions, which is not the case when the variables are ordinal. A better approach is to use polychoric correlations and fit the models using methods such as unweighted least squares (ULS), maximum likelihood (ML), weighted least squares (WLS), or diagonally weighted least squares (DWLS). In this simulation evaluation we study the behavior of these methods in combination with polychoric correlations when the models are misspecified. We also study the effect of model size and number of categories on the parameter estimates, their standard errors, and the common chi-square measures of fit when the models are both correct and misspecified. When used routinely, these methods give consistent parameter estimates but ULS, ML, and DWLS give incorrect standard errors. Correct standard errors can be obtained for these methods by robustification using an estimate of the asymptotic covariance matrix W of the polychoric correlations. When used in this way the methods are here called RULS, RML, and RDWLS.

6.
The purpose of this article is to examine the use of sample weights in the latent variable modeling context. A sample weight is the inverse of the probability that the unit in question was sampled and is used to obtain unbiased estimates of population parameters when units have unequal probabilities of inclusion in a sample. Although sample weights are discussed at length in survey research literature, virtually no discussion of sample weights can be found in the latent variable modeling literature. This article examines sample weights in latent variable models applied to the case where a simple random sample is drawn from a population containing a mixture of strata. A bootstrap simulation study is used to compare raw and normalized sample weights to conditions where weights are ignored. The results show that ignoring weights can lead to serious bias in latent variable model parameters and that this bias is mitigated by the incorporation of sample weights. Standard errors appear to be underestimated when sample weights are applied. Results on goodness‐of‐fit statistics demonstrate the advantages of utilizing sample weights.
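The core bias described above can be seen in a toy stratified example (synthetic numbers, not the article's bootstrap simulation): when one stratum is overrepresented in the sample, the unweighted estimate is pulled toward it, while inverse-probability weights N_h / n_h recover the population value.

```python
# Population: 900 stratum-A units of value 10, 100 stratum-B units of
# value 20, so the true population mean is 11. Sampling 50 per stratum
# oversamples B; each unit's weight is the inverse inclusion probability.
sample = [(10.0, 900 / 50)] * 50 + [(20.0, 100 / 50)] * 50  # (value, weight)

unweighted = sum(v for v, _ in sample) / len(sample)
weighted = sum(v * w for v, w in sample) / sum(w for _, w in sample)
# unweighted -> 15.0 (biased toward stratum B); weighted -> 11.0
```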

7.
Introducing item weights into traditional association rule mining extends it along the item-attribute dimension. This paper analyzes the influence of item weights on weighted association rule mining, surveys the existing algorithms for weighted association rules, and compares their strengths and weaknesses. Finally, directions for future research on weighted association rules are discussed.

8.
Oversampling and cluster sampling must be addressed when analyzing complex sample data. This study: (a) compares parameter estimates when applying weights versus not applying weights; (b) examines subset selection issues; (c) compares results when using standard statistical software (SPSS) versus specialized software (AM); and (d) offers recommendations for analyzing complex sample data. Underestimated standard errors and overestimated test statistics were produced when both the oversampled and cluster sample characteristics of the data were ignored. Regarding subset analysis, marked differences were not evident in SPSS results, but the standard errors of the weighted versus unweighted models became more similar as smaller subsets of the data were extracted using AM. Recommendations to researchers are provided including accommodating both oversampling and cluster sampling.

9.
This study used Monte Carlo methods to investigate the accuracy and utility of estimators of overall error and error due to approximation in structural equation models. The effects of sample size, indicator reliabilities, and degree of misspecification were examined. The rescaled noncentrality parameter (McDonald & Marsh, 1990) was examined as a measure of approximation error, whereas the one‐ and two‐sample cross‐validation indices and a sample estimator of overall error (EFo) proposed by Browne and Cudeck (1989, 1993) were presented as measures of overall error. The rescaled noncentrality parameter and EFo provided extremely accurate estimates of the amounts of approximation and overall error, respectively. However, although models with errors of omission produced larger estimates of approximation and overall error, the presence of errors of inclusion had little or no effect on estimates of either type of error. The cross‐validation indices and sample estimator of overall error reached minimum values for the same model as an empirically derived measure of overall error only for models with large amounts of specification error. Implications for the use of these estimators in choosing among competing models were discussed.

10.
For modeling process-control plants and processing the resulting data, current polytechnic textbooks still rely mainly on step-response experiments combined with graphical methods, and rarely introduce modern computer-based data processing and experimental techniques, because the algorithms involved are complex. This paper presents a modeling method based on least squares: a difference equation is derived from the plant's transfer function, and its parameters are then obtained by multiple linear regression using least squares. The program structure and simulation results are given.
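The least-squares identification step described above can be sketched for a first-order difference equation y[t] = a*y[t-1] + b*u[t-1]. The plant parameters, input signal, and model order below are assumptions for illustration, and the data are simulated noise-free:

```python
import numpy as np

# Simulate a first-order plant with known parameters.
a_true, b_true = 0.8, 0.5
u = np.sin(0.3 * np.arange(50))          # persistently exciting input
y = np.zeros(50)
for t in range(1, 50):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1]

# Cast the difference equation as a linear regression:
# each row of the regressor matrix is [y[t-1], u[t-1]], target is y[t].
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta                     # recovers (0.8, 0.5) exactly here
```

With measurement noise the estimates would only approach the true values as the record length grows; the noise-free case shows the mechanics.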

11.
Globally QoS-aware service composition can be formulated, through weighted aggregation, as a single-objective optimization problem, which requires a suitable method for estimating the weights. To this end, a weight synthesis method based on ideal weighted distance in a non-uniform space is proposed. Taking the user's global qualitative requirements into account, the method first generates subjective weights with the G1 method, then computes the intrinsic objective weights of the QoS attributes at each composition point using the entropy weight method, and finally defines a distance measure for the non-uniform space and combines it with the ideal-point approximation approach to synthesize the weights. Experiments built on the QWS data set illustrate the computation of the synthesized weights and evaluate the influence of the relevant parameters. The results show that the synthesized weights jointly reflect user preferences and the intrinsic characteristics of the QoS attributes.
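The entropy-weight step mentioned above can be sketched as follows. The toy QoS matrix is invented for illustration (it is not the QWS data set), and the G1 subjective weights and the non-uniform-space distance are omitted:

```python
import numpy as np

def entropy_weights(M):
    """Objective attribute weights: more dispersion across candidates
    means lower entropy and therefore a larger weight."""
    n = M.shape[0]
    P = M / M.sum(axis=0)                        # column-normalise to shares
    E = -(P * np.log(P)).sum(axis=0) / np.log(n) # normalised entropy per column
    d = 1.0 - E                                  # degree of divergence
    return d / d.sum()

# Toy candidate-by-attribute matrix: availability, response time (ms).
qos = np.array([[0.9, 100.0],
                [0.8,  20.0],
                [0.7, 300.0]])
w = entropy_weights(qos)
# Response time varies far more than availability, so it gets the
# larger objective weight.
```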

12.
Previous assessments of the reliability of test scores for testlet-composed tests have indicated that item-based estimation methods overestimate reliability. This study was designed to address issues related to the extent to which item-based estimation methods overestimate the reliability of test scores composed of testlets and to compare several estimation methods for different measurement models using simulation techniques. Three types of estimation approach were conceptualized for generalizability theory (GT) and item response theory (IRT): item score approach (ISA), testlet score approach (TSA), and item-nested-testlet approach (INTA). The magnitudes of overestimation when applying item-based methods ranged from 0.02 to 0.06 and were related to the degrees of dependence among within-testlet items. Reliability estimates from TSA were lower than those from INTA due to the loss of information with IRT approaches. However, this could not be applied in GT. Specified methods in IRT produced higher reliability estimates than those in GT using the same approach. Relatively smaller magnitudes of error in reliability estimates were observed for ISA and for methods in IRT. Thus, it seems reasonable to use TSA as well as INTA for both GT and IRT. However, if there is a relatively large dependence among within-testlet items, INTA should be considered for IRT due to nonnegligible loss of information.

13.
This study analyzed the magnitude of experimental intervention outcomes as a function of violations in internal and external validity for studies that included students with learning disabilities. The results indicated that treatment outcomes were significantly affected by the following violations: teacher effects, establishing criterion levels of instructional performance, reliance on experimental measures, using different measures between pretest and posttest, using a sample heterogeneous in age, and using incorrect units of analysis. Furthermore, the underreporting of information related to ethnicity, locale of the study, psychometric data, and teacher applications positively inflated the magnitude of treatment outcomes. A weighted hierarchical regression analysis revealed that composite scores of the aforementioned high-risk variables accounted for 16% of the total variance in effect size. The implications for interpreting intervention research to practice are discussed.

14.
This study compared diagonal weighted least squares robust estimation techniques available in 2 popular statistical programs: diagonal weighted least squares (DWLS; LISREL version 8.80) and weighted least squares-mean (WLSM) and weighted least squares-mean and variance adjusted (WLSMV; Mplus version 6.11). A 20-item confirmatory factor analysis was estimated using item-level ordered categorical data. Three different nonnormality conditions were applied to 2- to 7-category data with sample sizes of 200, 400, and 800. Convergence problems were seen with nonnormal data when DWLS was used with few categories. Both DWLS and WLSMV produced accurate parameter estimates; however, bias in standard errors of parameter estimates was extreme for select conditions when nonnormal data were present. The robust estimators generally reported acceptable model-data fit, unless few categories were used with nonnormal data at smaller sample sizes; WLSMV yielded better fit than WLSM for most indices.

15.
It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing for consistent estimation of the relevant regression parameters. In many instances, however, embedding the measurement model into structural equation models is not possible because the model would not be identified. To correct for measurement error one has no other recourse than to provide the exact values of the variances of the measurement error terms of the model, although in practice such variances cannot be ascertained exactly, but only estimated from an independent study. The usual approach so far has been to treat the estimated values of error variances as if they were known exact population values in the subsequent structural equation modeling (SEM) analysis. In this article we show that fixing measurement error variance estimates as if they were true values can make the reported standard errors of the structural parameters of the model smaller than they should be. Inferences about the parameters of interest will be incorrect if the estimated nature of the variances is not taken into account. For general SEM, we derive an explicit expression that provides the terms to be added to the standard errors provided by the standard SEM software that treats the estimated variances as exact population values. Interestingly, we find there is a differential impact of the corrections to be added to the standard errors depending on which parameter of the model is estimated. The theoretical results are illustrated with simulations and also with empirical data on a typical SEM model.

16.
The log-odds ratio (ln[OR]) is commonly used to quantify treatments' effects on dichotomous outcomes and then pooled across studies using inverse-variance (1/v) weights. Calculation of the ln[OR]'s variance requires four cell frequencies for two groups crossed with values for dichotomous outcomes. While primary studies report the total sample size (n..), many do not report all four frequencies. Using real data, we demonstrated pooling of ln[OR]s using n.. versus 1/v weights. In a simulation study we compared two weighting approaches under several conditions. Efficiency and Type I error rates for 1/v versus n.. weights used to pool ln[OR] estimates depended on sample size and the percent of studies missing cell frequencies. Results are discussed and guidelines for applied meta-analysts are provided.
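The 1/v weighting discussed above can be sketched with made-up 2x2 cell counts. This is plain fixed-effect pooling with no continuity correction, so all cells must be nonzero:

```python
import math

def log_or_and_var(a, b, c, d):
    """ln(OR) and its variance from a 2x2 table:
    a, b = events / non-events in group 1; c, d = same in group 2."""
    return math.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

# Two hypothetical studies, each as (a, b, c, d).
studies = [(10, 90, 20, 80), (30, 70, 45, 55)]
pairs = [log_or_and_var(*s) for s in studies]
weights = [1.0 / v for _, v in pairs]               # inverse-variance weights
pooled = sum(w * lor for w, (lor, _) in zip(weights, pairs)) / sum(weights)
# The larger study carries the larger weight, so the pooled ln(OR)
# lands closer to its estimate.
```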

17.
Over the past decade and a half, methodologists working with structural equation modeling (SEM) have developed approaches for accommodating multilevel data. These approaches are particularly helpful when modeling data that come from complex sampling designs. However, most data sets that are associated with complex sampling designs also include observation weights, and methods to incorporate these sampling weights into multilevel SEM analyses have not been addressed. This article investigates the use of different weighting techniques and finds, through a simulation study, that the use of an effective sample size weight provides unbiased estimates of key parameters and their sampling variances. Also, a popular normalization technique of scaling weights to reflect the actual sample size is shown to produce negatively biased sampling variance estimates, as well as negatively biased within-group variance parameter estimates in the small group size case.

18.
A least-squares mixed finite element method for stationary viscoelastic fluids
A least-squares mixed finite element method is constructed for stationary flows of viscoelastic fluids obeying an Oldroyd-B constitutive law. The stress is approximated by discontinuous piecewise polynomials of degree k (P_k) and the velocity by continuous piecewise polynomials of degree k+1 (P_{k+1}), with k > 0. Existence of a solution to the approximation problem is analyzed and error estimates for the approximate solution are derived.

19.
Combining the respective advantages of support vector machines and neural networks, a novel adaptive support vector regression neural network (SVR-NN) is proposed. First, support vector regression is used to determine the initial structure and initial weights of the SVR-NN, so that the hidden-layer nodes of the network are constructed adaptively from the support vectors. Then, a robust annealing-based learning algorithm is used to update the network's node parameters and weights. To verify the effectiveness of the proposed method, an example applying the adaptive SVR-NN to the identification of a nonlinear dynamic system is given. Simulation results show that, compared with earlier neural network methods, the SVR-NN identification scheme achieves very good performance and fast convergence; the adaptive SVR-NN thus offers an attractive new approach to nonlinear system identification.

20.
This paper provides theoretical foundation for the problem of localization in multi-robot formations. Sufficient and necessary conditions for completely localizing a formation of mobile robots/vehicles in SE(2) based on distributed sensor networks and graph rigidity are proposed. A method for estimating the quality of localizations via a linearized weighted least-squares algorithm is presented, which considers incomplete and noisy sensory information. The approach in this paper has been implemented in a multi-robot system of five car-like robots equipped with omni-directional cameras and an IEEE 802.11b wireless network.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号