Similar Documents
20 similar documents found (search time: 390 ms)
1.
For the linear statistical model y = Xb + e, with X of full column rank, estimators of b of the form (C + X′X)⁺X′y are studied, where C commutes with X′X and Q⁺ denotes the Moore–Penrose inverse of Q. Such estimators may have smaller mean square error, component by component, than the least squares estimator. It is shown that this class of estimators is equivalent to two apparently different classes considered by other authors. It is also shown that there is no C such that (C + X′X)⁺X′y = My, where My has the smallest mean square error component by component. Two criteria other than total mean square error (tmse) are suggested for selecting C; each leads to an estimator independent of the unknown b and σ². Subsequently, comparisons are made between estimators whose C matrices are functions of a parameter k. Finally, it is shown for the no-intercept model that standardizing, using a biased estimate for the transformed parameter vector, and retransforming to the original units yields an estimator with larger tmse than the least squares estimator.
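The trade-off above can be checked numerically. The Python sketch below (mine, not the paper's; the design matrix, true coefficients, and the choice C = kI are illustrative assumptions) simulates the ridge-type estimator (C + X′X)⁺X′y against least squares and compares their componentwise mean square errors:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 50, 3
X = rng.standard_normal((n, p))   # fixed design, full column rank
b = np.array([0.5, -0.3, 0.2])    # true coefficients (assumed, small vs noise)
sigma = 2.0                       # noise standard deviation
k = 5.0                           # C = k * I, one simple commuting choice
reps = 2000

XtX = X.T @ X
ols_err = np.zeros((reps, p))
ridge_err = np.zeros((reps, p))
for r in range(reps):
    y = X @ b + sigma * rng.standard_normal(n)
    b_ols = np.linalg.solve(XtX, X.T @ y)
    # (C + X'X)^+ X'y; with full column rank the pseudoinverse is the inverse
    b_ridge = np.linalg.solve(k * np.eye(p) + XtX, X.T @ y)
    ols_err[r] = b_ols - b
    ridge_err[r] = b_ridge - b

mse_ols = (ols_err ** 2).mean(axis=0)     # componentwise MSE
mse_ridge = (ridge_err ** 2).mean(axis=0)
```

For this configuration the shrinkage estimator trades a little bias for a larger variance reduction, so its total MSE comes out below that of least squares.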

2.
The norm of practice in estimating graph properties is to use uniform random node (RN) samples whenever possible. Many graphs are large and scale-free, inducing large degree variance and hence large estimator variance. This paper shows that random edge (RE) sampling and the corresponding harmonic mean estimator for the average degree can reduce the estimation variance significantly. First, we demonstrate that the degree variance, and consequently the variance of the RN estimator, can grow almost linearly with data size for typical scale-free graphs. Then we prove that the variance of the RE estimator is bounded from above. Therefore, the variance ratio between RN and RE sampling can be very large for big data. The analytical result is supported by both simulation studies and 18 real networks. We observe that the variance reduction ratio can be more than a hundred for some real networks such as Twitter. Furthermore, we show that random walk (RW) sampling is always worse than RE sampling, and that it can reduce the variance of the RN method only when its performance is close to that of RE sampling.
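The RN/RE contrast is easy to reproduce on synthetic data. The sketch below (my own, with an assumed heavy-tailed degree sequence standing in for a scale-free graph) compares the plain sample mean over random nodes with the harmonic-mean estimator over random-edge endpoints, whose key property is that an edge endpoint has degree d with probability proportional to d:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic heavy-tailed degree sequence (infinite-variance tail)
n = 100_000
degrees = (rng.pareto(1.5, n) + 1).astype(int)
true_avg = degrees.mean()

sample_size, trials = 500, 400
p_edge = degrees / degrees.sum()   # endpoint law of a uniform random edge

rn_est = np.empty(trials)
re_est = np.empty(trials)
for t in range(trials):
    rn = rng.choice(degrees, size=sample_size)            # uniform random nodes
    re = rng.choice(degrees, size=sample_size, p=p_edge)  # random-edge endpoints
    rn_est[t] = rn.mean()                       # RN: plain mean
    re_est[t] = sample_size / (1.0 / re).sum()  # RE: harmonic mean
```

Because 1/d is bounded, the harmonic-mean estimator's variance stays bounded while the RN estimator inherits the full degree variance.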

3.
The computational complexity of numerically simulating a fractional chaotic system and its synchronization control is O(N²), compared with O(N) for an integer-order chaotic system, where N is the number of time steps. In this paper, we propose optimization methods for solving fractional chaotic systems, including the equal-weight memory principle, an improved equal-weight memory principle, chaotic combination, and a fractional chaotic precomputing operator. Numerical examples show that the combination of these algorithms can simulate fractional chaotic systems and synchronize the fractional master and slave systems accurately. The presented algorithms for simulation and synchronization of fractional chaotic systems are up to 1.82 and 1.75 times faster than the original implementation, respectively.
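The O(N²) cost comes from the full-history convolution in fractional-derivative discretizations. The Python sketch below (illustrative only; it uses a plain truncated Grünwald–Letnikov sum, not the paper's equal-weight schemes) shows the memory-truncation idea and checks it on a function with a known fractional derivative:

```python
import math
import numpy as np

def gl_weights(alpha, n):
    # Grünwald-Letnikov coefficients c_k = (-1)^k C(alpha, k), built recursively
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def frac_derivative(f_vals, alpha, h, memory=None):
    # GL approximation of D^alpha f at the last grid point.
    # memory=None -> full history (the source of O(N^2) total work);
    # memory=L    -> short-memory truncation to the last L terms.
    n = len(f_vals) - 1
    L = n if memory is None else min(memory, n)
    c = gl_weights(alpha, L)
    # c[k] multiplies f(t_{n-k}); reverse the trailing slice to align indices
    return np.dot(c, f_vals[n - L : n + 1][::-1]) / h ** alpha

# D^0.5 of f(t) = t is t^0.5 / Gamma(1.5); check at t = 1
alpha, h = 0.5, 1e-3
t = np.arange(0, 1 + h / 2, h)
f = t.copy()
exact = 1.0 / math.gamma(1.5)
full = frac_derivative(f, alpha, h)
short = frac_derivative(f, alpha, h, memory=800)
```

Truncating the memory replaces the O(n) sum per point with an O(L) one at the cost of a controlled truncation error, which is the basic lever the paper's optimizations pull.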

4.
Using a mapping between a Rouse dumbbell model and fine-grained Monte Carlo simulations, we have computed the relaxation time of λ-DNA in a high ionic strength buffer confined in a nanochannel. The relaxation time thus obtained agrees quantitatively with experimental data [Reisner et al., Phys. Rev. Lett. 94, 196101 (2005)] using only a single O(1) fitting parameter to account for the uncertainty in model parameters. In addition to validating our mapping, this agreement supports our previous estimates of the friction coefficient of DNA confined in a nanochannel [Tree et al., Phys. Rev. Lett. 108, 228105 (2012)], which have been difficult to validate due to the lack of direct experimental data. Furthermore, the model calculation shows that as the channel size passes below approximately 100 nm (or roughly the Kuhn length of DNA) there is a dramatic drop in the relaxation time. Inasmuch as the chain friction rises with decreasing channel size, the reduction in the relaxation time can be solely attributed to the sharp decline in the fluctuations of the chain extension. Practically, the low variance in the observed DNA extension in such small channels has important implications for genome mapping.

5.
In this work, we study the stability of H∞ state estimation for discrete-time stochastic genetic regulatory networks with leakage delays, distributed delays, Markovian jumping parameters and impulsive effects. The aim is to estimate the true concentrations of mRNAs and proteins by designing an H∞ estimator such that the estimation error dynamics is stochastically stable while a prescribed H∞ disturbance attenuation level is satisfied. To reduce the communication load, an event-triggered scheme is adopted in which the measured outputs are transmitted to the estimator only when a certain triggering condition is met. Further, sufficient conditions under which the estimation error system is stochastically stable and satisfies the H∞ attenuation constraint are derived by utilizing a Lyapunov–Krasovskii functional. The estimator gains are obtained in terms of linear matrix inequalities (LMIs); whenever these LMIs are feasible, the gains can be given explicitly. Finally, two numerical examples are presented to demonstrate the effectiveness of the obtained results.

6.
Croplands are the single largest anthropogenic source of nitrous oxide (N2O) globally, yet estimates of their emissions remain difficult to verify when using Tier 1 and Tier 3 methods of the Intergovernmental Panel on Climate Change (IPCC). Here, we re-evaluate global cropland N2O emissions for 1961–2014, using N-rate-dependent emission factors (EFs) upscaled from 1206 field observations at 180 globally distributed sites and high-resolution N inputs disaggregated from sub-national surveys covering 15,593 administrative units. Our results confirm the IPCC Tier 1 default EFs for upland crops in 1990–2014, but give a ∼15% lower EF for 1961–1989 and a ∼67% larger EF for paddy rice over the full period. The associated emissions (0.82 ± 0.34 Tg N yr⁻¹) are probably one-quarter lower than IPCC Tier 1 global inventories but close to Tier 3 estimates. The use of survey-based gridded N-input data contributes 58% of this emission reduction, the rest being explained by the use of observation-based non-linear EFs. We conclude that upscaling N2O emissions from site-level observations to global croplands provides a new benchmark for constraining IPCC Tier 1 and Tier 3 methods. The detailed spatial distribution of emission data is expected to inform advancement towards more realistic and effective mitigation pathways.

7.
Traditionally, recommender systems for the web deal with applications that have two dimensions, users and items. Based on access data relating these dimensions, a recommendation model can be built and used to identify a set of N items that will be of interest to a certain user. In this paper we propose a multidimensional approach, called DaVI (Dimensions as Virtual Items), that consists of inserting contextual and background information as new user–item pairs. The main advantage of this approach is that it can be applied in combination with several existing two-dimensional recommendation algorithms. To evaluate its effectiveness, we used the DaVI approach with two different top-N recommender algorithms, Item-based Collaborative Filtering and an Association-Rules-based algorithm, and ran an extensive set of experiments on three different real-world data sets. In addition, we have also compared our approach to the previously introduced combined reduction and weight post-filtering approaches. The empirical results strongly indicate that our approach enables the application of existing two-dimensional recommendation algorithms to multidimensional data, exploiting the useful information in these data to improve the predictive ability of top-N recommender systems.
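The core DaVI transformation is small enough to sketch. The Python fragment below (a minimal sketch of the idea, not the authors' implementation; the event format and naming are assumptions) folds each contextual dimension value into the user-item matrix as an extra "virtual item" pair, which any two-dimensional recommender can then consume unchanged:

```python
def davi_expand(events):
    """DaVI sketch: each event is (user, item, context_dict); every
    context value becomes an additional user-item pair ('virtual item')."""
    pairs = []
    for user, item, ctx in events:
        pairs.append((user, item))                  # the real item
        for dim, value in ctx.items():
            pairs.append((user, f"{dim}={value}"))  # virtual item
    return pairs

# hypothetical access log with contextual dimensions
events = [
    ("alice", "movie:matrix", {"day": "saturday", "device": "tv"}),
    ("bob", "movie:heat", {"day": "monday"}),
]
pairs = davi_expand(events)
```

The expanded pair list is an ordinary two-dimensional dataset, so the context contributes to item co-occurrence statistics without changing the downstream algorithm.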

8.
Moving object detection is one of the most challenging tasks in computer vision and many other fields, and it is the basis for high-level processing. Low-rank and sparse decomposition (LRSD) is widely used in moving object detection. Existing methods primarily address the LRSD problem by exploiting approximations of the rank function and sparsity constraints. Conventional methods usually use the nuclear norm as the approximation of the rank function. However, results show that the nuclear norm is not the best approximation of the rank function, since it minimizes all the singular values simultaneously. In this paper, we exploit a novel nonconvex surrogate function to approximate the rank function and propose a generalized formulation for nonconvex low-rank and sparse decomposition based on the generalized singular value thresholding (GSVT) operator. We then solve the proposed nonconvex problem via the alternating direction method of multipliers (ADMM) and analyze its convergence. Finally, we give numerical results to validate the proposed algorithm on both synthetic data and real-life image data; the results demonstrate that our model has superior performance. We also apply the proposed nonconvex model to moving object detection; the experimental results show that the proposed method is more effective than representative LRSD-based moving object detection algorithms.
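The contrast between nuclear-norm shrinkage and a nonconvex surrogate shows up directly in how the singular values are thresholded. The sketch below (my own illustration, not the paper's GSVT operator; the `hard_ish` rule is a toy surrogate) applies soft thresholding versus a non-uniform shrinkage to the singular values of a matrix:

```python
import numpy as np

def svt(M, tau, surrogate=None):
    """Singular value thresholding sketch. surrogate=None gives the
    proximal operator of the nuclear norm (soft-threshold every singular
    value by tau); a custom shrinkage function gives a GSVT-style
    generalized operator."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    if surrogate is None:
        s_new = np.maximum(s - tau, 0.0)   # convex: uniform shrinkage
    else:
        s_new = surrogate(s, tau)          # nonconvex: non-uniform shrinkage
    return U @ (s_new[:, None] * Vt)

def hard_ish(s, tau):
    # toy nonconvex rule: leave singular values above 2*tau untouched,
    # soft-threshold the rest (large components keep their energy)
    out = np.maximum(s - tau, 0.0)
    out[s > 2 * tau] = s[s > 2 * tau]
    return out
```

The point the abstract makes is visible here: uniform soft thresholding biases even the dominant singular values downward, while a nonconvex rule can shrink only the small, noise-like ones.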

9.
This paper provides the closed-form analytical solution to the problem of minimizing the material volume required to support a given set of bending loads with a given number of discrete structural members, subject to material yield constraints. The solution is expressed in terms of two variables: the aspect ratio, ρ⁻¹, and the complexity of the structure, q (the total number of members of the structure equals q(q+1)). The minimal material volume (normalized) is also given in closed form by a simple function of ρ and q, namely V = q(ρ^(−1/q) − ρ^(1/q)). The forces for this nonlinear problem are shown to satisfy a linear recursive equation from node to node of the structure. All member lengths are specified by a linear recursive equation, dependent only on the initial conditions involving a user-specified length of the structure. The final optimal design is a class 2 tensegrity structure. Our results recover the 1904 results of Michell in the special case when the selected complexity q approaches infinity. Providing the optimum in terms of a given complexity has the obvious advantage of relating the complexity q to other criteria, such as cost, fabrication issues, and control. If the structure is manufactured with perfect joints (no glue, welding material, etc.), the minimal-mass complexity is infinite. But in the presence of any joint mass, the optimal structural complexity is finite, and indeed quite small. Hence, only simple structures (low complexity q) are needed for practical design.
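The closed-form volume V = q(ρ^(−1/q) − ρ^(1/q)) is easy to evaluate numerically; as q grows it decreases toward the Michell limit 2·ln(1/ρ). A small check (mine; the value ρ = 0.5 is an arbitrary illustration):

```python
import math

def min_volume(rho, q):
    # normalized minimal material volume for complexity q and ratio rho
    return q * (rho ** (-1.0 / q) - rho ** (1.0 / q))

rho = 0.5
vols = [min_volume(rho, q) for q in range(1, 50)]
michell_limit = -2.0 * math.log(rho)   # q -> infinity recovers Michell (1904)
```

With perfect joints the volume keeps shrinking as q grows (hence infinite optimal complexity), but the gains flatten out quickly, which is why any joint mass pushes the optimum down to a small finite q.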

10.
In this paper, we focus on false data injection attacks (FDIAs) on state estimation and the corresponding countermeasures for data recovery in smart grids. Without information about the topology and parameters of the system, two data-driven attacks (DDAs) with noisy measurements are constructed that can escape detection by the residue-based bad data detection (BDD) in the state estimator. Moreover, in view of the limited energy of adversaries, the feasibility of the proposed DDAs is improved: they are sparser and lower-cost than those in existing work. In addition, a new algorithm for measurement data recovery is introduced, which converts the data recovery problem under DDAs into a low-rank approximation problem with corrupted and noisy measurements. In particular, an online low-rank approximation algorithm is employed to improve real-time performance. Finally, a 14-bus power system is used for the simulation experiments. The results show that the constructed DDAs are stealthy under BDD but can be eliminated by the proposed data recovery algorithms, which improves the resilience of the state estimator against such attacks.

11.
In this paper, we consider leader–follower decentralized optimal control for a hexarotor group with one leader and a large population of followers. Our hexarotor is modeled in the quaternion framework to avoid the singularity of the rotation matrix represented by Euler angles, and it has six degrees of freedom (6-DoF) due to its six tilted propellers, which allows its translation and attitude to be controlled simultaneously. In our problem setup, the leader hexarotor is coupled with the follower hexarotors through the followers' average behavior (mean field), and the followers are coupled with each other through their average behavior and the leader's arbitrary control. Using the mean field Stackelberg game framework, we obtain a set of decentralized optimal controls for the leader and N follower hexarotors when N is arbitrarily large, where each control is a function of its local information. We show that the corresponding decentralized optimal controls constitute an ϵ-Stackelberg equilibrium for the leader and N followers, where ϵ → 0 as N → ∞. Through simulations with two different operating scenarios, we show that the leader–follower hexarotors follow their desired position and attitude references, and that the followers are controlled by the leader while effectively tracking their approximated average behavior. Furthermore, we show the nonsingularity and 6-DoF control performance of the leader–follower hexarotor group due to the novel modeling technique for the hexarotor presented in the paper.

12.
The estimation of query model is an important task in language modeling (LM) approaches to information retrieval (IR). The ideal estimation is expected to be not only effective in terms of high mean retrieval performance over all queries, but also stable in terms of low variance of retrieval performance across different queries. In practice, however, improving effectiveness can sacrifice stability, and vice versa. In this paper, we propose to study this tradeoff from a new perspective, i.e., the bias–variance tradeoff, which is a fundamental theory in statistics. We formulate the notion of bias–variance regarding retrieval performance and estimation quality of query models. We then investigate several estimated query models, by analyzing when and why the bias–variance tradeoff will occur, and how the bias and variance can be reduced simultaneously. A series of experiments on four TREC collections have been conducted to systematically evaluate our bias–variance analysis. Our approach and results will potentially form an analysis framework and a novel evaluation strategy for query language modeling.
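The bias-variance tradeoff the paper invokes can be demonstrated with a minimal simulation (generic statistics, not the paper's query-model formulation; the shrinkage estimator and all constants are illustrative assumptions): a biased low-variance estimator of a mean can beat the unbiased one in MSE, and MSE decomposes exactly into bias² + variance.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 1.0, 2.0, 20, 20000
lam = 0.8   # shrink the sample mean toward zero: adds bias, cuts variance

plain = np.empty(reps)
shrunk = np.empty(reps)
for r in range(reps):
    x = rng.normal(mu, sigma, n)
    plain[r] = x.mean()          # unbiased estimator
    shrunk[r] = lam * x.mean()   # biased, lower-variance estimator

bias2 = (shrunk.mean() - mu) ** 2
var = shrunk.var()
mse_shrunk = ((shrunk - mu) ** 2).mean()
mse_plain = ((plain - mu) ** 2).mean()
```

Here the shrinkage factor is tuned so that the variance saving outweighs the squared bias, which is exactly the regime where a "worse" (biased) query-model estimate can retrieve more stably.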

13.
We demonstrated a simple method for the device design of a staggered herringbone micromixer (SHM) using numerical simulation. By correlating the simulated concentrations with channel length, we obtained a series of concentration-versus-channel-length profiles, and used the mixing completion length Lm as the only parameter to evaluate the effect of the device structure on mixing. Fluorescence quenching experiments were subsequently conducted to verify the optimized SHM structure for a specific application, and good agreement was found between the optimization and the experimental data. Since Lm is a straightforward, easily defined and calculated parameter for characterizing mixing performance, this method for designing micromixers is simple and effective for practical applications.

14.
Medical crowdfunding helps low-income patients raise money for medical treatment and has grown tremendously in recent years. The most appropriate messaging strategy for writing charitable appeals that attract donations remains unclear. This study fills this gap by drawing on Aristotle's three modes of persuasion to explore factors affecting willingness to donate to medical crowdfunding projects from three aspects: logos, pathos, and ethos. The study adopted a multi-method approach, with two laboratory experiments (N = 125 and N = 123) and a field study (N = 1645). Analysis of variance (ANOVA) in Study 1 showed that high information quality (F = 9.774, p = 0.002) and a gain frame (F = 8.620, p = 0.004) have positive effects on the trustworthiness of the project initiator (ethos), which in turn promotes potential donors' willingness to donate (β = 0.339, p = 0.001). Study 2 confirmed the findings about information quality from Study 1, and further showed no significant difference between gain-first and gain-last framing in either trustworthiness or willingness to donate (p > 0.05). In Study 3, information quality was further detailed into three sub-dimensions: text length, number of images, and number of health-related words. The results of ordinary least squares (OLS) regression with robust standard errors indicate that text length (β = 0.350, p < 0.001) and number of images (β = 0.048, p < 0.001) positively influence donation behavior, whereas the number of health-related words has a negative effect (β = −0.027, p < 0.01). This study provides theoretical insights into the role of medical crowdfunding charitable appeals by verifying the persuasion effects of rational, emotional, and credibility appeals. It also contributes to persuasion theory by highlighting the role of emotional appeals and identifying the mediating impact of credibility appeals in the context of medical crowdfunding, and it has important practical implications for guiding fundraisers in writing persuasive charity appeals that will attract the attention of potential donors.

15.
Starting from a number of ML estimators (typically unbiased) of practical interest, including the variance of a Gaussian distribution, the standard deviation of a Laplace distribution, the variance of a Rayleigh distribution, and a "spread parameter" of a Cauchy distribution, we design robust estimators according to a chosen balance between normalized performance and normalized robustness. We measure performance by inverted MSE and robustness with a differential-geometric approach.

16.
In a multimodal system, the growth in the number of possible modal paths makes state estimation difficult. Practical algorithms bound the complexity by merging estimates that are conditioned on different modal path fragments. Commonly, the weight given to these local estimates is inversely related to the normalized magnitude of the residuals generated by each local filter. This paper presents a novel dual-sensor estimator that uses a merging formula based on a different function of the residuals. Its performance is contrasted with that of an estimator using a single sensor and with another dual-sensor algorithm that requires fewer on-line calculations.

17.
The stochastic minimum-variance pseudo-unbiased reduced-rank estimator (stochastic MV-PURE estimator) has been developed to provide linear estimation with robustness against high noise levels, imperfections in model knowledge, and ill-conditioned systems. In this paper, we investigate the theoretical performance of the stochastic MV-PURE estimator under varying levels of additive noise. We prove that the mean square error (MSE) of this estimator in the low signal-to-noise-ratio (SNR) region is much smaller than that obtained with its full-rank version, the minimum-variance distortionless estimator, and that the gap widens as the noise level increases. These results shed light on the excellent performance of the stochastic MV-PURE estimator in highly noisy settings observed in simulations so far. Furthermore, we extend previous numerical simulations to show how the insight gained from the results of this paper can be used in practice.

18.
This paper presents a relevance model to rank the facts of a data warehouse that are described in a set of documents retrieved with an information retrieval (IR) query. The model is based on language modeling and relevance modeling techniques. We estimate the relevance of the facts by the probability of finding their dimension values and the query keywords in the documents that are relevant to the query. The model is the core of the so-called contextualized warehouse, a new kind of decision support system that combines structured data sources and document collections. The paper evaluates the relevance model with the Wall Street Journal (WSJ) TREC test subcollection and a self-constructed fact database.
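The language-modeling machinery underlying such relevance estimates can be sketched with a standard query-likelihood scorer (a generic LM ranking function with Dirichlet smoothing, not the paper's contextualized-warehouse model; the toy documents and the smoothing constant are assumptions):

```python
import math
from collections import Counter

def query_likelihood(query, doc, collection, mu=2000.0):
    """Score log P(query | doc) under a Dirichlet-smoothed unigram
    language model: smooth document term frequencies with collection
    statistics so unseen query terms do not zero out the score."""
    doc_tf = Counter(doc)
    col_tf = Counter(collection)
    dlen, clen = len(doc), len(collection)
    score = 0.0
    for w in query:
        p_col = col_tf[w] / clen
        p = (doc_tf[w] + mu * p_col) / (dlen + mu)
        score += math.log(p) if p > 0 else float("-inf")
    return score

doc1 = ["stock", "market", "report"] * 5
doc2 = ["weather", "sunny", "forecast"] * 5
collection = doc1 + doc2
s1 = query_likelihood(["stock", "market"], doc1, collection)
s2 = query_likelihood(["stock", "market"], doc2, collection)
```

Documents (or, in the paper's setting, documents describing a fact's dimension values) that generate the query terms with higher probability rank higher.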

19.
Research Policy, 2022, 51(8): 104170
Knowledge creation is widely considered the central driver of innovation and, accordingly, of creating competitive advantage. However, most measurement approaches have so far focused mainly on the quantitative dimension of knowledge creation, neglecting that not all knowledge has the same value (Balland and Rigby, 2017). The notion of knowledge complexity has recently come into use in this context as an attempt to measure the quality of knowledge in terms of its uniqueness and replicability. The central underlying assumption is that more complex knowledge is more difficult to replicate, and therefore provides a higher competitive advantage for firms or, at an aggregated level, for regions and countries. The objective of this study is to advance and apply measures of regional knowledge complexity to a set of European regions, and to highlight their potential in a regional policy context. This is done by, first, characterising the spatial distribution of complex knowledge in Europe and its dynamics in recent years; second, establishing that knowledge complexity is associated with future regional economic growth; and third, illustrating the usefulness of the measures by means of some policy-relevant example applications. We proxy the production of complex knowledge with a regional knowledge complexity index (KCI) based on regional patent data of European metropolitan regions from current EU and EFTA member countries. The results are promising, as the regional KCI unveils knowledge creation patterns not observed by conventional measures. Moreover, regional complexity measures can easily be combined with relatedness metrics to support policy makers in a smart specialisation context.

20.
This paper aims to rejuvenate two rank correlation coefficients, Spearman's footrule (SF) and Gini's gamma (GG), which were long neglected in the literature owing to a lack of knowledge about their statistical properties. Under the common bivariate normal model, we establish asymptotic analytical expressions for the mean and variance of SF and GG, and investigate their performance in terms of bias, approximate variance and asymptotic relative efficiency (ARE). Moreover, we study the robustness of SF and GG under contaminated normal models. To gain a deeper understanding of their performance, we also compare SF and GG with Kendall's tau (KT) and Spearman's rho (SR), the most widely used rank correlation coefficients, in terms of bias and mean square error (MSE) under both normal and contaminated normal models. Finally, we show an application of SF and GG to signal processing through an example of time-delay estimation. Simulation results indicate that SF and GG outperform SR and KT in some cases. The new findings in this paper enable SF and GG to play roles complementary to KT and SR in practice.
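Both coefficients are simple functions of the two rank vectors. The sketch below (my own implementation of the textbook definitions for tie-free data, not the paper's code) computes SF and GG from raw samples:

```python
import numpy as np

def footrule(x, y):
    # Spearman's footrule: 1 - 3 * sum|p_i - q_i| / (n^2 - 1), ranks 1..n
    n = len(x)
    p = np.argsort(np.argsort(x)) + 1   # ranks (no ties assumed)
    q = np.argsort(np.argsort(y)) + 1
    return 1.0 - 3.0 * np.abs(p - q).sum() / (n * n - 1.0)

def gini_gamma(x, y):
    # Gini's gamma: (sum|p_i + q_i - n - 1| - sum|p_i - q_i|) / floor(n^2 / 2)
    n = len(x)
    p = np.argsort(np.argsort(x)) + 1
    q = np.argsort(np.argsort(y)) + 1
    num = np.abs(p + q - n - 1).sum() - np.abs(p - q).sum()
    return num / (n * n // 2)
```

On perfectly concordant data both statistics equal 1; on perfectly discordant data GG reaches −1, while SF bottoms out near −1/2, one of the asymmetries that motivates studying the two side by side.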
