Similar articles (20 results)
1.
Coherence plays a very important role in linear systems analysis since, in addition to quantifying the similarity between signals, it is related to other quantities of interest, such as the signal-to-noise ratio (SNR). The sampling distribution of coherence estimates between Gaussian signals is well established, and hence, in this particular case, the statistics of the SNR can readily be found when it is calculated from coherence estimates. However, in some applications one of the signals is periodic, leading to a different coherence sampling distribution, which has recently been investigated. This work aims at developing analytical expressions for the bias, variance and probability density function of coherence-based SNR estimates under this particular assumption. Routines for obtaining the latter, as well as critical values of the estimates, are also provided.
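A minimal sketch of the underlying relation on simulated data, not the paper's estimator: the magnitude-squared coherence between a periodic stimulus and a noisy response is converted to an SNR estimate via SNR = Cxy/(1 − Cxy); the sampling rate, stimulus frequency, gain and noise level are illustrative.

```python
# Coherence-based SNR estimate on simulated data (illustrative values only).
import numpy as np
from scipy.signal import coherence

fs, n = 1000.0, 200_000
t = np.arange(n) / fs
stimulus = np.sin(2 * np.pi * 50.0 * t)            # periodic input
response = 0.8 * stimulus + np.random.randn(n)      # noisy output

f, cxy = coherence(stimulus, response, fs=fs, nperseg=1024)
k = np.argmin(np.abs(f - 50.0))                      # bin nearest the stimulus frequency
snr_hat = cxy[k] / (1.0 - cxy[k])                    # SNR from magnitude-squared coherence
print(f"coherence at 50 Hz: {cxy[k]:.3f}, SNR estimate: {snr_hat:.3f}")
```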

2.
Gyro simulation is an important part of inertial navigation research, the major difficulty being stochastic error modeling. One commonly used stochastic model for a fiber optic gyro (FOG) is Gaussian white (GW) noise plus a first-order Markov process. The model parameters are usually obtained by applying time-series analysis methods or the Allan variance method to data from a static FOG experiment. In practice, however, a real FOG may not be available. In this paper, a simulation method is proposed for generating the stochastic errors of a FOG. With this method, the model parameters are set from performance indicators, chosen as the angle random walk (ARW) and the bias stability. The ARW and bias-stability indicators of the GW noise and of the first-order Markov process are analyzed separately, and their analytical expressions are derived to reveal the relation between the model parameters and the performance indicators. To verify the theory, a large number of simulations were carried out; the results show that the statistical performance indicators of the simulated signals are consistent with the theory. Furthermore, a simulation of a VG951 FOG is designed, and the Allan variance curve of the simulated signal agrees with the real one.
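A minimal sketch of the stated error model, Gaussian white noise plus a first-order (Gauss–)Markov process, together with a simple non-overlapping Allan deviation to check the simulated signal; the sample rate, noise levels and correlation time are illustrative placeholders, not VG951 specifications.

```python
# Simulate a gyro error signal (white noise + first-order Gauss-Markov) and
# compute a non-overlapping Allan deviation of the result.
import numpy as np
from scipy.signal import lfilter

fs, n = 100.0, 360_000           # sample rate [Hz], one hour of data
dt = 1.0 / fs
sigma_wn = 0.01                   # white-noise std per sample (drives the ARW)
sigma_gm, tau_gm = 0.005, 300.0   # Gauss-Markov std and correlation time [s]

rng = np.random.default_rng(0)
white = sigma_wn * rng.standard_normal(n)

phi = np.exp(-dt / tau_gm)
drive = sigma_gm * np.sqrt(1.0 - phi**2) * rng.standard_normal(n)
markov = lfilter([1.0], [1.0, -phi], drive)   # x[k] = phi*x[k-1] + drive[k]

gyro_error = white + markov

def allan_deviation(y, fs, m_list):
    """Non-overlapping Allan deviation of rate data y for cluster sizes m_list."""
    out = []
    for m in m_list:
        nb = len(y) // m
        means = y[: nb * m].reshape(nb, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)
        out.append((m / fs, np.sqrt(avar)))
    return out

for tau, adev in allan_deviation(gyro_error, fs, [1, 10, 100, 1000, 10000]):
    print(f"tau = {tau:8.2f} s  Allan deviation = {adev:.5f}")
```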

3.
Functional correlation between oscillatory neural and muscular signals during tremor can be revealed by coherence estimation. The coherence value in a defined frequency range reveals the interaction strength between the two signals; however, coherence estimation does not provide directional information, preventing further dissection of the relationship between the two interacting signals. We have therefore investigated causal correlations between the subthalamic nucleus (STN) and muscle in Parkinsonian tremor using adaptive Granger autoregressive (AR) modeling. During resting tremor we analyzed the inter-dependence of local field potentials (LFPs) recorded from the STN and surface electromyograms (EMGs) recorded from the contralateral forearm muscles, using an adaptive Granger causality based on AR modeling with a running window to reveal the time-dependent causal influences between the LFP and EMG signals, in comparison with coherence estimation. Our results showed that during persistent tremor there was a directional causality predominantly from EMGs to LFPs, corresponding to the significant coherence between LFPs and EMGs at the tremor frequency; over episodes of transient resting tremor, the inter-dependence between EMGs and LFPs was bi-directional and alternated over time. Further time–frequency analysis showed a significant suppression in the beta-band (10–30 Hz) power of the STN LFPs preceding the onset of resting tremor, which manifested as increased power at the tremor frequency (3.0–4.5 Hz) in both STN LFPs and surface EMGs. We conclude that the functional correlation between the STN and muscle is dynamic, bi-directional, and dependent on the tremor status. Granger causality and time–frequency analysis are effective for characterizing the dynamic correlation of transient or intermittent events between simultaneously recorded neural and muscular signals at the same and across different frequencies.
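A minimal sketch of the directional measure on synthetic stand-ins for the EMG and LFP signals, not the adaptive AR estimator used in the study: a windowed Granger index log(σ²_restricted/σ²_full) computed by ordinary least squares, which is larger in the direction that carries predictive information.

```python
# Windowed pairwise Granger index between two synthetic signals.
import numpy as np

def granger_index(x_to, x_from, order=5):
    """Log variance ratio: how much the past of x_from helps predict x_to."""
    n = len(x_to)
    rows = n - order
    Y = x_to[order:]
    lags_own = np.column_stack([x_to[order - k: n - k] for k in range(1, order + 1)])
    lags_oth = np.column_stack([x_from[order - k: n - k] for k in range(1, order + 1)])
    ones = np.ones((rows, 1))
    Xr = np.hstack([ones, lags_own])                 # restricted model (own past only)
    Xf = np.hstack([ones, lags_own, lags_oth])       # full model (own + other past)
    var_r = np.var(Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0])
    var_f = np.var(Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0])
    return np.log(var_r / var_f)

rng = np.random.default_rng(1)
emg = rng.standard_normal(5000)
lfp = 0.7 * np.roll(emg, 3) + rng.standard_normal(5000)   # lfp driven by delayed emg

win = 1000
for start in range(0, 5000 - win + 1, win):                # sliding (running) window
    e, l = emg[start:start + win], lfp[start:start + win]
    print(start, "EMG->LFP:", round(granger_index(l, e), 3),
          "LFP->EMG:", round(granger_index(e, l), 3))
```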

4.
The performance of a Wideband Code Division Multiple Access (W-CDMA) system using Pseudo Noise (PN) codes and chaotic codes in the presence of a Weibull fading channel is studied in this paper. The W-CDMA system, modeled using the Gaussian Approximation, is analyzed on a Weibull fading channel, which fades the amplitude of the transmitted signal randomly according to the Weibull distribution. Closed-form expressions for the Bit Error Rate (BER) are derived and expressed in terms of Meijer's G-function. Performance measures in terms of BER are plotted versus Signal-to-Noise Ratio (SNR) for various values of fading severity, average fading power, and channel memory using PN and chaotic codes. A performance comparison between PN codes and chaotic codes is also presented.
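A minimal Monte-Carlo sketch, not the closed-form Meijer's G-function expressions derived in the paper: BER of a BPSK-modulated link over a Weibull-faded channel, with the shape parameter controlling fading severity and the scale normalised so the average fading power is one. Spreading codes are not modelled; all values are illustrative.

```python
# Monte-Carlo BER of BPSK over a Weibull fading channel.
import numpy as np
from math import gamma

rng = np.random.default_rng(2)
n_bits, beta = 1_000_000, 2.5                    # Weibull shape (fading severity)
scale = 1.0 / np.sqrt(gamma(1.0 + 2.0 / beta))   # normalise so E[h^2] = 1

for snr_db in (0, 5, 10, 15):
    snr = 10.0 ** (snr_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    s = 2.0 * bits - 1.0                          # BPSK symbols +/-1
    h = scale * rng.weibull(beta, n_bits)         # Weibull fading amplitude
    noise = rng.standard_normal(n_bits) / np.sqrt(2.0 * snr)
    r = h * s + noise
    ber = np.mean((r > 0).astype(int) != bits)    # coherent threshold detection
    print(f"SNR = {snr_db:2d} dB  BER ~ {ber:.4f}")
```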

5.
In this paper we are concerned with the problems of (1) tracking or estimating the unknown, time-varying instantaneous frequency (IF) of a chirp signal within a multi-component signal (we assume the multi-component signal to be formed of additive chirp signals, disjoint in the time–frequency domain, and Gaussian noise) and (2) reconstructing a specific chirp signal based on the estimate of its IF found in (1). The algorithm we developed is based on a previously proposed method, adapted here to the case of multi-component signals. It combines an adaptive smoothing procedure with a noise-resistant Fourier filter to produce an algorithm with an extremely fine frequency resolution. The method is non-parametric; that is, we assume no prior knowledge about the form of the time-varying IF of the chirp or about the chirp itself. We demonstrate how the method works on simulated data and compare its performance to other currently used procedures.
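A minimal sketch of IF tracking on a simulated two-component signal, using a spectrogram ridge followed near the previous estimate; this is a crude stand-in, not the authors' adaptive-smoothing and noise-resistant Fourier-filter algorithm, and all signal parameters are illustrative.

```python
# Track the instantaneous frequency of one chirp in a noisy two-component signal
# by following the spectrogram ridge nearest the previous estimate.
import numpy as np
from scipy.signal import spectrogram

fs, n = 1000.0, 8000
t = np.arange(n) / fs
chirp1 = np.cos(2 * np.pi * (50 * t + 10 * t**2))   # IF = 50 + 20 t  [Hz]
chirp2 = np.cos(2 * np.pi * 300 * t)                 # disjoint second component
x = chirp1 + chirp2 + 0.5 * np.random.randn(n)

f, tt, S = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
track, prev = [], 50.0                               # rough initial IF guess
for col in range(S.shape[1]):
    idx = np.where(np.abs(f - prev) < 40.0)[0]       # search near previous estimate
    prev = f[idx[np.argmax(S[idx, col])]]
    track.append(prev)

print("estimated IF at start / end of record [Hz]:", track[0], track[-1])
```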

6.
The magnitude-squared coherence (MSC) is a measure that estimates the extent to which one real- or complex-valued signal can be predicted from another real- or complex-valued signal using a linear model. It is also used as a measure of the similarity in the frequency content of two signals. The measure is widely used in signal analysis, especially in fields such as biomedical engineering, where a large number of signals must be processed simultaneously. It is natural to wish to generalize this idea to compare two vector-valued signals, and a variety of approaches have been proposed. This paper reviews these generalizations, presents new relationships among the measures, and demonstrates a series of results that show the similarities and dissimilarities among them. Some of the measures have a clear link with total interdependence; some are related to the mutual information rate. Basic results such as the various Sandwich Theorems show how the measures relate, and understanding these properties is key to an informed use of any vector generalization of MSC.
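For reference, a minimal sketch of the scalar MSC that the reviewed vector-valued generalizations extend; the linear filter and noise level are illustrative.

```python
# Scalar magnitude-squared coherence between a signal and a filtered, noisy copy.
import numpy as np
from scipy.signal import coherence

fs, n = 256.0, 65536
rng = np.random.default_rng(3)
x = rng.standard_normal(n)
y = np.convolve(x, [0.5, 0.3, 0.2], mode="same") + 0.5 * rng.standard_normal(n)

f, msc = coherence(x, y, fs=fs, nperseg=512)   # MSC lies in [0, 1] at each frequency
print("peak MSC:", msc.max().round(3), "at", f[msc.argmax()].round(1), "Hz")
```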

7.
Estimation of improper linear regression models for complex-valued data (in English)   Cited: 1 (self-citations: 0, by others: 1)
A complex random variable is called "improper" if its pseudo-covariance matrix is non-zero; otherwise it is called "proper". This work studies linear regression models whose errors follow an independent and identically distributed improper complex Gaussian distribution. Maximum likelihood and two-stage least squares methods are used to estimate the regression coefficients. Simulations show that these two methods differ from the classical complex version of least squares, and the methods are applied to real wind-signal data.
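A minimal sketch of why impropriety matters in complex regression, assuming a simple widely-linear (augmented) least-squares fit rather than the paper's maximum-likelihood or two-stage estimators: when the errors are improper, an augmented fit on both x and conj(x) additionally estimates a conjugate-regressor coefficient (near zero here); accounting for the improper error structure is what the paper's estimators exploit. All values are illustrative.

```python
# Ordinary complex least squares vs a widely-linear (augmented) fit
# on a regression whose errors are improper.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b_true = 1.5 - 0.8j
# Improper noise: real and imaginary parts with unequal variances.
e = 1.0 * rng.standard_normal(n) + 1j * 0.2 * rng.standard_normal(n)
y = b_true * x + e

# Ordinary complex least squares.
X = x.reshape(-1, 1)
b_ls = np.linalg.lstsq(X, y, rcond=None)[0]

# Widely-linear least squares on the augmented regressor [x, conj(x)].
Xa = np.column_stack([x, np.conj(x)])
b_wl = np.linalg.lstsq(Xa, y, rcond=None)[0]

print("true:", b_true, " LS:", np.round(b_ls, 3), " widely-linear:", np.round(b_wl, 3))
```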

8.
This paper analyzes the problems of the second-order-statistics-based CSPRIT algorithm in spatially correlated Gaussian noise, and proposes combining fourth-order cumulants with the CSPRIT algorithm to process one-dimensional binary phase-shift keying (BPSK) and M-ary amplitude-shift keying (MASK) signals, achieving direction-of-arrival (DOA) estimation and beamformer construction. Compared with the second-order-statistics-based CSPRIT algorithm, the improved fourth-order-cumulant-based algorithm effectively suppresses spatially correlated Gaussian noise and improves signal estimation accuracy. Computer simulations verify the effectiveness of the algorithm.
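A minimal sketch of the property the improvement relies on, not the modified CSPRIT algorithm itself: the fourth-order cumulant of any Gaussian process is zero, so cumulant-domain statistics suppress Gaussian noise (including spatially correlated noise), while BPSK symbols retain a non-zero cumulant. Values are illustrative.

```python
# Fourth-order cumulant of BPSK symbols vs Gaussian noise.
import numpy as np

def cum4(x):
    """Fourth-order cumulant of a real zero-mean sequence: E[x^4] - 3 E[x^2]^2."""
    return np.mean(x**4) - 3.0 * np.mean(x**2) ** 2

rng = np.random.default_rng(10)
bpsk = 2.0 * rng.integers(0, 2, 100_000) - 1.0      # +/-1 BPSK symbols
gauss = rng.standard_normal(100_000)                # Gaussian noise

print("cum4(BPSK)       =", round(cum4(bpsk), 3))    # about -2
print("cum4(Gaussian)   =", round(cum4(gauss), 3))   # about 0
print("cum4(BPSK+noise) =", round(cum4(bpsk + gauss), 3))
```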

9.
This paper examines a real-time measure of bias in Web search engines. The measure captures the degree to which the distribution of URLs, retrieved in response to a query, deviates from an ideal or fair distribution for that query. This ideal is approximated by the distribution produced by a collection of search engines. Differences between bias and classical retrieval measures are highlighted by examining the possibilities for bias in four extreme cases of recall and precision. The results of experiments examining the influence of subject domain, search engine, and search terms on bias measurement are presented. Three general conclusions are drawn: (1) the performance of search engines can be distinguished with the aid of the bias measure; (2) bias values depend on the subject matter under consideration; (3) the choice of search terms does not account for much of the variance in bias values. These conclusions underscore the need to develop “bias profiles” for search engines.
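A minimal sketch of the idea, not the paper's exact formula: the "ideal" URL distribution for a query is approximated by pooling several engines' result lists, and one engine's bias is scored as the deviation of its own URL distribution from that pool. Engine names and URL lists below are hypothetical.

```python
# Score one engine's bias as the deviation of its URL distribution from the pooled one.
from collections import Counter

def url_distribution(results):
    counts = Counter(results)
    total = sum(counts.values())
    return {u: c / total for u, c in counts.items()}

def bias_score(engine_results, pooled_results):
    p_engine = url_distribution(engine_results)
    p_pool = url_distribution(pooled_results)
    urls = set(p_engine) | set(p_pool)
    # Half the L1 distance between the two distributions, in [0, 1].
    return 0.5 * sum(abs(p_engine.get(u, 0.0) - p_pool.get(u, 0.0)) for u in urls)

# Hypothetical result lists for one query from three engines.
engine_a = ["u1", "u2", "u3", "u4"]
engine_b = ["u1", "u2", "u5", "u6"]
engine_c = ["u1", "u7", "u8", "u2"]
pool = engine_a + engine_b + engine_c

print("bias(engine_a) =", round(bias_score(engine_a, pool), 3))
```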

10.
Several statistical sampling methods are evaluated for estimating the total number of relevant documents in a collection for a given query. The total number of relevant documents is needed in order to compute recall values for use in evaluating document retrieval systems. The simplest method considered uses simple random sampling to estimate the number of relevant documents. Another type of random sampling, which assigns unequal selection probabilities to the individual documents in the collection, is also investigated. An alternative approach considered uses curve fitting and extrapolation, where a smooth curve is developed which relates precision to document rank. Another curve relates a function of precision to the query-document score. In either case, the curve is extrapolated to the total number of documents in order to estimate the number of relevant documents. Empirical comparisons are made of all three methods.
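A minimal sketch of two of the estimators evaluated, on synthetic data: a simple-random-sampling estimate of the number of relevant documents, and a Horvitz–Thompson-style estimate under unequal selection probabilities (here taken proportional to a retrieval score, with the inclusion probabilities only approximated).

```python
# Estimate the total number of relevant documents by two sampling schemes.
import numpy as np

rng = np.random.default_rng(5)
N = 100_000
relevant = np.zeros(N, dtype=bool)
relevant[:500] = True                         # 500 relevant documents (ground truth)
score = np.where(relevant, rng.uniform(0.5, 1.0, N), rng.uniform(0.0, 0.6, N))

# Simple random sampling.
n = 2000
srs = rng.choice(N, size=n, replace=False)
est_srs = N * relevant[srs].mean()

# Unequal-probability sampling proportional to score; Horvitz-Thompson estimate
# with inclusion probabilities approximated as n * p_i.
pi = n * score / score.sum()
sample = rng.choice(N, size=n, replace=False, p=score / score.sum())
est_ht = np.sum(relevant[sample] / pi[sample])

print("true: 500  SRS estimate:", round(est_srs), " HT estimate:", round(est_ht))
```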

11.
《Research Policy》2019,48(9):103821
We investigate the influence of public R&D subsidies on a firm’s likelihood to form technological collaborations. Using signaling theory, we conceptualize the award of a subsidy as a pointing signal (i.e., indicating a quality attribute that distinguishes the signaler from its competitors), and the monetary amount raised through a subsidy as an activating signal (i.e., activating the quality attribute of the signaler). Drawing on the attention-based view, we investigate whether the relative salience of these signals varies between two types of signal receivers: academic and corporate partners. Using a panel sample of Spanish manufacturing firms, our results indicate that the two types of receivers attend to the two signals differently: while academic partners attend to pointing signals only (sent by the award of a selective subsidy), corporate partners react to the richer information that activating signals provide (sent by the monetary value of both selective and automatic subsidies). Our results are stronger for SMEs vis-à-vis large firms, and hold after controlling for endogeneity, selection bias, simultaneity, attrition, inter-temporal patterns in technological collaborations, and the substantive effects of subsidies. The theorized and tested dual nature of subsidy-enabled signals and their different salience to distinct partner types hold interesting implications for research on alliances, innovation policy, and signals.

12.
In information retrieval (IR), improving effectiveness often sacrifices the stability of an IR system. To evaluate stability, many risk-sensitive metrics have been proposed. Owing to theoretical limitations, however, current works study effectiveness and stability separately and have not explored the effectiveness–stability tradeoff. In this paper, we propose a Bias–Variance Tradeoff Evaluation (BV-Test) framework, based on the bias–variance decomposition of the mean squared error, to measure the overall performance (considering both effectiveness and stability) and the tradeoff between effectiveness and stability of a system. In this framework, we define generalized bias–variance metrics, based on the Cranfield-style experimental set-up where the document collection is fixed (across topics) or where the document collection is a sample (per-topic). Compared with risk-sensitive evaluation methods, our work not only measures the effectiveness–stability tradeoff of a system, but also effectively tracks the source of system instability. Experiments on the TREC Ad-hoc track (1993–1999) and Web track (2010–2014) show a clear effectiveness–stability tradeoff across topics and per-topic, and topic grouping and max–min normalization can effectively reduce the bias–variance tradeoff. Experimental results on the TREC Session track (2010–2012) also show that query reformulation and an increase of user data are beneficial to both effectiveness and stability simultaneously.
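As a toy illustration of the decomposition that underlies the framework, not the BV-Test metrics themselves: the mean squared gap between a system's per-topic score and an ideal score splits exactly into a squared-bias term plus a variance term; the scores below are synthetic.

```python
# Bias-variance decomposition of per-topic effectiveness error (synthetic scores).
import numpy as np

rng = np.random.default_rng(6)
ideal = np.full(50, 1.0)                                  # ideal per-topic effectiveness
system = np.clip(0.6 + 0.2 * rng.standard_normal(50), 0.0, 1.0)

err = system - ideal
mse = np.mean(err ** 2)
bias2 = np.mean(err) ** 2
var = np.var(err)
print(f"MSE = {mse:.4f} = bias^2 ({bias2:.4f}) + variance ({var:.4f})")
```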

13.
2000 vials of lyophilized QC material of two different levels (low and high) were donated by Roche Diagnostics GmbH through the IFCC and received by CMCH in June 2001. A total of 240 laboratories were enrolled in this 6-month pilot study. In addition to the 12 analytes in the liquid QC programme, six additional analytes were included: LDH, triglyceride, urate, total bilirubin, phosphate and amylase. It was also possible to measure sodium and potassium by ion selective electrode (ISE) methods in the QC for the first time. The performance of the laboratories for the existing 12 analytes using liquid stabilized QC was compared to the performance using lyophilized QC. Using a statistical comparison of the method-wise mean variance index score (MVIS) values, performance in five assays (glucose, albumin, cholesterol, SGOT and SGPT) was the same in liquid QC and lyophilized QC; three assays (urea, calcium and creatinine) were significantly better, and four assays (total protein, sodium, potassium and ALP) were significantly worse. However, the overall VIS (OMVIS) for the laboratories was the same, and the ranking pattern of this 6-month OMVIS was also unaltered. The lyophilized QC scheme highlighted a negative bias between flame and ISE methods for sodium and potassium, and a definite standardization problem in reporting LDH and amylase results, but the triglyceride, urate and total bilirubin assays performed well. It was concluded that the introduction of lyophilized QCs will not cause any deterioration in the performance of participating laboratories. The stability of the material seems to be good, and the laboratories are generally using a good reconstitution technique.

14.
The norm of practice in estimating graph properties is to use uniform random node (RN) samples whenever possible. Many graphs are large and scale-free, inducing large degree variance and estimator variance. This paper shows that random edge (RE) sampling and the corresponding harmonic mean estimator for average degree can reduce the estimation variance significantly. First, we demonstrate that the degree variance, and consequently the variance of the RN estimator, can grow almost linearly with data size for typical scale-free graphs. Then we prove that the RE estimator has a variance bounded from above. Therefore, the variance ratio between RN and RE sampling can be very large for big data. The analytical result is supported by both simulation studies and 18 real networks. We observe that the variance reduction ratio can be more than a hundred for some real networks such as Twitter. Furthermore, we show that random walk (RW) sampling is always worse than RE sampling, and that it can reduce the variance of the RN method only when its performance is close to that of RE sampling.
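A minimal sketch contrasting the two estimators, assuming networkx is available to generate a synthetic scale-free graph: a uniform random-node sample with a plain mean versus a random-edge sample with the harmonic-mean estimator of average degree. Over repeated runs, the RE estimate typically fluctuates far less on heavy-tailed graphs.

```python
# RN mean estimator vs RE harmonic-mean estimator of average degree.
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
G = nx.barabasi_albert_graph(50_000, 3, seed=7)          # synthetic scale-free graph
deg = np.array([d for _, d in G.degree()])               # nodes are labeled 0..n-1
edges = np.array(list(G.edges()))
true_avg = deg.mean()

k = 500
rn_sample = deg[rng.choice(len(deg), k)]                  # uniform random nodes
est_rn = rn_sample.mean()

picked = edges[rng.choice(len(edges), k)]                 # uniform random edges
side = rng.integers(0, 2, k)                              # random endpoint of each edge
re_deg = deg[picked[np.arange(k), side]]
est_re = 1.0 / np.mean(1.0 / re_deg)                      # harmonic mean of degrees

print(f"true avg degree: {true_avg:.3f}  RN: {est_rn:.3f}  RE: {est_re:.3f}")
```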

15.
Relationships between risetime, settling time, bandwidth and Q are presented for the class of TEM-mode delay line filters whose impulse response is a train of uniformly spaced but unequally weighted impulses. Conventional measures of bandwidth for these systems are undefined because the energy spectrum is a periodic function of frequency. It is shown, however, that with careful interpretation one may derive a useful inverse relationship between bandwidth and risetime via a suitably modified autocorrelation function; the relationship is a risetime–bandwidth product equal to a constant derived from an equivalent-area measure. It is also shown that a useful estimate of the settling time of a system can be obtained in terms of the moments of the impulse response, which weight heavily the contributions of the remote time-domain residues. Examples are presented to demonstrate the application of the theory.

16.
Aiming at early detection of faults in dynamic systems subject to external periodic disturbances, this paper proposes a new generalized proportional-integral observer (GPIO) fault detection scheme with joint zero-pole optimization and a novel complex coefficient gain (CCG) for residual evaluation. The focus of the proposed scheme is to reduce the adverse impact of semi-stationary periodic disturbances whose spectrum is uneven, with most of the energy concentrated at a few dominant frequencies. The proposed GPIO with a complex coefficient gain is designed in a two-stage procedure. In the first stage of zero assignment and pole optimization, the additional zeros introduced by the GPIO’s integration action are placed near the disturbance frequency, and the gain of the transfer function matrix from the disturbances to the fault indicator signals is minimized by pole optimization. In the second stage of designing the complex coefficient gain for residual evaluation, the rank deficiency caused by the additional zeros assigned in stage one is further exploited to cancel the disturbances in the fault indicator signals (also referred to as the fault detection residual in this article). It is proved that, for an arbitrary periodic disturbance with a specific spectrum, the remnant components of the disturbance in the indicator signals generated by the GPIO can cancel each other through a complex gain vector, which can be determined from the left eigenvector associated with the zero eigenvalue of the rank-deficient disturbance transfer function matrix. Sufficient conditions for the convergence of the proposed fault detection filter are also given. Numerical examples illustrate the proposed method’s better performance in detecting minor faults.

17.
Many methods exist for estimating the parameters of the Muskingum model, but they lack robustness against outliers in the data. A constrained robust parameter estimation algorithm is derived, and its robustness is compared with that of the traditional least squares algorithm using both synthetic data containing random and gross errors and real data. The study shows that the robust estimation algorithm reduces the influence of outliers on the parameter estimates.
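A minimal sketch of the comparison described, under the standard Muskingum routing equations but not the paper's constrained robust algorithm: the parameters K and x are fitted once with an ordinary least-squares loss and once with a Huber-type robust loss, so a gross outlier in the observed outflow has less influence; all hydrograph values are synthetic.

```python
# Fit Muskingum parameters (K, x) by ordinary and robust (Huber) least squares.
import numpy as np
from scipy.optimize import least_squares

dt = 1.0  # routing interval

def route(params, inflow, o0):
    """Standard Muskingum routing: O[t] = c0*I[t] + c1*I[t-1] + c2*O[t-1]."""
    K, x = params
    den = 2 * K * (1 - x) + dt
    c0 = (dt - 2 * K * x) / den
    c1 = (dt + 2 * K * x) / den
    c2 = (2 * K * (1 - x) - dt) / den
    out = np.empty_like(inflow)
    out[0] = o0
    for t in range(1, len(inflow)):
        out[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[t - 1]
    return out

rng = np.random.default_rng(8)
inflow = 50 + 40 * np.exp(-0.5 * ((np.arange(60) - 20) / 6.0) ** 2)  # flood wave
obs = route((12.0, 0.2), inflow, inflow[0]) + rng.normal(0, 0.5, 60)
obs[30] += 30.0                                                       # one gross outlier

resid = lambda p: route(p, inflow, inflow[0]) - obs
bounds = ([0.1, 0.0], [50.0, 0.5])
fit_ls = least_squares(resid, x0=[5.0, 0.1], bounds=bounds, loss="linear")
fit_rb = least_squares(resid, x0=[5.0, 0.1], bounds=bounds, loss="huber", f_scale=1.0)
print("least squares  K, x:", np.round(fit_ls.x, 3))
print("robust (Huber) K, x:", np.round(fit_rb.x, 3))
```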

18.
Based on Domingos's decomposition framework for expected prediction error, the bias–variance structure of three algorithms, MCLP, LDA and C5.0, is compared on three data sets. The experimental results show that, in general, C5.0 exhibits low bias and high variance, LDA shows the opposite pattern, and MCLP lies between the two, closer to LDA. When the training set is small, both the bias and the variance of MCLP are relatively high; as the training set grows, they decrease markedly and can even fall below those of the other two algorithms.
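A minimal sketch of a Domingos-style 0–1 loss bias–variance decomposition over resampled training sets, using scikit-learn's decision tree and LDA as illustrative stand-ins (C5.0 and MCLP themselves are not reproduced here); the data set is synthetic.

```python
# Domingos-style bias/variance estimate for two classifiers on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
X_tr, y_tr, X_te, y_te = X[:3000], y[:3000], X[3000:], y[3000:]

def bias_variance(make_model, n_rounds=50, train_size=300, seed=0):
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_rounds):                     # retrain on resampled training sets
        idx = rng.choice(len(X_tr), train_size, replace=False)
        preds.append(make_model().fit(X_tr[idx], y_tr[idx]).predict(X_te))
    preds = np.array(preds)
    main = (preds.mean(axis=0) >= 0.5).astype(int)   # main prediction (majority vote)
    bias = np.mean(main != y_te)                      # 0-1 loss of the main prediction
    variance = np.mean(preds != main)                 # mean deviation from main prediction
    return bias, variance

for name, mk in [("tree", lambda: DecisionTreeClassifier()),
                 ("LDA", lambda: LinearDiscriminantAnalysis())]:
    b, v = bias_variance(mk)
    print(f"{name:4s}  bias = {b:.3f}  variance = {v:.3f}")
```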

19.
In wireless communications, the channel is typically modeled as a random, linear, time-varying system that spreads the transmitted signal in both time and frequency due to multi-path propagation and Doppler effects. Estimated channel parameters allow system designers to develop coherent receivers that increase system performance. In this paper, we show how time-frequency analysis can be used to model and estimate the time-varying channel of a multi-carrier spread spectrum (MCSS) system using a complex quadratic sequence as the spreading code. We show that for this spreading code, the effects of time delays and Doppler frequency shifts, caused by the mobility of objects in the environment, can be combined and represented effectively as time shifts. The discrete evolutionary transform (DET), as a time-frequency analysis method, enables us to estimate the effective time shifts via a spreading function and to use them to equalize the channel. Using the effective time shifts, the time-varying channel can be represented simply as a linear time-invariant system by embedding the Doppler shifts that characterize the time-varying channel into effective time shifts. The channel parameters are used to estimate the transmitted data bits. To illustrate the performance of the proposed method, we perform several simulations with different levels of channel noise, jammer interference, and Doppler frequency shifts.

20.
In almost all the work carried out in the area of automatic modulation classification (AMC), the dictionary of all possible modulations that can occur is assumed to be fixed and given. In this paper, we consider the problem of discovering unknown digital amplitude-phase modulations when the dictionary is not given. A deconvolution-based framework is proposed to estimate the distribution of the transmitted symbols, which completely characterizes the underlying signal constellation. The method involves computing the empirical characteristic function (ECF) from the received signal samples and employing constrained least squares (CLS) filtering in the frequency domain to reveal the unknown symbol set. The decoding of the received signals can then be carried out based on the estimate of the signal constellation. The proposed method can be implemented efficiently using fast Fourier transform (FFT) algorithms. In addition, we show that the distribution estimate of the transmitted symbols can be refined if the signal constellation is known to satisfy certain symmetry and independence properties.
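A minimal one-dimensional sketch of the deconvolution idea on a synthetic PAM signal, using a regularised division by the Gaussian noise characteristic function and a direct inversion sum instead of the paper's constrained least-squares filter and FFT implementation; the constellation and noise level are illustrative.

```python
# Recover unknown symbol levels by deconvolving the empirical characteristic function.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(9)
symbols = np.array([-3.0, -1.0, 1.0, 3.0])          # "unknown" 4-PAM levels
sigma = 0.3
r = rng.choice(symbols, 20_000) + sigma * rng.standard_normal(20_000)

# Empirical characteristic function of the received samples on a frequency grid t.
t = np.linspace(-6.0, 6.0, 513)[:-1]
ecf = np.array([np.mean(np.exp(1j * tk * r)) for tk in t])

# Wiener-style regularised division by the Gaussian noise CF (deconvolution).
noise_cf = np.exp(-0.5 * (sigma * t) ** 2)
sym_cf = ecf * noise_cf / (noise_cf ** 2 + 1e-3)

# Invert the symbol CF to a density estimate over x and report its strongest peaks
# (a direct sum is used here for clarity; an FFT would be faster).
x = np.linspace(-5.0, 5.0, 512)
dens = np.array([np.real(np.mean(sym_cf * np.exp(-1j * t * xk))) for xk in x])
pk, _ = find_peaks(dens)
top4 = pk[np.argsort(dens[pk])[-4:]]
print("recovered symbol locations (approx):", np.sort(np.round(x[top4], 2)))
```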
