Similar Articles
20 similar articles found (search time: 15 ms)
1.
Introduction: Moving average (MA) is one possible way to use patient results for analytical quality control in medical laboratories. The aims of this study were to: (1) implement previously optimized MA procedures for 10 clinical chemistry analytes in the laboratory information system (LIS); (2) monitor their performance as a real-time quality control tool; and (3) define an algorithm for MA alarm management suited to a specific small-volume laboratory.
Materials and methods: Moving average alarms were monitored and analysed over a period of 6 months on all patient results (73,059 in total) obtained for 10 clinical chemistry parameters. The optimal MA procedures had been selected previously using an already described technique called the bias detection simulation method, with the ability to detect a bias the size of the total allowable error as the key optimization criterion.
Results: During the 6 months, 17 MA alarms were registered, i.e. 0.023% of the total number of generated MA values. In 65% of cases the cause was pre-analytical, in 12% analytical, and in 23% no cause was found. The highest alarm rate was observed for sodium (0.10%), and the lowest for calcium and chloride.
Conclusions: This study showed that even in a small-volume laboratory, previously optimized MA procedures can be successfully implemented in the LIS and used for continuous quality control. Review of patient results, re-analysis of samples from the stable period, analysis of internal quality control samples, and assessment of the analyser malfunction and maintenance logs are proposed as the algorithm for managing MA alarms.
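As a minimal illustration of the MA principle described above, the sketch below flags an alarm when the moving average of consecutive patient results leaves a fixed control range; the window size and the sodium limits are illustrative assumptions, not the optimized settings from the study.

```python
# Minimal sketch of moving-average (MA) quality control on patient results.
# Window size and control limits are illustrative assumptions.
from collections import deque

def ma_monitor(results, window=20, lower=135.0, upper=145.0):
    """Yield (index, ma, alarm) for each result once the window is full."""
    buf = deque(maxlen=window)
    for i, x in enumerate(results):
        buf.append(x)
        if len(buf) == window:                # full window: compute MA
            ma = sum(buf) / window
            yield i, ma, not (lower <= ma <= upper)

# Example: sodium results drifting upward eventually trip an alarm.
sodium = [140, 139, 141, 140, 142] * 4 + [147] * 20
for i, ma, alarm in ma_monitor(sodium):
    if alarm:
        print(f"MA alarm at result {i}: MA = {ma:.1f} mmol/L")
        break
```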

2.
Breast cancer is one of the leading causes of death among women worldwide. Accurate and early detection of breast cancer can ensure long-term survival for patients. However, traditional classification algorithms usually aim only to maximize classification accuracy, failing to take into account the misclassification costs between different categories; the cost of missing a cancer case (a false negative) is clearly much higher than that of mislabeling a benign one (a false positive). To overcome this drawback and further improve the classification accuracy of breast cancer diagnosis, this work proposes a novel intelligent breast cancer diagnosis approach that employs an information-gain-directed simulated annealing genetic algorithm wrapper (IGSAGAW) for feature selection: features are ranked according to information gain (IG), and the top m optimal features are extracted and used with a cost-sensitive support vector machine (CSSVM) learning algorithm. This feature selection approach not only helps reduce the complexity of the SAGAW search and effectively extracts an optimal feature subset, but also achieves maximum classification accuracy at minimum misclassification cost. The efficacy of the approach is tested on the Wisconsin Original Breast Cancer (WBC) and Wisconsin Diagnostic Breast Cancer (WDBC) data sets, and the results demonstrate that the proposed hybrid algorithm outperforms the comparison methods. The main objective of this study was to apply the research in a real clinical diagnostic system and thereby assist clinical physicians in making correct and effective decisions in the future; the proposed method could also be applied to the diagnosis of other illnesses.
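The two ingredients named in the abstract, information-gain ranking and a cost-sensitive SVM, can be sketched with scikit-learn on the WDBC data. The wrapper search (IGSAGAW) itself is not reproduced here, and the top-m cut-off and the class weights are illustrative assumptions.

```python
# Sketch: information-gain-style feature ranking + cost-sensitive SVM on WDBC.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import numpy as np

X, y = load_breast_cancer(return_X_y=True)          # WDBC data set
ig = mutual_info_classif(X, y, random_state=0)      # information-gain proxy
top_m = np.argsort(ig)[::-1][:10]                   # keep the 10 best features

Xtr, Xte, ytr, yte = train_test_split(X[:, top_m], y, random_state=0)
# Penalise false negatives (missed cancers) more than false positives;
# in this data set class 0 is malignant.
clf = SVC(class_weight={0: 5.0, 1: 1.0}).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```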

3.
Context: Two Biosystems analysers are used in our laboratory, a fully automated A25 and a semi-automated BTS-350. Internal quality control is done for both, but external quality control only for the A25. As the BTS-350 is used as a backup, it is important that the results of the two analysers are not just comparable but also within predefined limits of systematic, random and total error (TE).
Aim: To evaluate the imprecision, bias and TE of the two Biosystems analysers.
Materials and methods: Biosystems level-1 quality control serum, lot number 70A, was run in duplicate for 32 days on both analysers. Between-day imprecision (measured by the coefficient of variation), bias and TE were calculated for ten analytes and checked against the minimum, desirable and optimum limits of allowable error based on the specifications on Westgard's website (updated 2014).
Results: On both analysers, all analytes except alkaline phosphatase were within the minimum acceptable limits of TE, and most analytes were within the desirable limits of TE. Only triglycerides (TG) on the A25 were within the optimum limit of TE.
Conclusion: The two Biosystems analysers performed comparably, with errors within acceptable limits for most analytes. The BTS-350 was found to be a suitable, ready backup analyser for the A25.
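A minimal sketch of the quality-control arithmetic behind such a verification, using the common Westgard formulation TE = |bias| + 1.65 x CV; the replicate values and the target are made up for illustration.

```python
# Between-day imprecision (CV), bias and total error (TE) from QC replicates.
import statistics

def qc_stats(values, target, z=1.65):
    mean = statistics.mean(values)
    cv = 100 * statistics.stdev(values) / mean    # imprecision, %
    bias = 100 * (mean - target) / target         # bias, %
    te = abs(bias) + z * cv                       # Westgard: TE = |bias| + 1.65*CV
    return cv, bias, te

cv, bias, te = qc_stats([4.1, 4.3, 4.0, 4.2, 4.4, 4.1], target=4.0)
print(f"CV = {cv:.1f}%  bias = {bias:.1f}%  TE = {te:.1f}%")
```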

4.
Traditionally, recommender systems for the web deal with applications that have two dimensions, users and items. Based on access data relating these dimensions, a recommendation model can be built and used to identify a set of N items that will be of interest to a certain user. In this paper we propose a multidimensional approach, called DaVI (Dimensions as Virtual Items), that consists in inserting contextual and background information as new user-item pairs. The main advantage of this approach is that it can be applied in combination with several existing two-dimensional recommendation algorithms. To evaluate its effectiveness, we used the DaVI approach with two different top-N recommender algorithms, item-based collaborative filtering and an association-rules-based algorithm, and ran an extensive set of experiments on three different real-world data sets. In addition, we compared our approach with the previously introduced combined reduction and weight post-filtering approaches. The empirical results strongly indicate that our approach enables existing two-dimensional recommendation algorithms to be applied to multidimensional data, exploiting the useful information in these data to improve the predictive ability of top-N recommender systems.
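A small sketch of the DaVI transformation itself: each contextual dimension value becomes a virtual item appended to the user's transaction, so any two-dimensional recommender can consume the result. The field names are illustrative.

```python
# DaVI: flatten contextual dimensions into extra "virtual items".
def davi_expand(events):
    """events: iterable of (user, item, context-dict) -> (user, item) pairs."""
    pairs = []
    for user, item, ctx in events:
        pairs.append((user, item))
        for dim, value in ctx.items():                 # e.g. day=sunday
            pairs.append((user, f"{dim}={value}"))     # virtual item
    return pairs

events = [("u1", "news42", {"day": "sunday", "device": "mobile"})]
print(davi_expand(events))
# [('u1', 'news42'), ('u1', 'day=sunday'), ('u1', 'device=mobile')]
```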

5.
Comparison and Application of Parameter Optimization Algorithms for a Distributed Water Cycle Model (cited 1 time: 0 self-citations, 1 external)
孙波扬, 张永勇, 门宝辉, 张士锋. 资源科学 (Resources Science), 2013, 35(11): 2217-2223
The strength of distributed hydrological models lies in reproducing the spatio-temporal variability of hydrological processes: they can simulate and reflect the uneven spatial and temporal distribution of hydrological elements and underlying-surface factors well. This, however, also leads to a large number of model parameters, and when there are many sub-basins manual parameter tuning becomes tedious, so automatic calibration with optimization algorithms is the method of choice. Using observed runoff data from the Jiutiaoling station in the Shiyang River basin for 1988-2005, this paper applies the SCE-UA algorithm, the genetic algorithm (GA) and particle swarm optimization (PSO) to calibrate a distributed water cycle model (the time-variant gain model), comparing the three algorithms' convergence speed, required number of iterations and stability. The results show that with SCE-UA, GA and PSO optimization, the model's water balance coefficient stays around 0.0, while the correlation coefficient and efficiency coefficient reach at least 0.90 and 0.84 respectively, indicating good simulation accuracy. The particle swarm algorithm, however, surpasses SCE-UA and the genetic algorithm in global search ability and convergence speed, needs the fewest iterations and is least sensitive to initial values, making it the best suited to parameter optimization of the time-variant gain model, with high extensibility and potential for improvement.
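For readers unfamiliar with the calibration loop, the sketch below shows a plain particle swarm optimizer of the kind compared above, minimizing 1 minus the Nash-Sutcliffe efficiency of a stand-in two-parameter model; it is not the time-variant gain model, and the PSO constants are common textbook choices.

```python
# Generic PSO used to calibrate a toy 2-parameter model against observations.
import numpy as np

def pso(loss, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))     # positions
    v = np.zeros_like(x)                                # velocities
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Stand-in objective: 1 - Nash-Sutcliffe efficiency of a toy linear model.
obs = np.array([3.0, 4.5, 6.0, 5.0, 4.0])
def loss(p):
    sim = p[0] * np.arange(5) + p[1]
    return np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

best, f = pso(loss, np.array([[-2.0, 2.0], [0.0, 8.0]]))
print("best parameters:", best, " 1-NSE:", f)
```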

6.
Introduction: We investigated the interference of haemolysis on ethanol testing carried out with the Synchron assay kit using an AU680 autoanalyser (Beckman Coulter, Brea, USA).
Materials and methods: Two tubes of plasma samples were collected from 20 volunteers. Mechanical haemolysis was performed in one tube, and no other intervention was performed in the other tube. After centrifugation, haemolysed and non-haemolysed samples were diluted to obtain samples with the desired free haemoglobin (Hb) values (0, 1, 2, 5, 10 g/L). A portion of these samples was then separated, and ethanol was added to the separated sample to obtain a concentration of 86.8 mmol/L ethanol. After that, these samples were diluted with ethanol-free samples of the same Hb concentration to obtain samples containing 43.4, 21.7, and 10.9 mmol/L. Each group was divided into 20 equal parts, and an ethanol test was carried out. The coefficient of variation (CV), bias, and total error (TE) values were calculated.
Results: The TE values of haemolysis-free samples were approximately 2-5%, and the TE values of haemolysed samples were approximately 10-18%. The bias values of haemolysed samples ranged from approximately −6.2% to −15.7%.
Conclusions: Haemolysis led to negative interference in all samples. However, based on the 25% allowable total error value specified for ethanol in the Clinical Laboratory Improvement Amendments (CLIA 88) criteria, the TE values did not exceed 25%. Consequently, ethanol concentration can be measured in samples containing free Hb up to 10 g/L.

7.
宋鹏, 王国富. 大众科技 (Popular Science & Technology), 2013, (12): 71-73
Traditional inversion based on the minimum-variance principle depends on the choice of the initial model and easily becomes trapped in local minima. To address these problems, this paper applies a fully nonlinear inversion method, the particle swarm inversion algorithm, to the interpretation of nuclear magnetic resonance (NMR) groundwater sounding data. The algorithm is simple to operate, runs in parallel, and does not require the objective function being optimized to be differentiable, derivable or continuous. The basic particle swarm algorithm is combined with simulated annealing and extended with nonlinear constrained-optimization conditions, making it suitable for the inversion of NMR groundwater detection data. Test results show that the hybrid particle swarm inversion algorithm achieves high accuracy and fast convergence, verifying the feasibility of particle swarm optimization for NMR inversion.
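The simulated-annealing ingredient of the hybrid algorithm can be sketched as a Metropolis acceptance test that occasionally keeps a worse particle position, which is what helps the inversion escape local minima; the temperatures and the cooling rate here are illustrative.

```python
# Metropolis acceptance: accept a worse solution with probability
# exp(-(f_new - f_old) / T), where T cools over time.
import math, random

def metropolis_accept(f_old, f_new, temperature):
    if f_new <= f_old:
        return True
    return random.random() < math.exp((f_old - f_new) / temperature)

random.seed(1)
T = 1.0
for step in range(5):
    print(step, metropolis_accept(f_old=2.0, f_new=2.3, temperature=T))
    T *= 0.9   # cooling schedule
```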

8.
In this work a procedure for obtaining polytopic λ-contractive sets for Takagi–Sugeno fuzzy systems is presented, adapting well-known algorithms from the literature on discrete-time linear difference inclusions (LDI) to multi-dimensional summations. As a complexity parameter increases, these sets tend to the maximal invariant set of the system when no information on the shape of the membership functions is available. λ-contractive sets are naturally associated with level sets of polyhedral Lyapunov functions proving a decay rate of λ. The paper proves that the proposed algorithm obtains better results than a class of Lyapunov methods for the same complexity degree: if such a Lyapunov function exists, the proposed algorithm converges in a finite number of steps and proves a larger λ-contractive set.
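For reference, the set property being computed can be stated in one line; the display below assumes the standard discrete-time LDI setting with vertex matrices A_i, which matches the Takagi–Sugeno vertex models only up to the membership-function information the paper additionally exploits.

```latex
% A set \Omega is \lambda-contractive (0 < \lambda \le 1) for the inclusion
% x_{k+1} \in \mathrm{conv}\{A_i x_k\} if one step maps it into its scaled copy:
x \in \Omega \;\Longrightarrow\; A_i x \in \lambda\,\Omega
\quad \text{for every vertex } A_i .
```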

9.
Introduction: The accurate estimation of low-density lipoprotein cholesterol (LDL) is crucial for the management of patients at risk of cardiovascular events due to dyslipidaemia. LDL is typically calculated using the Friedewald equation and/or measured with direct homogeneous assays. However, both methods have their own limitations, so other equations have been proposed, including a new equation developed by Sampson. The aim of this study was to evaluate the Sampson equation by comparing it with the Friedewald and Martin-Hopkins equations, and with a direct LDL method.
Materials and methods: Results of the standard lipid profile (total cholesterol (CHOL), high-density lipoprotein cholesterol (HDL) and triglycerides (TG)) were obtained from two anonymized data sets collected at two laboratories using assays from different manufacturers (Beckman Coulter and Roche Diagnostics). The second data set also included LDL results from a direct assay (Roche Diagnostics). Passing-Bablok and Bland-Altman analyses were performed for method comparison.
Results: A total of 64,345 and 37,783 results for CHOL, HDL and TG were used, including 3116 results from the direct LDL assay. The Sampson and Friedewald equations provided similar LDL results (difference ≤ 0.06 mmol/L on average) at TG ≤ 2.0 mmol/L. At TG between 2.0 and 4.5 mmol/L, Sampson-calculated LDL showed a constant bias (−0.18 mmol/L) when compared with the Martin-Hopkins equation. Similarly, at TG between 4.5 and 9.0 mmol/L, the Sampson equation showed a negative bias when compared with the direct assay, proportional (−16%) to the LDL concentration.
Conclusions: The Sampson equation may represent a cost-efficient alternative for calculating LDL in clinical laboratories.
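For reference, the two calculations being compared can be written out directly. The formulas below are, to the best of my knowledge, the published Friedewald and Sampson (NIH equation 2) expressions in mg/dL, so mmol/L inputs must be converted first (cholesterol x 38.67, triglycerides x 88.57).

```python
# LDL cholesterol from the standard lipid panel, all values in mg/dL.
def ldl_friedewald(tc, hdl, tg):
    return tc - hdl - tg / 5.0

def ldl_sampson(tc, hdl, tg):
    non_hdl = tc - hdl
    return (tc / 0.948 - hdl / 0.971
            - (tg / 8.56 + tg * non_hdl / 2140.0 - tg ** 2 / 16100.0)
            - 9.44)

print(ldl_friedewald(200, 50, 150))            # 120.0
print(round(ldl_sampson(200, 50, 150), 1))     # ~123.4
```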

10.
Introduction: The current study aimed to assess the interference of in vitro haemolysis on the complete blood count (CBC) using the Abbott Alinity hq system, and to determine which haemolysis levels affect the reliability of sample results.
Materials and methods: Blood samples obtained from 25 volunteers in K3-EDTA tubes were divided into four aliquots. The first aliquot was not subjected to any intervention; the second, third and fourth aliquots were passed through a fine needle 2, 4 and 6 times, respectively. The CBC was performed by multi-angle polarized scatter separation technology, and the haemolysis index (HI) was assessed from the plasma samples separated by centrifugation. Five groups were formed according to the HI values. The percentage biases between the results of the non-haemolysed and haemolysed groups were compared with the desirable bias limits from the European Federation of Clinical Chemistry and Laboratory Medicine database and with reference change values (RCVs).
Results: In groups 1 to 4, the effects of haemolysis on CBC parameters were acceptable relative to the allowable analytical bias, except for lymphocytes (7.26%-7.42%), MCH (2.59%) and MCHC (0.47%-2.81%). In group 5 (gross haemolysis), the decreases in HCT (−4.56%) and RBC count (−4.07%) and the increase in lymphocyte count (11.60%) exceeded the analytical performance specifications; moreover, the variations in MCH (4.65%) and MCHC (5.24%) exceeded the RCVs.
Conclusions: Gross haemolysis (haemoglobin concentration > 10 g/L) is likely to produce unreliable CBC results in non-pathological samples. Further studies including pathological specimens are needed.
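The reference change value used as the second acceptance criterion follows the standard formula; a bidirectional 95% significance level (Z = 1.96) is the usual choice, with CV_A the analytical and CV_I the within-subject biological variation.

```latex
RCV = \sqrt{2}\, Z \sqrt{CV_A^{2} + CV_I^{2}},
\qquad Z = 1.96 \ \text{(95\%, two-sided)}
```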

11.
The Laplace transform technique is an efficient way of solving integer-order differential equations. For differential equations of non-integer order, however, it works effectively only for relatively simple equations, because of the difficulty of computing the inversion of Laplace transforms. Motivated by finding an easy way to solve complicated fractional-order differential equations numerically, we investigate the validity of applying numerical inverse Laplace transform algorithms in fractional calculus. Three numerical inverse Laplace transform algorithms, named Invlap, Gavsteh and NILT, were tested using Laplace transforms of fractional-order equations. Based on the comparison between analytical results and the numerical inverse Laplace transform results, the effectiveness and reliability of numerical inverse Laplace transform algorithms for fractional-order differential equations is confirmed.
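Of the three routines, the Gaver-Stehfest family (to which "Gavsteh" belongs) is the easiest to reproduce: f(t) is recovered from real-axis samples of F(s) alone. The sketch below is a generic implementation of the Stehfest formula, not the specific code tested in the paper.

```python
# Gaver-Stehfest numerical inverse Laplace transform.
# N must be even; N = 12 is a common choice in double precision.
from math import factorial, log

def stehfest_coefficients(N):
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    V = stehfest_coefficients(N)
    return log(2.0) / t * sum(V[k - 1] * F(k * log(2.0) / t)
                              for k in range(1, N + 1))

# Check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0))  # ~0.3679
```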

12.
Background: Mannheimia haemolytica is the primary bacterial pathogen causing bovine respiratory disease, responsible for tremendous annual losses in the cattle industry. The leukotoxin from M. haemolytica is its predominant virulence factor. Several leukotoxin activity assays are available, but they are not standardized with regard to sample preparation and cell line. Furthermore, these assays suffer from high standard errors, long run times and often complex sample pretreatment, which matters from the bioprocess engineering point of view.
Results: Within this study, an activity assay based on the continuous cell line BL3.1 combined with a commercially available adenosine triphosphate viability assay kit was established. Leukotoxin activity was found to depend strongly on the sample preparation. Furthermore, the interfering effect of lipopolysaccharides in the sample was successfully suppressed by adding polymyxin B. We reached a maximum relative P95 value of 14%, more than seven times lower than currently available assays, together with a time reduction of up to 88%.
Conclusion: Ultimately, the established leukotoxin activity assay is simple, fast and highly reproducible. Critical parameters of the sample preparation were characterized and optimized, making complex sample purification superfluous.

13.
This exploratory study investigates the encounters and everyday experiences with the Facebook algorithm of 18 informants in Yangon, Myanmar. It draws on domestication theory and research on algorithms to understand how users come to use and respond to Facebook. The findings showed that the informants' particular perception of the Facebook algorithm (that Friends funnel information) informs their domestication process, wherein they add strangers as Friends to draw more information flows to their News Feeds.

14.
Introduction: The aims of this study were to assess: (1) the performance specifications of the Atellica 1500; (2) the comparability of the Atellica 1500 and the Iris; and (3) the accuracy of both analysers in detecting bacteria.
Materials and methods: Carryover, linearity, precision, reproducibility and limit of blank (LoB) verification were evaluated for erythrocyte and leukocyte counts. The ICSH 2014 protocol was used for the estimation of carryover, CLSI EP15-A3 for precision, and CLSI EP17 for LoB verification. Quantitative parameters were compared by Bland-Altman plot and Passing-Bablok regression; qualitative parameters were evaluated by weighted kappa analysis. Sixty-five urine samples were randomly selected and sent for urine culture, which served as the reference method for determining the accuracy of bacteria detection by the analysers.
Results: The analytical specifications of the Atellica 1500 were successfully verified. A total of 393 samples were used for the qualitative comparison, and 269 for sediment urinalysis. Bland-Altman analysis showed statistically significant proportional bias for erythrocytes and leukocytes. Passing-Bablok analysis for leukocytes pointed to a significant constant and a minor proportional difference, while it could not be performed for erythrocytes due to significant deviation of the data from linearity. Kappa analysis showed the strongest agreement for pH, ketones, glucose and leukocytes, and the poorest agreement for bacteria. The sensitivity and specificity of bacteria detection were 91 (59-100)% and 76 (66-87)% for the Atellica 1500, and 46 (17-77)% and 96 (87-100)% for the Iris.
Conclusion: There are large differences between the Atellica 1500 and Iris analysers, so they are not comparable and cannot be used interchangeably. While there was no significant difference in the sensitivity of bacteria detection (the confidence intervals overlap), the Iris analyser had the greater specificity.
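The accuracy figures quoted above follow from the usual confusion-matrix definitions; the counts below are an illustrative reconstruction consistent with the Atellica 1500 percentages over a 65-sample set, not the study's actual tabulation.

```python
# Sensitivity and specificity against urine culture as reference method.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positives among culture-positive
    specificity = tn / (tn + fp)   # true negatives among culture-negative
    return sensitivity, specificity

# Illustrative counts: 11 culture-positive + 54 culture-negative = 65 samples.
se, sp = sens_spec(tp=10, fn=1, tn=41, fp=13)
print(f"sensitivity = {se:.0%}, specificity = {sp:.0%}")   # 91%, 76%
```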

15.
A Reference Crop Evapotranspiration Prediction Model Based on a Genetic-Algorithm-Optimized Neural Network (cited 3 times: 0 self-citations, 3 external)
冯禹, 王守光, 崔宁博, 赵璐. 资源科学 (Resources Science), 2014, 36(12): 2624-2630
To simulate reference crop evapotranspiration (ET0) accurately when meteorological data are scarce, daily meteorological records from three stations in the hilly area of central Sichuan for 1999-2013 were used as inputs, with ET0 computed by the FAO-56 Penman-Monteith model as the reference value, to build an ET0 simulation model based on a genetic-algorithm-optimized neural network (GA-BPNN). Its results were compared with those of four commonly used ET0 models: Hargreaves, McCloud, Priestley-Taylor and Makkink. The results show that the GA-BPNN model captures the nonlinear relationship between ET0 and meteorological factors well and simulates with high accuracy. When only temperature data are available, GA-BPNN is more accurate than the Hargreaves and McCloud models; with temperature and radiation data, it is clearly more accurate than the Priestley-Taylor and Makkink models. GA-BPNN can therefore be recommended for ET0 simulation in the central Sichuan hilly region when meteorological data are lacking.
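A toy sketch of the GA-BPNN idea: a genetic algorithm searches the weight vector of a small one-hidden-layer network. In the paper the GA result would seed subsequent back-propagation training, which is omitted here; the data, network size and GA settings are all illustrative.

```python
# GA evolving the weights of a tiny one-hidden-layer regression network.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))                  # stand-in meteorological inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]               # stand-in ET0 target

H = 6                                             # hidden units
n_w = 2 * H + H + H + 1                           # W1 (2xH), b1, W2 (H), b2

def predict(w, X):
    W1 = w[:2 * H].reshape(2, H); b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H];          b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)     # higher is better

pop = rng.normal(0, 1, (60, n_w))
for gen in range(150):
    f = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(f)[-30:]]            # truncation selection
    kids = []
    for _ in range(30):
        a, b = parents[rng.integers(30)], parents[rng.integers(30)]
        mask = rng.random(n_w) < 0.5              # uniform crossover
        kids.append(np.where(mask, a, b) + rng.normal(0, 0.1, n_w))  # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(w) for w in pop])]
print("MSE of GA-evolved network:", -fitness(best))
```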

16.
Most existing large-scale, high-dimensional streaming anomaly detection methods suffer from extremely high time and space complexity. Moreover, these models are very sensitive to their parameters, which makes their generalization ability low and restricts them to a few specific application scenarios. This paper proposes a three-layer high-dimensional streaming anomaly detection model called the double locality-sensitive hashing Bloom filter (dLSHBF). We first build the former two layers, a double locality-sensitive hashing (dLSH), proving that the dLSH method reduces the hash coding length of the data while ensuring that the projected data retain a favorable distance-preserving property after projection. Second, we use a Bloom filter as the third layer of the dLSHBF model, which improves the efficiency of anomaly detection. Six large-scale, high-dimensional data stream datasets from different IIoT anomaly detection domains were selected for comparison experiments. Extensive experiments show, first, that the distance-preserving performance of the proposed dLSH algorithm is significantly better than that of existing LSH algorithms, and second, that the dLSHBF model is more efficient than other existing advanced Bloom filter models (for example the Robust Bloom Filter, Fly Bloom Filter, Sandwich Learned Bloom Filter and Adaptive Learned Bloom Filter). Compared with the state of the art, dLSHBF achieves an anomaly detection rate (DR) above 97% and a false alarm rate (FAR) below 2.2%. Its effectiveness and generalization ability outperform other existing streaming anomaly detection methods.
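The two building blocks of dLSHBF can be sketched as a random-projection LSH signature feeding a Bloom filter: a query whose signature was never inserted during the training stream is flagged as a candidate anomaly. A single (rather than double) LSH stage and all sizes and hash counts are simplifications.

```python
# Random-projection LSH signatures stored in a Bloom filter for anomaly checks.
import hashlib
import numpy as np

rng = np.random.default_rng(0)
planes = rng.normal(size=(16, 64))                # 16 random hyperplanes, d = 64

def lsh_signature(x):
    return "".join("1" if v > 0 else "0" for v in planes @ x)

class BloomFilter:
    def __init__(self, m=4096, k=4):
        self.bits = np.zeros(m, dtype=bool); self.m, self.k = m, k
    def _idx(self, s):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{s}".encode()).hexdigest()
            yield int(h, 16) % self.m
    def add(self, s):
        for i in self._idx(s): self.bits[i] = True
    def __contains__(self, s):
        return all(self.bits[i] for i in self._idx(s))

bf = BloomFilter()
for row in rng.normal(size=(1000, 64)):           # "normal" training stream
    bf.add(lsh_signature(row))

query = rng.normal(size=64) + 8.0                 # shifted, likely anomalous
print("anomaly candidate:", lsh_signature(query) not in bf)
```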

17.
Information filtering (IF) systems usually filter data items by correlating a set of terms representing the user's interest (a user profile) with similar sets of terms representing the data items. Many techniques can be employed to construct user profiles automatically, but they usually yield large sets of terms, so various dimensionality-reduction techniques can be applied to reduce the number of terms in a user profile. We describe a new term selection technique with a dimensionality-reduction mechanism based on the analysis of a trained artificial neural network (ANN) model. Its novel feature is the identification of an optimal set of terms that can correctly classify data items that are relevant to a user. The proposed technique was compared with the classical Rocchio algorithm. We found that when all the distinct terms in the training set are used to train an ANN, the Rocchio algorithm outperforms the ANN-based filtering system; but after applying the new dimensionality-reduction technique, leaving only an optimal set of terms, the improved ANN technique outperformed both the original ANN and the Rocchio algorithm.
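For comparison, the Rocchio baseline mentioned above fits in a few lines: the profile is a weighted combination of the initial query vector and the centroids of the relevant and non-relevant documents, here with the common default weights.

```python
# Rocchio relevance feedback: q = a*q0 + b*mean(relevant) - g*mean(nonrelevant).
import numpy as np

def rocchio(q0, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    return (alpha * q0
            + beta * np.mean(relevant, axis=0)
            - gamma * np.mean(nonrelevant, axis=0))

q0 = np.array([1.0, 0.0, 0.0])
rel = np.array([[0.9, 0.4, 0.0], [0.8, 0.5, 0.1]])
non = np.array([[0.0, 0.1, 0.9]])
print(rocchio(q0, rel, non))
```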

18.
Decision tree classification is an important topic in data mining, and ID3 is an important and widely used decision tree algorithm. In practice, however, existing decision tree algorithms also have many shortcomings, such as low computational efficiency and a bias toward multi-valued attributes. To solve these problems, a weighted simplified information entropy algorithm based on ID3 is proposed. It speeds up the construction of the decision tree and reduces the algorithm's running time, while also overcoming ID3's tendency to select attributes with many values as test attributes. Moreover, the classification performance of the decision tree improves as the data size grows.
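For reference, standard ID3 attribute scoring (entropy and information gain) looks as follows; the paper's weighted simplified entropy modifies this scoring, and that weighting is not reproduced here.

```python
# ID3 scoring: entropy of a label set and information gain of an attribute.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    n = len(labels)
    split = {}
    for row, lab in zip(rows, labels):
        split.setdefault(row[attr], []).append(lab)
    remainder = sum(len(part) / n * entropy(part) for part in split.values())
    return entropy(labels) - remainder

rows = [{"outlook": "sunny"}, {"outlook": "sunny"},
        {"outlook": "rain"}, {"outlook": "rain"}]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, "outlook"))   # 1.0: a perfect split
```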

19.
Background: The molecular mechanisms of plant-pathogen interactions have been studied thoroughly, but much about them is still unknown. A better understanding of these mechanisms and the detection of new resistance genes can improve crop production and food supply. Extracting this knowledge from the available genomic data is a challenging task.
Results: Here, we evaluate the usefulness of clustering, data mining and regression for identifying potential new resistance genes. Three types of analyses were conducted separately for two conditions: tomatoes inoculated with Phytophthora infestans and non-inoculated tomatoes. Predictions of 10 new resistance genes obtained by all the applied methods were selected as the most reliable and are therefore reported as potential resistance genes.
Conclusion: Applying different statistical analyses to the reliable detection of potential resistance genes has been shown to produce results that improve knowledge of the molecular mechanisms of plant resistance to pathogens.

20.
This paper presents a method for correcting measurement errors in software as part of measurement-and-control software design, analysing the sources of error and building a mathematical model. Experiments show that approximating the calibration curve with a least squares algorithm works best; correcting measurement errors in software makes the results more accurate and improves the measurement precision of the instrument. The host-computer software of the project was developed against the background of an integrated railway balise test system, using LabWindows/CVI as the development platform; LabWindows/CVI is an ANSI C development environment from NI (National Instruments).
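A minimal sketch of the software correction step: fit a least-squares calibration line mapping raw instrument readings to reference values, then apply it to new measurements. The data points are illustrative.

```python
# Least-squares calibration curve for software error correction.
import numpy as np

raw = np.array([0.10, 0.52, 1.01, 1.49, 2.03])    # instrument readings
ref = np.array([0.00, 0.50, 1.00, 1.50, 2.00])    # reference standard values

coeffs = np.polyfit(raw, ref, deg=1)               # least-squares line fit
correct = np.poly1d(coeffs)
print("corrected reading:", correct(1.25))         # apply to a new measurement
```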
