Similar Documents
18 similar documents retrieved (search time: 328 ms)
1.
[Objective] To address the problem in microblog sentiment classification that unlabelled samples are plentiful while the labelled set is small, a new method is proposed. [Methods] Active learning is introduced on top of the co-training algorithm: the most valuable, information-rich samples are selected from among the low-confidence samples and submitted for labelling; once labelled, they are added to the training set and the classifiers are retrained for sentiment classification. [Results] Experiments on different data sets show that the classifier built by this method outperforms other methods, with a clear gain in classification accuracy; in particular, when labelled samples account for 40% of the data, accuracy improves by about 5%. [Limitations] The random feature-subspace generation used during co-training cannot guarantee that the two classifiers built in each round are both strong classifiers, so the assumptions of co-training are not fully satisfied. [Conclusions] Introducing active learning remedies co-training's weakness in handling low-confidence samples, thereby strengthening the classifiers and raising classification accuracy.
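A minimal sketch of the loop described above, assuming two Naive Bayes classifiers over random feature subspaces, dense feature arrays, a 0.9 confidence threshold, a per-round query budget, and an `oracle_label` callback standing in for the human annotator; all of these are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def cotrain_active(X_lab, y_lab, X_unlab, oracle_label, rounds=10,
                   high_conf=0.9, queries_per_round=5, seed=0):
    """Co-training over two random feature subspaces, with an active-learning
    step that sends the least confident unlabelled samples to an oracle."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X_lab.shape[1])
    views = [perm[:len(perm) // 2], perm[len(perm) // 2:]]  # two feature views
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        clfs = [MultinomialNB().fit(X_lab[:, v], y_lab) for v in views]
        proba = np.mean([c.predict_proba(X_unlab[:, v])
                         for c, v in zip(clfs, views)], axis=0)
        conf = proba.max(axis=1)
        sure = np.flatnonzero(conf >= high_conf)               # self-label these
        unsure = [i for i in np.argsort(conf) if conf[i] < high_conf]
        query = np.array(unsure[:queries_per_round], dtype=int)  # ask the oracle
        y_new = list(clfs[0].classes_[proba[sure].argmax(axis=1)]) + \
                [oracle_label(x) for x in X_unlab[query]]
        take = np.concatenate([sure, query])
        X_lab = np.vstack([X_lab, X_unlab[take]])
        y_lab = np.concatenate([y_lab, y_new])
        X_unlab = np.delete(X_unlab, take, axis=0)
    return [MultinomialNB().fit(X_lab[:, v], y_lab) for v in views], views
```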

2.
乔建忠 《图书情报工作》2013,57(14):114-120
To overcome the limited generalization of a single classification algorithm in focused crawling when faced with multi-topic web crawling and classification demands, this paper designs a strategy in which several strong classification algorithms form a classifier pool: for the current topic task, the focused crawler evaluates and ranks the classifiers online and selects the best one to classify with. Classification experiments under multiple topic-crawling tasks compare each algorithm's accuracy, the average classification accuracy after combination, and other evaluation indicators such as classification efficiency. The results show that the strategy mitigates domain locality and generalizes well.
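One way to realize the evaluate-and-rank step, sketched with scikit-learn and cross-validated accuracy as the ranking criterion; the candidate pool and ranking metric are assumptions, as the abstract does not name the paper's actual choices.

```python
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

def best_classifier(X_topic, y_topic, candidates=None, cv=3):
    """Rank a pool of classifiers on the current topic task, return the winner."""
    candidates = candidates or [MultinomialNB(),
                                LogisticRegression(max_iter=1000),
                                LinearSVC()]
    scored = [(cross_val_score(c, X_topic, y_topic, cv=cv).mean(), c)
              for c in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)  # online ranking by accuracy
    best_score, best = scored[0]
    return best.fit(X_topic, y_topic), best_score
```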

3.
This article uses the LDA model for text dimensionality reduction and feature extraction, and trains traditional classification algorithms within an ensemble-learning framework, to explore whether the accuracy of a single classification algorithm can be improved and better classification achieved, letting the LDA model deliver higher performance in the service of more accurate text classification. Taking Web of Science as the data source and following its subject category scheme, an experimental corpus covering six topics is built. With Weka as the experimental tool and average F-score as the evaluation indicator, the classification results of four traditional algorithms (Naive Bayes, logistic regression, support vector machine, and k-nearest neighbours) and three ensemble-learning algorithms (AdaBoost, Bagging, and Random Subspace) are compared. Overall, the accuracy of text classification after "homogeneous ensembling" is higher than that of the individual classifiers; using LDA for dimensionality reduction and feature extraction, with Naive Bayes as the base classifier and Bagging for ensemble training, gives the best classification result, achieving the "global optimum".
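The paper's best pipeline (built in Weka) translated into a scikit-learn sketch; the vocabulary size, topic count, and the Gaussian Naive Bayes base learner (chosen here because LDA topic proportions are continuous) are assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import BaggingClassifier
from sklearn.naive_bayes import GaussianNB

# Term counts -> LDA topic proportions -> Bagging over Naive Bayes base learners.
model = make_pipeline(
    CountVectorizer(max_features=5000),
    LatentDirichletAllocation(n_components=50, random_state=0),  # topic count assumed
    BaggingClassifier(GaussianNB(), n_estimators=10, random_state=0),
)
# model.fit(train_texts, train_labels); model.predict(test_texts)
```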

4.
[Purpose/Significance] To achieve automatic recognition of academic query intent and improve the efficiency of academic search engines. [Method/Process] Combining existing query-intent features with the characteristics of academic search, features of query expressions are constructed on four levels: basic information, specific keywords, entities, and occurrence frequency. Four classification algorithms (Naive Bayes, logistic regression, SVM, and Random Forest) are applied in a pilot experiment on automatic query-intent recognition, computing the precision, recall, and F-score of each. A method is then proposed that extends the recognition results predicted by logistic regression to a large-scale data set and extracts "keyword-class" features to build a two-layer deep-learning classifier for academic query-intent recognition. [Result/Conclusion] The two-layer classifier achieves a macro-averaged F1 of 0.651, better than the other algorithms, and effectively balances precision and recall across the different academic query-intent categories. It performs best on the academic-exploration category, with an F1 of 0.783.
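One plausible reading of the expansion step (spreading logistic-regression predictions over a large unlabelled query set before training the second layer), as a heavily hedged sketch: the confidence threshold and the pseudo-labelling strategy are assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def expand_with_pseudo_labels(X_lab, y_lab, X_large, threshold=0.8):
    """Stage 1: predict intent labels for a large unlabelled query set with LR,
    keeping only high-confidence pseudo-labels to enlarge the training data."""
    lr = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    proba = lr.predict_proba(X_large)
    keep = proba.max(axis=1) >= threshold
    X_aug = np.vstack([X_lab, X_large[keep]])
    y_aug = np.concatenate([y_lab, lr.classes_[proba[keep].argmax(axis=1)]])
    return X_aug, y_aug  # input to the second-layer classifier
```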

5.
Text classification is the foundation and core of text mining, and building an accurate, stable text classifier is its key problem; many researchers have proposed different classifier models and algorithms. Existing classifier evaluation methods care only about classification accuracy and do not touch on stability, an equally important criterion. This paper proposes using the ratio of open-test accuracy to closed-test accuracy as the measure of a text classifier's stability. Validation on published data, and experiments on MBNC, the Bayesian classifier experimental platform built by the authors, show that evaluating text classifiers by this criterion is reasonable.
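The proposed measure transcribed directly: "closed test" here means evaluating on the training data and "open test" on held-out data, following the usual usage of these terms.

```python
from sklearn.metrics import accuracy_score

def stability(clf, X_train, y_train, X_test, y_test):
    """Stability = open-test accuracy / closed-test accuracy.
    Values near 1 mean the classifier generalizes as well as it memorizes."""
    clf.fit(X_train, y_train)
    closed = accuracy_score(y_train, clf.predict(X_train))  # closed test
    open_ = accuracy_score(y_test, clf.predict(X_test))     # open test
    return open_ / closed
```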

6.
KNN is an algorithm widely applied in text classification. As an instance-based algorithm, the number and spatial distribution of training samples affect a KNN classifier's performance, and sensible sample pruning and sample weighting can make the classifier more efficient. This paper proposes an improved KNN model based on the distribution of samples: the training set is first pruned according to sample position to save computation, and the weighting scheme is then optimized against class skew, reducing the dominance of large, dense classes when the k nearest neighbours are selected. Experimental results show that the improved KNN text classification algorithm raises KNN's classification efficiency.
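A sketch of the skew-corrected vote: inverse class frequency is a common correction standing in for the paper's weighting scheme, and the position-based pruning step is omitted for brevity.

```python
import numpy as np
from collections import Counter
from sklearn.neighbors import NearestNeighbors

def knn_predict(X_train, y_train, X_query, k=5):
    """KNN vote where each neighbour is down-weighted by its class frequency,
    so large, dense classes do not dominate the k-neighbourhood."""
    class_freq = Counter(y_train)
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(X_query)
    preds = []
    for row in idx:
        votes = Counter()
        for j in row:
            votes[y_train[j]] += 1.0 / class_freq[y_train[j]]  # skew correction
        preds.append(votes.most_common(1)[0][0])
    return np.array(preds)
```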

7.
Evaluation Methods for Text Classifier Accuracy (total citations: 10; self-citations: 3; citations by others: 10)
程泽凯  林士敏 《情报学报》2004,23(5):631-636
With the rapid development of computer networks and information technology, the gap between abundant information and relatively scarce knowledge is widening, and text mining has become a focus of current research. Text classification is the foundation and core of text mining, and building an accurate text classifier is its key problem. Many text classification algorithms now exist and have performed well in different domains; how to evaluate classifier performance more objectively is one direction worth studying. Drawing on the authors' own work, this paper lists the classification-accuracy tests and evaluation methods in common use, briefly compares and analyses them, and closes with some ideas for improving accuracy evaluation.

8.
Research on Result Merging Methods in Distributed Retrieval (total citations: 2; self-citations: 0; citations by others: 2)
Result merging is an important step in distributed information retrieval, and the choice of merging method directly affects the quality of the retrieval results. This paper first discusses the principles of two result-merging algorithms: the classic CORI algorithm and the more recently proposed Hybrid algorithm, which combines regression analysis with selective downloading. It then compares the two algorithms' performance in depth through experiments, using average precision of the retrieval results as the evaluation indicator and judging each algorithm by the average precision it produces. The results show that the Hybrid algorithm outperforms CORI under every experimental setting; choosing Hybrid for result merging yields satisfactory results, making it well suited as the result-merging algorithm in distributed retrieval.
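For reference, the CORI side of the comparison as it is usually stated in the federated-search literature (Callan's normalisation heuristic); this is written from memory, so treat the formula as an assumption to verify against the paper.

```python
def cori_merge(results, coll_scores):
    """Merge per-collection result lists with the CORI heuristic:
    D'' = (D' + 0.4 * D' * C') / 1.4, where D' and C' are min-max
    normalised document and collection scores."""
    c_min, c_max = min(coll_scores.values()), max(coll_scores.values())
    merged = []
    for coll, docs in results.items():            # docs: list of (doc_id, score)
        c_norm = (coll_scores[coll] - c_min) / ((c_max - c_min) or 1.0)
        scores = [s for _, s in docs]
        d_min, d_max = min(scores), max(scores)
        for doc_id, s in docs:
            d_norm = (s - d_min) / ((d_max - d_min) or 1.0)
            merged.append((doc_id, (d_norm + 0.4 * d_norm * c_norm) / 1.4))
    return sorted(merged, key=lambda t: t[1], reverse=True)
```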

9.
Automatic image classification is the process of assigning images awaiting classification to predefined image classes with an automatic image classifier. Several methods exist for this task; among them, the k-nearest-neighbour algorithm, an instance-based learning method, makes a fairly good automatic classifier. Building on it, this paper proposes an automatic image classification model in which the whole process comprises four steps: image preprocessing, feature representation, machine learning, and image classification. (1 table; 1 figure; 13 references)

10.
A Bayesian Classification Model Based on Attribute Correlation Analysis (total citations: 1; self-citations: 0; citations by others: 1)
The naive Bayes classifier is a simple and effective probabilistic classification method, but its attribute-independence assumption rarely holds in the real world. To improve its classification performance, much recent research has been devoted to building models that capture the dependencies among attributes. This paper proposes a vector-correlation measure: the probability that a feature vector belongs to a class is computed from the vector correlation together with the attribute probabilities, and the vector correlation can be estimated with a formula given in the paper. Experimental results show that the classification model built this way clearly outperforms naive Bayes and also improves somewhat on other algorithms of the same kind.

11.
Automatic document classification can be used to organize documents in a digital library, construct on-line directories, improve the precision of web searching, or help the interactions between user and search engines. In this paper we explore how linkage information inherent to different document collections can be used to enhance the effectiveness of classification algorithms. We have experimented with three link-based bibliometric measures, co-citation, bibliographic coupling and Amsler, on three different document collections: a digital library of computer science papers, a web directory and an on-line encyclopedia. Results show that both hyperlink and citation information can be used to learn reliable and effective classifiers based on a kNN classifier. In one of the test collections used, we obtained improvements of up to 69.8% of macro-averaged F1 over the traditional text-based kNN classifier, considered as the baseline measure in our experiments. We also present alternative ways of combining bibliometric based classifiers with text based classifiers. Finally, we conducted studies to analyze the situation in which the bibliometric-based classifiers failed and show that in such cases it is hard to reach consensus regarding the correct classes, even for human judges.
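The three link-based measures, sketched over citation sets: `cited_by[d]` is the set of documents citing d and `refs[d]` the set d cites. These similarities replace text distance in the kNN classifier; the Jaccard normalisation of the Amsler measure is an assumption, as the paper's exact normalisation is not given in the abstract.

```python
def cocitation(cited_by, u, v):
    """Co-citation: number of documents that cite both u and v."""
    return len(cited_by[u] & cited_by[v])

def coupling(refs, u, v):
    """Bibliographic coupling: number of references u and v share."""
    return len(refs[u] & refs[v])

def amsler(cited_by, refs, u, v):
    """Amsler: overlap of the combined citation neighbourhoods,
    Jaccard-normalised here as an assumption."""
    nu, nv = cited_by[u] | refs[u], cited_by[v] | refs[v]
    return len(nu & nv) / len(nu | nv) if nu | nv else 0.0
```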

12.
王效岳  白如江 《情报学报》2006,25(4):475-480
Combining rough set attribute reduction with the classification mechanism of neural networks, a hybrid algorithm is proposed: rough set attribute reduction is first applied as a preprocessor to delete redundant attributes from the decision table, and a neural network then performs the classification. This greatly reduces the vector dimensionality and overcomes rough sets' sensitivity to noise in the decision table. Experimental results show that, compared with the traditional Naive Bayes, SVM, and KNN classification methods, this method clearly improves classification speed while maintaining classification accuracy, and exhibits good stability and fault tolerance; it is especially suitable for texts with many features that are hard to classify.
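A sketch of the reduce-then-classify design; scikit-learn has no rough set reduction, so mutual-information feature selection stands in for it here, and the feature count and network size are assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier

# Attribute reduction as a preprocessor, then a neural network classifier.
model = make_pipeline(
    SelectKBest(mutual_info_classif, k=200),  # stand-in for the rough set reduct
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
# model.fit(X_train, y_train); model.predict(X_test)
```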

13.
Combining rough set attribute reduction with the classification mechanism of RBF neural networks, a new hybrid text classification algorithm is proposed. Experimental results show that, compared with the traditional Naive Bayes, SVM, and kNN classification methods, it clearly improves classification speed while maintaining classification accuracy, and exhibits good stability and fault tolerance; it is especially suitable for texts with many features that are hard to classify.

14.
In this paper, we quantify the existence of concept drift in patent data, and examine its impact on classification accuracy. When developing algorithms for classifying incoming patent applications with respect to their category in the International Patent Classification (IPC) hierarchy, a temporal mismatch between training data and incoming documents may deteriorate classification results. We measure the effect of this temporal mismatch and aim to tackle it by optimal selection of training data. To illustrate the various aspects of concept drift on IPC class level, we first perform quantitative analyses on a subset of English abstracts extracted from patent documents in the CLEF-IP 2011 patent corpus. In a series of classification experiments, we then show the impact of temporal variation on the classification accuracy of incoming applications. We further examine which training data selection method, combined with our classification approach, yields the best classifier, and how combining different text representations may improve patent classification. We found that using the most recent data is a better strategy than static sampling but that extending a set of recent training data with older documents does not harm classification performance. In addition, we confirm previous findings that using 2-skip-2-grams on top of the bag of unigrams structurally improves patent classification. Our work is an important contribution to the research into concept drift for text classification, and to the practice of classifying incoming patent applications.
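A minimal extraction sketch for the 2-skip-2-gram features the authors layer on top of bag-of-unigrams: word pairs with at most two tokens skipped between them (skip distance 0 reproduces ordinary bigrams).

```python
def skip_bigrams(tokens, max_skip=2):
    """2-skip-2-grams: ordered word pairs with at most max_skip tokens
    between the two words."""
    pairs = []
    for i, left in enumerate(tokens):
        for j in range(i + 1, min(i + 2 + max_skip, len(tokens))):
            pairs.append((left, tokens[j]))
    return pairs

# skip_bigrams("a method for patent classification".split())
# -> [('a','method'), ('a','for'), ('a','patent'), ('method','for'), ...]
```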

15.
In the field of scientometrics, impact indicators and ranking algorithms are frequently evaluated using unlabelled test data comprising relevant entities (e.g., papers, authors, or institutions) that are considered important. The rationale is that the higher some algorithm ranks these entities, the better its performance. To compute a performance score for an algorithm, an evaluation measure is required to translate the rank distribution of the relevant entities into a single-value performance score. Until recently, it was simply assumed that taking the average rank (of the relevant entities) is an appropriate evaluation measure when comparing ranking algorithms or fine-tuning algorithm parameters. With this paper we propose a framework for evaluating the evaluation measures themselves. Using this framework the following questions can now be answered: (1) which evaluation measure should be chosen for an experiment, and (2) given an evaluation measure and corresponding performance scores for the algorithms under investigation, how significant are the observed performance differences? Using two publication databases and four test data sets we demonstrate the functionality of the framework and analyse the stability and discriminative power of the most common information retrieval evaluation measures. We find that there is no clear winner and that the performance of the evaluation measures is highly dependent on the underlying data. Our results show that the average rank is indeed an adequate and stable measure. However, we also show that relatively large performance differences are required to confidently determine if one ranking algorithm is significantly superior to another. Lastly, we list alternative measures that also yield stable results and highlight measures that should not be used in this context.
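The baseline measure the paper examines, transcribed directly: the average (1-based) rank of the relevant entities in an algorithm's ranking, where lower is better.

```python
def average_rank(ranking, relevant):
    """Average rank of the relevant entities in a ranking (1-based)."""
    pos = {entity: i + 1 for i, entity in enumerate(ranking)}
    ranks = [pos[e] for e in relevant if e in pos]
    return sum(ranks) / len(ranks) if ranks else float("inf")

# average_rank(["p3", "p1", "p7", "p2"], relevant={"p1", "p2"})  -> (2 + 4) / 2 = 3.0
```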

16.
Traditional text classification algorithms under the vector space model suffer from high-dimensional, sparse vectors and from ignoring semantic relations among features, leading to low classification efficiency and accuracy. To address this, this paper takes HowNet as the knowledge base and builds a Semantic Concept Vector Model (SCVM) to represent texts, merging synonyms and disambiguating polysemous words according to concept semantics and context. It proposes TCABCC (Text Classification Algorithm Based on the Concept of Clusters), which improves traditional KNN by representing each category's training samples as concept clusters, so that similarity is computed between text concept vectors and category concept clusters. Experimental results show that the classifier built by this algorithm improves considerably on traditional KNN in both efficiency and performance.

17.
Research on a Text Classification Method with Rough Set Weighting (total citations: 6; self-citations: 0; citations by others: 6)
Automatic text classification is an important research topic in intelligent information processing. This paper analyses the basic characteristics of statistics-based text classification and proposes a new term-weighting formula built on the classification quality of the variable precision rough set model. Compared with the widely used inverse document frequency weighting, the new weighting greatly improves the distribution of text samples across the feature space, shrinking within-class distances and enlarging between-class distances, which in theory makes the samples more separable. Finally, experiments with two classifiers, support vector machine and k-nearest neighbours, verify that the new weighting does improve classification results.
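A simplified, single-attribute sketch of the classification quality that drives the weighting: the share of documents lying in equivalence classes whose majority label reaches the precision threshold beta. The paper's full weighting formula is not given in the abstract, so treat this as an illustration of the underlying VPRS quantity only.

```python
from collections import Counter, defaultdict

def vprs_quality(attr_values, labels, beta=0.8):
    """Classification quality of one attribute under the variable precision
    rough set model (single-attribute simplification)."""
    groups = defaultdict(list)
    for value, label in zip(attr_values, labels):
        groups[value].append(label)          # equivalence classes by value
    positive = 0
    for members in groups.values():
        top = Counter(members).most_common(1)[0][1]
        if top / len(members) >= beta:       # beta-positive equivalence class
            positive += len(members)
    return positive / len(labels)

# Candidate weight of term t: vprs_quality([t in doc for doc in docs], labels)
```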

18.
Subject classification arises as an important topic for bibliometrics and scientometrics, seeking to develop reliable and consistent tools and outputs. Such objectives also call for a well delimited underlying subject classification scheme that adequately reflects scientific fields. Within the broad ensemble of classification techniques, clustering analysis is one of the most successful. Two clustering algorithms based on modularity – the VOS and Louvain methods – are presented here for the purpose of updating and optimizing the journal classification of the SCImago Journal & Country Rank (SJR) platform. We used network analysis and Pajek visualization software to run both algorithms on a network of more than 18,000 SJR journals combining three citation-based measures of direct citation, co-citation and bibliographic coupling. The set of clusters obtained was labelled using the category labels assigned to SJR journals and significant words from journal titles. Despite the fact that both algorithms exhibited slight differences in performance, the results show a similar behaviour in grouping journals. Consequently, they are deemed to be appropriate solutions for classification purposes. The two newly generated algorithm-based classifications were compared to other bibliometric classification systems, including the original SJR and WoS Subject Categories, in order to validate their consistency, adequacy and accuracy. In addition to some noteworthy differences, we found a certain coherence and homogeneity among the four classification systems analysed.
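The Louvain half of the experiment, sketched with networkx (version 2.8 or later) rather than Pajek: journals are nodes and edge weights would combine the three citation-based measures, with toy journals and combination weights assumed here.

```python
import networkx as nx
from networkx.algorithms import community

# Journal network; weights stand in for a combination of direct citation,
# co-citation and bibliographic coupling (combination scheme assumed).
G = nx.Graph()
G.add_weighted_edges_from([
    ("J. Informetr.", "Scientometrics", 0.9),
    ("Scientometrics", "J. Doc.", 0.4),
    ("J. Doc.", "Libr. Trends", 0.7),
])

# Louvain modularity clustering; returns a list of sets of journals.
clusters = community.louvain_communities(G, weight="weight", seed=42)
print(clusters)
```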
