Similar Literature
20 similar documents were retrieved.
1.
Automatic text classification is a fundamental task in text information processing. This paper applies case-based reasoning to text classification: topic words and frequent word co-occurrence itemsets are extracted from texts using word co-occurrence information, and the case base is indexed with a clustering algorithm, yielding an automatic text classification system based on case-based reasoning. Experiments show that, compared with TF-IDF text representation and the nearest-neighbor classification algorithm, the co-occurrence-based text representation and the clustered case-base index effectively improve classification accuracy and efficiency, thereby broadening the application domain of case-based reasoning.
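A minimal sketch of the retrieval step described above, using scikit-learn: word and bigram counts stand in for the paper's co-occurrence features, a k-means partition of the case base serves as the clustering index, and a new text is classified by reusing the label of its nearest case within the closest cluster. The toy corpus and labels are invented for illustration; this is not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# labelled cases; unigram+bigram counts as a rough proxy for co-occurrence features
cases = ["stock market rises", "football match won", "bond yields fall", "tennis final played"]
labels = ["finance", "sport", "finance", "sport"]
vec = CountVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(cases)

# cluster the case base once; at query time only the nearest cluster is searched
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

query = vec.transform(["market prices fall"])
c = km.predict(query)[0]                        # index lookup: closest cluster
idx = np.where(km.labels_ == c)[0]
best = idx[cosine_similarity(query, X[idx]).argmax()]
print(labels[best])                             # reuse the label of the retrieved case
```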

2.
Search Engines and Search Technologies for Web Text Data
This paper describes the functions, characteristics and operating principles of search engines based on Web text, and the searching and data mining technologies for Web-based text information. Methods of computer-aided text clustering and abstracting are also given. Finally, it gives some guidelines for the assessment of searching quality.

3.
Text clustering is a well-known method for information retrieval, and numerous methods for classifying words, documents or both together have been proposed. Frequently, textual data are encoded using vector models, so the corpus is transformed into a matrix of terms by documents; using this representation, text clustering generates groups of similar objects on the basis of the presence/absence of words in the documents. An alternative way to work on texts is to represent them as a network where nodes are entities connected by the presence and distribution of the words in the documents. In this work, after summarising the state of the art of text clustering, we present a new network approach to textual data. We undertake text co-clustering using methods developed for social network analysis. Several experimental results are presented to demonstrate the validity of the approach and the advantages of this technique compared to existing methods.
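The network view of a corpus can be sketched with networkx: documents and words become nodes of a bipartite graph, and a modularity-based community detection method borrowed from social network analysis groups documents and words simultaneously. This is an illustrative stand-in, not the paper's specific co-clustering method, and the tiny corpus is invented.

```python
import networkx as nx
from networkx.algorithms import community

docs = {"d1": ["cluster", "graph"], "d2": ["graph", "network"], "d3": ["tennis", "match"]}

# bipartite graph: each document node is connected to the words it contains
g = nx.Graph()
for d, words in docs.items():
    for w in words:
        g.add_edge(d, w)

# modularity communities group documents AND words together (co-clustering)
parts = community.greedy_modularity_communities(g)
print([sorted(p) for p in parts])
```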

4.
This paper describes a dimensionality-reduction approach based on feature-word clustering. The main idea is to treat the occurrence of a word in a text as an event: a search algorithm first computes the distribution of each feature word, and feature words that play a similar role in classification are then merged, which reduces the feature dimensionality. Finally, based on experimental analysis, an improved similarity formula that takes global cluster information into account is proposed and applied to text classification; experiments show that it improves classification accuracy.
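One way to realise this idea, sketched under the assumption that each feature word is represented by its class-conditional distribution estimated from a labelled corpus: words whose distributions are close contribute similar evidence to the classifier, so each cluster of words can be merged into a single feature. The numbers are toy values; neither the paper's search algorithm nor its global-cluster similarity formula is reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

# rows: feature words; columns: P(class | word), here invented for illustration
word_dist = np.array([[0.90, 0.10],   # "goal"
                      [0.85, 0.15],   # "match"
                      [0.10, 0.90],   # "stock"
                      [0.05, 0.95]])  # "bond"

# words with similar class distributions act alike in classification,
# so each cluster can be collapsed into one feature (dimensionality reduction)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(word_dist)
print(km.labels_)  # e.g. [0 0 1 1]: merge goal+match and stock+bond
```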

5.
巫桂梅 《科技通报》2012,28(7):148-151
This paper studies fast and accurate text classification. The same word may carry different meanings in different contexts or when used by different writers, yet such words have almost identical descriptive features in text classification. The traditional approach represents a text in the vector space model, which is constructed only from word-occurrence frequencies, so when polysemous words and synonyms appear in a text, classification suffers from noticeable delays and low accuracy. A text-topic matching method based on latent semantic indexing is therefore proposed: after keyword extraction, a document-term matrix is constructed; after SVD factorization, an optimized balancing method performs dimensionality reduction and similarity computation, overcoming the drawbacks of the traditional method. Experiments show that this approach greatly reduces the impact of synonyms and polysemous words on text classification, making topic-based matching accurate and efficient, with clearly improved experimental results.
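The document-term-matrix-plus-truncated-SVD pipeline the abstract describes corresponds to latent semantic indexing, which can be sketched with scikit-learn as below. The toy corpus is invented, and the paper's "optimized balancing" for choosing the reduced rank is not reproduced; a fixed number of components is used instead.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the bank approved the loan",
        "the river bank was flooded",
        "loan interest rates rose"]          # toy documents

X = TfidfVectorizer().fit_transform(docs)    # document-term matrix

# truncated SVD projects documents into a latent semantic space, so words are
# compared by context rather than surface form, dampening synonymy/polysemy
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)

print(cosine_similarity(Z))                  # topic-level document similarity
```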

6.
With the popularity of social platforms such as Sina Weibo and Twitter, a large number of public events spread rapidly on social networks, and huge amounts of textual data are generated along with the discussion of netizens. Social text clustering has become one of the most critical methods to help people find relevant information, and it provides quality data for subsequent timely public opinion analysis. Most existing neural clustering methods rely on manual labeling of training sets and take a long time in the learning process. Due to the explosiveness and the large scale of social media data, it is a challenge for social text clustering to satisfy the timeliness demands of users. This paper proposes a novel unsupervised event-oriented graph clustering framework (EGC), which achieves efficient clustering performance on large-scale datasets with little time overhead and does not require any labeled data. Specifically, EGC first mines the potential relations existing in social text data and transforms the textual data of social media into an event-oriented graph, taking advantage of the graph structure to represent complex relations. Secondly, EGC uses a keyword-based local importance method to accurately measure the weights of relations in the event-oriented graph. Finally, a bidirectional depth-first clustering algorithm based on the interrelations is proposed to cluster the nodes of the event-oriented graph. By projecting the relations of the graph into a smaller domain, EGC achieves fast convergence. The experimental results show that the clustering performance of EGC on the Weibo dataset reaches 0.926 (NMI), 0.926 (AMI) and 0.866 (ARI), 13%–30% higher than other clustering methods. In addition, the average query time on EGC-clustered data is 16.7 ms, 90% less than on the original data.

7.
Arabic is a widely spoken language, but few mining tools have been developed to process Arabic text. This paper examines the crime domain in the Arabic language (unstructured text) using text mining techniques. The development and application of a Crime Profiling System (CPS) is presented. The system is able to extract meaningful information, in this case the type of crime, location and nationality, from Arabic-language crime news reports. The system has two unique attributes: firstly, information extraction that depends on local grammar, and secondly, dictionaries that can be generated automatically. It is shown that the CPS improves the quality of the data through reduction, where only meaningful information is retained. Moreover, the Self-Organising Map (SOM) approach is adopted to cluster the crime reports by crime type. This clustering technique is improved because only refined data containing meaningful keywords extracted through the information extraction process are input to it, i.e. the data are cleansed by removing noise. The proposed system is validated through experiments using a corpus collated from different sources that was not used during system development. Precision, recall and F-measure are used to evaluate the performance of the proposed information extraction approach, and comparisons are conducted with other systems. To evaluate the clustering performance, three parameters are used: data size, loading time and quantization error.
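A rough sketch of the clustering stage using the third-party MiniSom package (`pip install minisom`): TF-IDF vectors of the refined keyword strings are mapped onto a small SOM grid, and the quantization error, one of the evaluation parameters the paper uses, is reported. The sample reports and grid size are placeholders, not the CPS system itself.

```python
from minisom import MiniSom  # third-party SOM implementation
from sklearn.feature_extraction.text import TfidfVectorizer

# hypothetical refined keyword strings produced by the information-extraction step
reports = ["theft market cairo egyptian", "murder riyadh saudi", "theft mall jeddah saudi"]
X = TfidfVectorizer().fit_transform(reports).toarray()

som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, 500)                  # unsupervised training

cells = [som.winner(x) for x in X]        # map each report to a grid cell
print(cells, som.quantization_error(X))   # reports sharing a cell form a cluster
```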

8.
Traditional microblog clustering analyses either classify the structured data alone (read counts, comment counts, etc., hereafter "structured microblog data") or classify the token data obtained by word-segmenting the microblog content alone (hereafter "microblog tokens"). This paper applies Kohonen clustering to study whether clustering the fused structured data and tokens improves on clustering either source by itself. Experiments on empirical data show that clustering the structured data alone yields one cluster with an exceptionally large standard deviation (called an outlier cluster here), whereas no outlier cluster appears when the fused data are clustered; the fusion has no significant effect on the clustering of the microblog tokens.

9.
Today, due to the vast amount of textual data, automated extractive text summarization is one of the most common and practical techniques for organizing information. Extractive summarization selects the most appropriate sentences from the text and provides a representative summary. Sentences, as individual textual units, are usually too short for major text processing techniques to perform well, so it is vital to bridge the gap between short text units and conventional text processing methods. In this study, we propose a semantic method for implementing an extractive multi-document summarizer system using a combination of statistical, machine-learning-based and graph-based methods. It is a language-independent and unsupervised system. The proposed framework learns the semantic representation of words from a set of given documents via the word2vec method. It expands each sentence through an innovative method with the most informative and the least redundant words related to the main topic of the sentence. Sentence expansion implicitly performs word sense disambiguation and tunes the conceptual densities towards the central topic of each sentence. Then, it estimates the importance of sentences using the graph representation of the documents. To identify the most important topics of the documents, we propose an inventive clustering approach that autonomously determines the number of clusters and their initial centroids, and clusters sentences accordingly. The system selects the best sentences from appropriate clusters for the final summary with respect to information salience, minimum redundancy and adequate coverage. A set of extensive experiments on the DUC2002 and DUC2006 datasets was conducted to investigate the proposed scheme. Experimental results showed that the proposed sentence expansion algorithm and clustering approach considerably enhance the performance of the summarization system. Comparative experiments also demonstrated that the proposed framework outperforms most of the state-of-the-art summarizer systems and can impressively assist the task of extractive text summarization.
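The backbone of such a pipeline, word2vec representations feeding a graph-based salience estimate, can be sketched with gensim and networkx. Mean word vectors replace the paper's richer sentence-expansion step, and PageRank centrality stands in for its importance estimation; the sentences are toy examples.

```python
import networkx as nx
import numpy as np
from gensim.models import Word2Vec
from sklearn.metrics.pairwise import cosine_similarity

raw = ["the cat sat on the mat",
       "the dog chased the cat",
       "stock markets fell sharply today",
       "investors sold shares as markets fell"]
sentences = [s.split() for s in raw]

w2v = Word2Vec(sentences, vector_size=50, min_count=1, seed=0)

# crude sentence embedding: mean of word vectors (the paper's expansion step is richer)
S = np.array([w2v.wv[ws].mean(axis=0) for ws in sentences])

# sentence graph weighted by similarity; centrality approximates salience
sim = cosine_similarity(S)
g = nx.from_numpy_array(sim)
rank = nx.pagerank(g)
summary = sorted(rank, key=rank.get, reverse=True)[:2]  # indices of top sentences
print([raw[i] for i in summary])
```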

10.
范宇中  张玉峰 《情报科学》2003,21(1):103-105
Combining principles and techniques from information management and artificial intelligence, this paper discusses methods for the automatic classification of textual knowledge, including automatic categorization and clustering, instance-based learning classification, and meta-learning based on feature values.

11.
This paper introduces free competitive-intelligence visualization tools for Twitter, including social-network exploration tools, microblog information aggregation tools, text analysis tools and integrated analysis tools, and uses examples to show how these tools can be used to gather information about competitors and analyze it visually. The aim is to demonstrate the application potential of free competitive-intelligence visualization tools and to provide a reference for developing similar products for Chinese microblogs.

12.
左晓飞  刘怀亮  范云杰  赵辉 《情报杂志》2012,31(5):180-184,191
Traditional keyword-based text clustering algorithms have difficulty exploiting the semantic features of texts, so their clustering results leave much to be desired. The authors propose the notion of a concept semantic field and give an algorithm for constructing concept semantic fields from HowNet: first, a sememe-shielding layer is built with HowNet to mask sememes with weak descriptive power; then, based on an analysis of HowNet's structure, rules for extracting related concepts and construction methods for simple and complex concept semantic fields are given. Finally, a text clustering algorithm based on concept semantic fields is presented. The algorithm makes full use of the semantic relations among feature words and also handles irregularly shaped clusters well. Experiments show that it effectively improves clustering quality.

13.
[Purpose/Significance] Deep learning has been one of the research hotspots in artificial intelligence in recent years. Understanding the state of deep-learning research in information organization and retrieval provides a reference for further study in this area. [Method/Process] By reviewing the domestic literature on deep learning for information organization and retrieval, this paper analyzes the relevant deep-learning models, describes the hot research topics, and, combining the characteristics of deep-learning techniques with the research content of information organization and retrieval, forecasts the prospects for its application. [Result/Conclusion] The study shows that current research hotspots concentrate on four topics: intelligent information extraction, automatic text classification, sentiment analysis and text clustering. In the future, deep learning in information organization and retrieval is expected to develop toward heterogeneous information processing, intelligent information retrieval and personalized information recommendation.

14.
A suite of computer programs has been developed for representing the full text of lengthy documents in vector form and classifying them by a clustering method. The programs have been applied to the full text of the Conventions and Agreements of the Council of Europe, which consist of some 280,000 words in the English version and a similar number in the French. Results of the clustering experiments are presented in the form of dendrograms (tree diagrams) using both the treaty and the article as the clustering unit. The conclusion is that vector techniques based on the full text provide an effective method of classifying legal documents.
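A compact sketch of the vector-plus-dendrogram workflow with SciPy, assuming the treaty texts are already available as strings (the snippets below are placeholders): each document becomes a TF-IDF vector, and average-link agglomerative clustering over cosine distances yields the tree diagram.

```python
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["extradition of offenders between member states",
         "protection of human rights and fundamental freedoms",
         "mutual assistance in criminal matters"]   # placeholder treaty texts
X = TfidfVectorizer().fit_transform(texts).toarray()

Z = linkage(X, method="average", metric="cosine")   # agglomerative clustering
dendrogram(Z, labels=["T1", "T2", "T3"])            # the paper's tree diagram
plt.show()
```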

15.
Construction of a Monitoring and Analysis Platform for Information on the Frontiers of Science and Technology Development
This paper presents the design and implementation of a monitoring and analysis platform for frontier science and technology (S&T) information, featuring dynamic monitoring and tracking, rapid response, in-depth analysis, integrated functions and visual presentation. By combining database technology, web information crawling, ontology technology and text clustering, the platform achieves accurate and efficient information acquisition, the organization of concepts in different S&T fields and the revelation of their interrelations, and the mining of associations among S&T topics and their evolving trends. It aims to provide strategic decision makers in national and related departments with analyses of the knowledge structure and development trends of science and technology, summarized and presented in multiple visual forms, as an efficient decision-support service, and to supply effective research methods and analysis tools for intelligence research, improving its capability and efficiency.

16.
An Empirical Study of the Endogenous Core Competitiveness of Chinese Knowledge-Based Enterprises
Using text mining on interview transcripts of 40 executives of Chinese knowledge-based enterprises, 21 endogenous core-competitiveness factors of knowledge-based enterprises are induced, studying core competitiveness from a new angle. With Ward's clustering analysis, successful Chinese knowledge-based enterprises are divided into four classes, and knowledge and talent are identified as the two endogenous factors underlying their success. A BP (back-propagation) neural-network discrimination system is built to help knowledge-based enterprises classify themselves and cultivate the endogenous core-competitiveness factors suited to their actual capabilities and development.

17.
Stemming and lemmatization are important steps in English text processing. This paper uses three clustering algorithms to evaluate two stemming algorithms and one lemmatization algorithm fairly comprehensively. The results show that both stemming and lemmatization can improve the quality and efficiency of English text clustering, though their effect on the clustering results is not significant. Compared with the Snowball Stemmer and the Stanford Lemmatizer, the Porter Stemmer performs better and more stably on Entropy and Purity.
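The stemming-versus-lemmatization contrast can be seen with NLTK; note that the WordNet lemmatizer below stands in for the Stanford Lemmatizer (a Java tool), so this illustrates the general contrast rather than reproducing the paper's exact setup.

```python
import nltk
from nltk.stem import PorterStemmer, SnowballStemmer, WordNetLemmatizer

nltk.download("wordnet", quiet=True)  # lexicon required by the lemmatizer

words = ["studies", "running", "happier", "clusters"]
porter = PorterStemmer()
snowball = SnowballStemmer("english")
lemmatizer = WordNetLemmatizer()

for w in words:
    # stemmers chop suffixes ("studies" -> "studi"); the lemmatizer maps
    # to dictionary forms ("studies" -> "study")
    print(w, porter.stem(w), snowball.stem(w), lemmatizer.lemmatize(w))
```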

18.
We propose a new finite mixture model for clustering multiple-field documents, such as scientific literature with distinct fields: title, abstract, keywords, main text and references. This probabilistic model, which we call the field independent clustering model (FICM), incorporates the distinct word distribution of each field, both to integrate the discriminative abilities of the fields and to select the most suitable component probabilistic model for each field. We evaluated the performance of FICM by applying it to the problem of clustering three-field (title, abstract and MeSH) biomedical documents from the TREC 2004 and 2005 Genomics tracks, and two-field (title and abstract) news reports from Reuters-21578. Experimental results showed that FICM outperformed the classical multinomial model and the multivariate Bernoulli model at a statistically significant level for all three collections. These results indicate that FICM outperforms widely used probabilistic models for document clustering by considering the characteristics of each field. We further showed that a component model consistent with the nature of the corresponding field achieves better performance, and that considering the diversity of model settings gives a further improvement. An extended abstract of parts of the work presented in this paper appeared in Zhu, S., Takigawa, I., Zhang, S., & Mamitsuka, H. (2007). A probabilistic model for clustering text documents with multiple fields. In Proceedings of the 29th European Conference on Information Retrieval, ECIR 2007. Lecture Notes in Computer Science (Vol. 4425, pp. 331–342).

19.
[Purpose/Significance] For online travel platforms, this paper proposes a new strategy that mines topic tags from travel notes and recommends travel information through representative notes and their relevant content. [Method/Process] Text mining techniques are used to build an LDA topic model that produces topic tags for travel-note texts; a note-representativeness algorithm then selects, for each tag, the notes with high descriptiveness and high loyalty for recommendation, objectively expressing the text clustering results and the semantic relations among topic words. The approach is validated on the "Hangzhou travel notes" of the Mafengwo travel site. [Result/Conclusion] The results show that this approach can uncover tourists' real hotspots and key information needs from their past travel experience, performs well in identifying and clustering highly similar travel notes, and offers guidance and practical value for fine-grained travel-information recommendation.
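A condensed sketch of the tagging step with scikit-learn: LDA over segmented notes yields per-topic top words as tags, and the note with the highest topic weight serves as a crude proxy for the paper's representativeness score. The three English toy notes replace real Mafengwo travel notes.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

notes = ["west lake boat ride at sunset",
         "lingyin temple incense and history",
         "boat tour and lake view dinner"]        # toy stand-ins for travel notes
vec = CountVectorizer()
X = vec.fit_transform(notes)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
theta = lda.transform(X)                          # note-topic distribution

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    tag = [terms[i] for i in comp.argsort()[-3:]]  # top words as the topic tag
    rep = int(np.argmax(theta[:, k]))              # crude proxy for the paper's
    print(tag, "->", notes[rep])                   # representativeness measure
```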

20.
In recent years, the functionality of services has mainly been described in short natural-language text. Keyword-based searching for web service discovery is not efficient at providing relevant results. When services are clustered according to similarity, the search space is reduced, and with it the search time of the web service discovery process. In the domain of web service clustering, topic modeling techniques such as Latent Dirichlet Allocation (LDA), the Correlated Topic Model (CTM) and the Hierarchical Dirichlet Process (HDP) are therefore adopted for dimensionality reduction and for representing services as features in a vector space. But because services are described as short text, these techniques are inefficient owing to word sparsity, limited content and so on. In this paper, the performance of web service clustering is evaluated by applying various topic modeling techniques with different clustering algorithms on a dataset crawled from the ProgrammableWeb repository. The Gibbs Sampling algorithm for the Dirichlet Multinomial Mixture (GSDMM) model is proposed for dimensionality reduction and feature representation of services, to overcome the limitations of short-text clustering. Results show that GSDMM with K-Means or agglomerative clustering outperforms all other methods. Clustering performance is evaluated on the basis of three extrinsic and two intrinsic criteria. The dimensionality reduction achieved by GSDMM is 90.88%, 88.84% and 93.13% on the three real-time crawled datasets, which is satisfactory given that clustering performance is also enhanced by this technique.
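Since GSDMM is central here, a self-contained collapsed Gibbs sampler for the Dirichlet Multinomial Mixture (following Yin & Wang, 2014) is sketched below, with documents as lists of integer word ids. This is an illustrative implementation under arbitrary hyperparameters, not the paper's code.

```python
from collections import Counter
import numpy as np

def gsdmm(docs, vocab_size, K=20, alpha=0.1, beta=0.1, n_iters=15, seed=0):
    """Collapsed Gibbs sampling for the Dirichlet Multinomial Mixture (GSDMM)."""
    rng = np.random.default_rng(seed)
    m_z = np.zeros(K)                      # documents per cluster
    n_z = np.zeros(K)                      # words per cluster
    n_zw = np.zeros((K, vocab_size))       # per-cluster word counts
    z_d = rng.integers(K, size=len(docs))  # random initial assignments
    for d, doc in enumerate(docs):
        z = z_d[d]
        m_z[z] += 1
        n_z[z] += len(doc)
        for w in doc:
            n_zw[z, w] += 1
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            z = z_d[d]                     # take the document out of its cluster
            m_z[z] -= 1
            n_z[z] -= len(doc)
            for w in doc:
                n_zw[z, w] -= 1
            # log p(z | rest): cluster-popularity term plus word-fit term
            logp = np.log(m_z + alpha)
            for w, c in Counter(doc).items():
                for j in range(c):
                    logp += np.log(n_zw[:, w] + beta + j)
            for i in range(len(doc)):
                logp -= np.log(n_z + vocab_size * beta + i)
            p = np.exp(logp - logp.max())
            z = rng.choice(K, p=p / p.sum())
            z_d[d] = z                     # reinsert into the sampled cluster
            m_z[z] += 1
            n_z[z] += len(doc)
            for w in doc:
                n_zw[z, w] += 1
    return z_d

# toy usage: four two-word "service descriptions" over a vocabulary of 4 ids
print(gsdmm([[0, 1], [0, 1], [2, 3], [2, 3]], vocab_size=4, K=4))
```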
