Similar Documents
20 similar documents found (search time: 31 ms)
1.
Methods for document clustering and topic modelling in online social networks (OSNs) offer a means of categorising, annotating and making sense of large volumes of user-generated content. Many techniques have been developed over the years, ranging from text mining and clustering methods to latent topic models and neural embedding approaches. However, many of these methods deliver poor results when applied to OSN data, as such text is notoriously short and noisy, and results are often not comparable across studies. In this study we evaluate several techniques for document clustering and topic modelling on three datasets from Twitter and Reddit. We benchmark four feature representations, derived from term frequency-inverse document frequency (tf-idf) matrices and word embedding models, combined with four clustering methods, and include a latent Dirichlet allocation topic model for comparison. Several different evaluation measures are used in the literature, so we provide a discussion and recommendation of the most appropriate extrinsic measures for this task. We also examine how the methods perform on datasets with different document lengths. Our results show that clustering techniques applied to neural embedding feature representations delivered the best performance across all datasets under appropriate extrinsic evaluation measures. We also demonstrate a method for interpreting the clusters with a top-words-based approach that combines tf-idf weights with embedding distance measures.
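As a concrete illustration of the benchmarking setup described above, the following sketch clusters a toy corpus with k-means over both a tf-idf matrix and averaged word-embedding vectors, scores each with extrinsic measures (NMI/AMI), and interprets clusters via top tf-idf words. The documents, labels and tiny in-place Word2Vec model are hypothetical placeholders, not the study's data or models.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import adjusted_mutual_info_score, normalized_mutual_info_score

# hypothetical micro-corpus with ground-truth categories
docs = ["goal scored in the cup final tonight",
        "transfer rumours around the league",
        "election polls tighten before the vote",
        "parliament debates the new budget"]
labels = [0, 0, 1, 1]

tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(docs)

# averaged word-embedding representation (a tiny in-place Word2Vec;
# a large pretrained model would be used in practice)
tokens = [d.split() for d in docs]
w2v = Word2Vec(tokens, vector_size=50, min_count=1, seed=1)
X_emb = np.array([np.mean([w2v.wv[t] for t in ts], axis=0) for ts in tokens])

for name, X in [("tf-idf", X_tfidf), ("embedding", X_emb)]:
    pred = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
    print(name, normalized_mutual_info_score(labels, pred),
          adjusted_mutual_info_score(labels, pred))

# cluster interpretation: highest mean tf-idf weights per cluster
pred = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X_tfidf)
terms = tfidf.get_feature_names_out()
for c in range(2):
    centroid = X_tfidf[pred == c].mean(axis=0).A1
    print("cluster", c, [terms[i] for i in centroid.argsort()[::-1][:3]])
```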

2.
In this era, the proliferating role of social media in our lives has popularized the posting of short texts. Short texts contain limited context and have unique characteristics that make them difficult to handle. Every day, billions of short texts are produced in the form of tags, keywords, tweets, phone messages, messenger conversations, social network posts, etc. The analysis of these short texts is imperative in the fields of text mining and content analysis. Extracting precise topics from large-scale short-text collections is a critical and challenging task. Conventional approaches fail to capture word co-occurrence patterns in topics because of the sparsity of short texts, such as text over the web, social media posts such as tweets, and news headlines. Therefore, in this paper the sparsity problem is ameliorated by presenting a novel fuzzy topic modeling (FTM) approach that treats short text from a fuzzy perspective. In this research, local and global term frequencies are computed through a bag-of-words (BOW) model. To remove the negative impact of high dimensionality on the global term weighting, principal component analysis is adopted; thereafter, the fuzzy c-means algorithm is employed to retrieve semantically relevant topics from the documents. Experiments are conducted on three real-world short-text datasets: the snippets dataset is small, whereas the other two, Twitter and questions, are larger. Experimental results show that the proposed approach discovers topics more precisely and performs better than state-of-the-art baseline topic models such as GLTM, CSTM, LTM, LDA, Mix-gram, BTM, SATM, and DREx+LDA. The performance of FTM is also demonstrated on classification, clustering, topic coherence and execution time. FTM classification accuracy is 0.95, 0.94, 0.91, 0.89 and 0.87 on the snippets dataset with 50, 75, 100, 125 and 200 topics, and 0.73, 0.74, 0.70, 0.68 and 0.78 on the questions dataset with the same numbers of topics. The classification accuracies of FTM on both datasets are higher than those of the state-of-the-art baseline topic models.
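The pipeline above can be paraphrased in a few lines: bag-of-words counts, PCA to curb dimensionality, then fuzzy c-means for soft topic memberships. The sketch below implements a small NumPy fuzzy c-means as a stand-in for the paper's exact FTM formulation; the corpus and all parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer

docs = ["cheap phone deal today", "new phone released",
        "rain expected tomorrow", "sunny weather this week"]
X = CountVectorizer().fit_transform(docs).toarray().astype(float)  # BOW counts
X = PCA(n_components=2, random_state=0).fit_transform(X)           # reduce dimensionality

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate weighted centroids and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # random soft memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # fuzzily weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2.0 / (m - 1.0)))            # membership update rule
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

U, centers = fuzzy_cmeans(X)
print(U.round(2))  # soft topic memberships per document
```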

3.
4.
Webpages are mainly distinguished by their topic (e.g., politics, sports, etc.) and genre (e.g., blogs, homepages, e-shops, etc.). Automatic detection of webpage genre could considerably enhance the ability of modern search engines to focus on the requirements of the user's information need. In this paper, we present an approach to webpage genre detection based on fully automated extraction of a feature set that represents the style of webpages. The features we propose (character n-grams of variable length and HTML tags) are language-independent and easily extracted, and they can be adapted to the properties of still-evolving web genres and the noisy environment of the web. Experiments based on two publicly available corpora show that the performance of the proposed approach is superior to previously reported results. It is also shown that character n-grams are better features than words when the dimensionality increases, and that the binary representation is more effective than the term-frequency representation for both feature types. Moreover, we perform a series of cross-check experiments (e.g., training on one genre palette and testing on a different one, and using the features extracted from one corpus to discriminate the genres of the other corpus) to illustrate the robustness of our approach and its ability to capture the general stylistic properties of genre categories even when the feature set is not optimized for the given corpus.
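A minimal rendering of the proposed feature set is straightforward with standard tooling: binary character n-grams of variable length computed over raw HTML (so tags contribute features), fed to a linear classifier. The pages, labels and hyperparameters below are invented for illustration only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# hypothetical raw pages; HTML tags are deliberately kept in the text
pages = ["<html><p>Posted by admin on Monday. Leave a comment below.</p></html>",
         "<html><h1>Buy now! Add to cart. Free shipping.</h1></html>"]
genres = ["blog", "e-shop"]

# binary (presence/absence) character n-grams of length 3 to 5
vec = CountVectorizer(analyzer="char", ngram_range=(3, 5), binary=True)
X = vec.fit_transform(pages)

clf = LinearSVC().fit(X, genres)
test = ["<html><p>my weekend trip, comments welcome</p></html>"]
print(clf.predict(vec.transform(test)))
```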

5.
Digital documents such as books and journal articles have few textual features, which makes their feature vectors semantically imprecise and leads to poor classification results. To address this, we propose a classification method for digital documents based on semantic feature expansion. The method first uses TF-IDF to obtain core feature terms with high TF-IDF values that represent the document text well; it then expands the semantic concepts of these core terms using the HowNet semantic lexicon and the open knowledge base Wikipedia, building a low-dimensional, semantically rich concept vector space; finally, it builds classifiers with algorithms such as MaxEnt and SVM to classify the digital documents automatically. Experimental results show that, compared with traditional feature-selection-based short text classification methods, this method effectively expands the semantics of short text features and improves the classification performance on digital documents.
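A hedged sketch of the expansion idea follows: select the highest-weighted tf-idf terms per document, expand them through a knowledge-base lookup, and train a MaxEnt (logistic regression) classifier on the expanded text. The `expand()` table is a hypothetical stand-in for the HowNet/Wikipedia lookups used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["neural network training", "deep learning model",
        "stock market prices", "bond yields rise"]
labels = ["cs", "cs", "finance", "finance"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
terms = vec.get_feature_names_out()

def core_terms(row, k=2):
    """Terms with the highest tf-idf weight in one document."""
    idx = row.toarray().ravel().argsort()[::-1][:k]
    return [terms[i] for i in idx]

def expand(term):
    """Hypothetical knowledge-base lookup (HowNet/Wikipedia in the paper)."""
    kb = {"network": ["graph"], "market": ["exchange"]}
    return kb.get(term, [])

# rebuild each document from its core terms plus their expansions
expanded = [" ".join(sum(([t] + expand(t) for t in core_terms(X[i])), []))
            for i in range(len(docs))]
Xe = TfidfVectorizer().fit_transform(expanded)
clf = LogisticRegression().fit(Xe, labels)  # MaxEnt classifier on expanded space
```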

6.
Topic models often produce unexplainable topics that are filled with noisy words. The reason is that words in topic modeling have equal weights: high-frequency words dominate the top topic word lists, but most of them are meaningless words, e.g., domain-specific stopwords. To address this issue, in this paper we investigate how to weight words, and develop a straightforward but effective term weighting scheme, namely entropy weighting (EW). The proposed EW scheme is based on conditional entropy measured by word co-occurrences. Compared with existing term weighting schemes, the highlight of EW is that it can automatically reward informative words. For more robust word weights, we further suggest a combined form of EW (CEW) that incorporates two existing weighting schemes. Essentially, CEW assigns meaningless words lower weights and informative words higher weights, leading to more coherent topics during topic modeling inference. We apply CEW to Dirichlet multinomial mixture and latent Dirichlet allocation, and evaluate it by topic quality, document clustering and classification tasks on 8 real-world data sets. Experimental results show that weighting words can effectively improve topic modeling performance over both short texts and normal long texts. More importantly, the proposed CEW significantly outperforms the existing term weighting schemes, since it further considers which words are informative.
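The co-occurrence-entropy ingredient can be sketched as follows: for each word, form the distribution of words that co-occur with it in the same document and compute that distribution's entropy. How EW maps this quantity to a reward, and how CEW combines it with other schemes, is not reproduced here; the corpus is a toy.

```python
import math
from collections import Counter, defaultdict

docs = [["apple", "banana", "fruit"],
        ["apple", "market", "price"],
        ["price", "market", "stock"]]

# document-level co-occurrence counts for each word
cooc = defaultdict(Counter)
for doc in docs:
    for w in set(doc):
        for v in set(doc):
            if v != w:
                cooc[w][v] += 1

def cond_entropy(w):
    """Entropy (bits) of the co-occurrence distribution around word w."""
    total = sum(cooc[w].values())
    return -sum((c / total) * math.log2(c / total) for c in cooc[w].values())

for w in sorted(cooc):
    print(w, round(cond_entropy(w), 3))
```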

7.
Today, due to a vast amount of textual data, automated extractive text summarization is one of the most common and practical techniques for organizing information. Extractive summarization selects the most appropriate sentences from the text and provides a representative summary. The sentences, as individual textual units, are usually too short for major text processing techniques to perform well. Hence, it seems vital to bridge the gap between short text units and conventional text processing methods. In this study, we propose a semantic method for implementing an extractive multi-document summarizer system using a combination of statistical, machine-learning-based and graph-based methods. It is a language-independent and unsupervised system. The proposed framework learns the semantic representation of words from a set of given documents via the word2vec method. It expands each sentence, through an innovative method, with the most informative and least redundant words related to the main topic of the sentence. Sentence expansion implicitly performs word sense disambiguation and tunes the conceptual densities towards the central topic of each sentence. The framework then estimates the importance of sentences using a graph representation of the documents. To identify the most important topics of the documents, we propose an inventive clustering approach that autonomously determines the number of clusters and their initial centroids, and clusters sentences accordingly. The system selects the best sentences from appropriate clusters for the final summary with respect to information salience, minimum redundancy and adequate coverage. A set of extensive experiments on the DUC2002 and DUC2006 datasets was conducted to investigate the proposed scheme. Experimental results showed that the proposed sentence expansion algorithm and clustering approach considerably enhance the performance of the summarization system. Comparative experiments also demonstrated that the proposed framework outperforms most of the state-of-the-art summarizer systems and can impressively assist the task of extractive text summarization.
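The graph-based importance estimation can be illustrated compactly: embed sentences as word2vec averages, build a cosine-similarity graph, and rank sentences with PageRank. The sentence expansion and the custom clustering steps of the full system are omitted; the sentences and the tiny model are toy placeholders.

```python
import numpy as np
import networkx as nx
from gensim.models import Word2Vec

sents = [["the", "cat", "sat", "here"],
         ["the", "dog", "ran", "home"],
         ["cats", "and", "dogs", "play"]]
w2v = Word2Vec(sents, vector_size=25, min_count=1, seed=1)

# sentence vectors: mean of word vectors, length-normalized
V = np.array([np.mean([w2v.wv[t] for t in s], axis=0) for s in sents])
V /= np.linalg.norm(V, axis=1, keepdims=True)

sim = np.clip(V @ V.T, 0.0, None)  # cosine similarities, negatives dropped
np.fill_diagonal(sim, 0.0)         # no self-loops in the sentence graph
scores = nx.pagerank(nx.from_numpy_array(sim))
best = max(scores, key=scores.get)
print("most central sentence:", " ".join(sents[best]))
```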

8.
The class distribution of imbalanced data sets is skewed in practical applications. Because traditional clustering methods are mainly designed to improve overall learning performance, the majority class tends to dominate the clustering, while the more valuable minority class may be ignored. Moreover, existing clustering methods can perform poorly on imbalanced, high-dimensional domains. In this paper, we present one-step spectral rotation clustering for imbalanced high-dimensional data (OSRCIH), integrating self-paced learning and spectral rotation clustering in a unified learning framework in which sample selection and dimensionality reduction are considered simultaneously with mutual, iterative updates. Specifically, the imbalance problem is addressed by selecting the same number of training samples from each intrinsic group of the training data, where the sample-weight vector is obtained by self-paced learning. Dimensionality reduction is conducted by combining subspace learning and feature selection. Experimental analysis on synthetic and real datasets showed that OSRCIH recognizes and enhances the weight of important samples and features, keeping the clustering from favoring the majority class and effectively improving clustering performance.
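Only the balancing intuition is easy to sketch briefly: cluster the data, then keep equally many samples from each discovered group. In the paper this happens jointly with self-paced learning and spectral rotation; below, plain spectral clustering and simple truncation stand in, so treat this strictly as a sketch of the idea rather than the method itself.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs

# synthetic imbalanced data: 80 vs 20 samples in two groups
X, _ = make_blobs(n_samples=[80, 20], centers=[[0, 0], [4, 4]], random_state=0)
groups = SpectralClustering(n_clusters=2, random_state=0).fit_predict(X)

k = min(np.bincount(groups))  # size of the smallest discovered group
balanced = np.concatenate([np.where(groups == g)[0][:k]
                           for g in np.unique(groups)])
X_bal = X[balanced]           # equally many samples from each group
print(X_bal.shape)            # (40, 2): 20 per group
```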

9.
This paper describes a dimensionality reduction approach based on feature-word clustering. The key idea is to treat the occurrence of a word in a text as an event: a search algorithm first computes the distribution of each feature word, and feature words that play similar roles in classification are then merged, thereby reducing the feature dimensionality. Finally, based on experimental analysis, an improved similarity formula that takes global cluster information into account is proposed and applied to text classification; experiments show that it improves classification accuracy.
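The merging idea can be illustrated as follows: represent each feature word by its distribution over classes and group words with similar distributions into a single merged feature. Plain k-means stands in for the paper's search algorithm and its improved, globally informed similarity formula; the distributions are toy values.

```python
import numpy as np
from sklearn.cluster import KMeans

# rows: feature words; columns: estimated P(class | word) from training data
words = ["goal", "match", "vote", "ballot"]
dist = np.array([[0.9, 0.1],
                 [0.8, 0.2],
                 [0.1, 0.9],
                 [0.2, 0.8]])

# merge words whose class distributions are similar
merged = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(dist)
for c in set(merged):
    print("merged feature", c, [w for w, g in zip(words, merged) if g == c])
```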

10.
闫智  韩爽 《现代情报》2016,36(8):103-108
From the perspective of deep use of data resources and support for scientific research, this study compares the performance of Web of Science and Google Scholar in the field of chemistry. The comparison covers retrieval performance, analysis capabilities, coverage, citation tracking and topic tracking, supported by case analyses. The results show that Web of Science has advantages in authority, stability and functionality in chemistry, yet its professional capabilities remain underused. The paper argues that libraries should dig deeper into the practical value of data resources, proposes a "from search to discovery" service approach, and promotes a two-way service model combining skills services and analysis services.

11.
A methodology for automatically identifying and clustering semantic features or topics in a heterogeneous text collection is presented. Textual data are encoded using a low-rank nonnegative matrix factorization algorithm to retain natural data nonnegativity, thereby eliminating the need for the subtractive basis-vector and encoding calculations present in techniques such as principal component analysis for semantic feature abstraction. Existing techniques for nonnegative matrix factorization are reviewed, and a new hybrid technique for nonnegative matrix factorization is proposed. Performance evaluations of the proposed method are conducted on a few benchmark text collections used in standard topic detection studies.
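A standard (non-hybrid) NMF pipeline already shows the semantic-feature abstraction being described: factorize a tf-idf matrix into nonnegative document-topic and topic-term matrices and read off the top terms per topic. The paper's hybrid factorization algorithm is not reproduced here, and the corpus is invented.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["space shuttle launch orbit", "rocket launch delayed",
        "team wins league title", "league match postponed"]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)

nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)   # document-topic weights (nonnegative)
H = nmf.components_        # topic-term weights (nonnegative)

terms = vec.get_feature_names_out()
for k, row in enumerate(H):
    print("topic", k, [terms[i] for i in row.argsort()[::-1][:3]])
```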

12.
In current technology opportunity identification, the innovation paths that are identified tend to be abstract and vague, and in many cases domain experts are needed to interpret them. Taking cold storage technology as an example, this study builds a technology innovation path identification model based on text mining, machine learning algorithms and multi-dimensional patent maps. First, an identification framework is constructed: relevant patent documents are segmented, cleaned and otherwise preprocessed, and a knowledge graph is built. Second, keywords are extracted from the patent documents with the term frequency-inverse document frequency (TF-IDF) text mining method, and the latent Dirichlet allocation (LDA) algorithm is used to cluster topics, reduce dimensionality and distill innovation dimensions. Third, guided by the target technical problems and preferred innovation principles, these dimensions are coupled and transformed on a multi-dimensional patent map to render concrete innovation paths with practical meaning and promising value. Finally, extenics is used to compute a comprehensive correlation degree for each innovation path so that the paths can be rated and the best ones selected. The aim is to reduce innovation costs, improve innovation efficiency, and provide decision support for enterprises to carry out technological innovation precisely and continuously strengthen their core competitiveness.
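The keyword and topic stages can be sketched with standard components: TF-IDF keyword extraction over patent texts, then LDA to group them into candidate innovation dimensions. The patent-map coupling and extenics-based scoring are beyond a short example, and the patent snippets below are invented.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

patents = ["cold storage insulation panel design",
           "refrigeration compressor energy saving",
           "insulation material thermal panel",
           "compressor control energy efficiency"]

# stage 1: tf-idf keywords per patent
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(patents)
terms = tfidf.get_feature_names_out()
for i in range(len(patents)):
    row = X[i].toarray().ravel()
    print("keywords:", [terms[j] for j in row.argsort()[::-1][:2]])

# stage 2: LDA over raw counts to group patents into candidate dimensions
counts = CountVectorizer().fit_transform(patents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts).round(2))  # patent-topic mixture weights
```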

13.
In this paper, the task of text segmentation is approached from a topic modeling perspective. We investigate the use of two unsupervised topic models, latent Dirichlet allocation (LDA) and multinomial mixture (MM), to segment a text into semantically coherent parts. The proposed topic-model-based approaches consistently outperform a standard baseline method on several datasets. A major benefit of the proposed LDA-based approach is that, along with the segment boundaries, it outputs the topic distribution associated with each segment; this information is of potential use in applications such as segment retrieval and discourse analysis. However, the proposed approaches, especially the LDA-based method, have high computational requirements. Based on an analysis of the dynamic programming (DP) algorithm typically used for segmentation, we suggest a modification to DP that dramatically speeds up the process with no loss in performance. The proposed modification is not specific to topic models; it is applicable to all algorithms that use DP for text segmentation.
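The DP skeleton the paper builds on can be written generically: choose segment boundaries that minimize a summed per-segment cost, where in the paper the cost comes from the topic model. The toy cost below (segment vocabulary size) merely demonstrates the recursion; the paper's speed-up modification is not shown.

```python
def segment(sentences, cost, max_segs):
    """DP over boundaries: best[j][k] = min cost of splitting the first j
    sentences into exactly k segments; back[] records the last boundary."""
    n = len(sentences)
    INF = float("inf")
    best = [[INF] * (max_segs + 1) for _ in range(n + 1)]
    back = [[0] * (max_segs + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for k in range(1, max_segs + 1):
            for i in range(k - 1, j):  # last segment is sentences[i:j]
                c = best[i][k - 1] + cost(sentences[i:j])
                if c < best[j][k]:
                    best[j][k], back[j][k] = c, i
    bounds, j, k = [], n, max_segs   # walk back to recover the boundaries
    while k > 0:
        bounds.append(back[j][k])
        j, k = back[j][k], k - 1
    return sorted(bounds[:-1])       # drop the leading 0

toy = ["a a", "a b", "x y", "x x"]
cost = lambda seg: len(set(" ".join(seg).split()))  # toy: segment vocabulary size
print(segment(toy, cost, 2))  # boundary at index 2, where the vocabulary shifts
```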

14.
王琳  郑月龙  闫志伟 《科学学研究》2018,36(7):1305-1313
Establishing ties with knowledge service institutions is an important way for manufacturing firms to acquire service innovation resources, yet how diversified ties affect manufacturing firms' service innovation performance has received little attention and urgently needs an answer. Using a sample of 203 manufacturing firms in Chongqing, this paper examines the direct effects of partner diversity and geographic diversity of ties with knowledge service institutions on manufacturing firms' service innovation performance, and tests the moderating effect of boundary-spanning search experience. The results show that partner diversity has an inverted U-shaped effect on service innovation performance; geographic diversity has a positive linear effect but no curvilinear effect; boundary-spanning search experience moderates the inverted U-shaped relationship between partner diversity and service innovation performance; and boundary-spanning search experience strengthens the positive effect of geographic diversity on service innovation performance, while a moderating effect on the curvilinear relationship is not supported. These conclusions advance the theory of manufacturing service innovation from an inter-organizational relationship perspective and offer guidance for manufacturing firms in choosing an appropriate degree of diversity in their ties with knowledge service institutions.

15.
One of the important problems in text classification is the high dimensionality of the feature space. Feature selection methods reduce this dimensionality by selecting the most valuable features for classification. Apart from reducing dimensionality, feature selection has the potential to improve text classifiers' performance in terms of both accuracy and time. Furthermore, it helps to build simpler and, as a result, more comprehensible models. In this study we propose new methods for feature selection from textual data, called Meaning-Based Feature Selection (MBFS), based on the Helmholtz principle from the Gestalt theory of human perception, a principle also used in image processing. The proposed approaches are extensively evaluated by their effect on the classification performance of two well-known classifiers on several datasets, and compared with several feature selection algorithms commonly used in text mining. Our results demonstrate the value of the MBFS methods in terms of classification accuracy and execution time.
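In the Helmholtz spirit, a word is "meaningful" where its observed frequency is highly unexpected under a background model. The sketch below scores unexpectedness with a binomial tail against the corpus-wide rate; it illustrates the general principle only, not the paper's exact MBFS formulas, and the counts are toy numbers.

```python
import math
from scipy.stats import binom

def meaningfulness(k_in_class, n_class_tokens, k_total, n_total_tokens):
    """-log P(count >= k_in_class in the class under the corpus-wide rate)."""
    p = k_total / n_total_tokens
    tail = binom.sf(k_in_class - 1, n_class_tokens, p)  # P(X >= k_in_class)
    return -math.log(max(tail, 1e-300))

# toy counts: "goal" occurs 30 times in 1000 sports-class tokens,
# 35 times in 10000 corpus tokens overall
print(round(meaningfulness(30, 1000, 35, 10000), 2))  # very unexpected: keep
print(round(meaningfulness(4, 1000, 35, 10000), 2))   # near chance: drop
```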

16.
[Purpose/Significance] To extract effective keywords from massive user reviews of buffet restaurants and build topics and topic terms, helping merchants understand user word-of-mouth and improve management in the catering industry. [Method/Process] The optimal keywords are obtained by fusing three different keyword extraction methods, TF-IDF, TextRank and LMKE; the extracted keywords then undergo semantic clustering, topic identification, topic-term mining and topic weight calculation, and the method is validated on a dataset collected from Meituan. [Result/Conclusion] The experiments show that the fusion of the three keyword extraction methods performs better than any single algorithm. The topics obtained by clustering the review texts are taste, dishes, environment, service and price, with importance weights of taste 36.2%, service 22.9%, price 15.1%, environment 13.6% and dishes 12.2%. The results confirm that the method can effectively identify and construct topics and topic terms and compute the key content users focus on under each topic, providing a theoretical and technical basis for topic and topic-term mining and its applications in the catering industry. [Innovation/Limitation] A semi-supervised semantic clustering method for topic identification, topic-term construction and topic weight evaluation is proposed; the limitation is that the experiments only use buffet reviews from the Wuhan area, so the applicability of the constructed topics is limited.
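Two of the three fused extractors can be sketched with common tools: a corpus-level tf-idf ranking and a TextRank-style ranking over a word co-occurrence graph, fused by mean reciprocal rank. The LMKE extractor and the semi-supervised semantic clustering are not reproduced; the reviews below are invented English stand-ins.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

reviews = ["the taste was great but service slow",
           "great dishes and friendly service",
           "price a bit high for the taste"]

# ranking 1: corpus-level tf-idf weight
vec = TfidfVectorizer()
X = vec.fit_transform(reviews)
terms = vec.get_feature_names_out()
tfidf_rank = [terms[i] for i in X.sum(axis=0).A1.argsort()[::-1]]

# ranking 2: TextRank-style PageRank over a word co-occurrence graph
g = nx.Graph()
for r in reviews:
    ws = r.split()
    g.add_edges_from((ws[i], ws[j])
                     for i in range(len(ws)) for j in range(i + 1, len(ws)))
pr = nx.pagerank(g)
textrank_rank = sorted(pr, key=pr.get, reverse=True)

def fuse(*rankings):
    """Order words by mean reciprocal rank across all rankings."""
    words = set().union(*rankings)
    return sorted(words,
                  key=lambda w: -sum(1 / (r.index(w) + 1)
                                     for r in rankings if w in r))

print(fuse(tfidf_rank, textrank_rank)[:5])
```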

17.
The vector space model (VSM) is a textual representation method widely used in document classification, but it poses a space-consumption challenge. One attempt to alleviate the space problem is to use dimensionality reduction techniques; however, such techniques have deficiencies, including the loss of important information. In this paper, we propose a novel text classification method that uses neither the VSM nor dimensionality reduction techniques. The proposed method is space-efficient and utilizes a first-order Markov model for hierarchical Arabic text classification. For each category and sub-category, a Markov chain model is prepared based on neighboring character sequences. The prepared models are then used to score documents for classification. For evaluation, we used a hierarchical Arabic text collection containing 11,191 documents that belong to eight topics distributed over three levels. The experimental results show that the Markov chain based method significantly outperforms the baseline system that employs latent semantic indexing (LSI): the proposed method improves the F1-measure by 3.47%. The novelty of this work lies in the idea of decomposing words into sequences of characters, which was found to be a promising approach in terms of both space and accuracy. To the best of our knowledge, this is the first attempt at hierarchical Arabic text classification with such a relatively large data collection.
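The scoring mechanism is simple to sketch: per class, estimate first-order character transition counts, then assign a new document to the class under which its character sequence has the highest smoothed log-likelihood. The three-level hierarchy is collapsed to a flat choice here, and the training snippets are invented English text rather than Arabic.

```python
import math
from collections import Counter, defaultdict

train = {"sports": ["the team won the match", "a great goal in the game"],
         "finance": ["the bank raised the rate", "stocks fell on the news"]}

# per class: first-order character transition counts
models = {}
for cls, docs in train.items():
    trans, totals = defaultdict(Counter), Counter()
    for d in docs:
        for a, b in zip(d, d[1:]):
            trans[a][b] += 1
            totals[a] += 1
    models[cls] = (trans, totals)

def score(text, cls, alpha=1.0, V=40):
    """Laplace-smoothed log-likelihood of the character sequence under cls."""
    trans, totals = models[cls]
    return sum(math.log((trans[a][b] + alpha) / (totals[a] + alpha * V))
               for a, b in zip(text, text[1:]))

doc = "the goal came late in the match"
print(max(models, key=lambda c: score(doc, c)))  # expected: sports
```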

18.
Automatic text classification is fundamental to text information processing. This paper applies case-based reasoning to text classification: word co-occurrence information is used to extract topic words and frequent co-occurrence itemsets from texts, and a clustering algorithm is used to index the case base, yielding an automatic text classification system based on case-based reasoning. Experiments show that, compared with a TF-IDF text representation and a nearest-neighbour classifier, the co-occurrence-based text representation and the clustered index over the case base effectively improve classification accuracy and efficiency, thereby broadening the range of applications of case-based reasoning.
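The retrieval-and-reuse loop of case-based classification can be approximated in a few lines: index stored cases by their textual features, retrieve the nearest cases for a new text, and reuse the majority label. Tf-idf vectors stand in for the paper's co-occurrence itemsets, and the clustering index over the case base is omitted.

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

cases = ["loan interest bank credit", "bank deposit savings rate",
         "match goal league team", "team coach league win"]
case_labels = ["finance", "finance", "sports", "sports"]

vec = TfidfVectorizer()
X = vec.fit_transform(cases)
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

# retrieve nearest stored cases and reuse the majority label
_, idx = index.kneighbors(vec.transform(["the team scored a goal"]))
retrieved = [case_labels[i] for i in idx[0]]
print(Counter(retrieved).most_common(1)[0][0])  # expected: sports
```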

19.
Most document clustering algorithms operate in a high-dimensional bag-of-words space. The inherent presence of noise in such a representation obviously degrades the performance of most of these approaches. In this paper we investigate an unsupervised dimensionality reduction technique for document clustering, based upon the assumption that terms co-occurring in the same context with the same frequencies are semantically related. On the basis of this assumption we first find term clusters using a classification version of the EM algorithm. Documents are then represented in the space of these term clusters, and a multinomial mixture model (MM) is used to build document clusters. We show empirically on four document collections (Reuters-21578, Reuters RCV2-French, 20Newsgroups and WebKB) that this new text representation noticeably increases the performance of the MM model. By relating the proposed approach to the probabilistic latent semantic analysis (PLSA) model, we further propose an extension of the latter in which an extra latent variable allows the model to co-cluster documents and terms simultaneously. We show on these four datasets that the proposed extended version of the PLSA model produces statistically significant improvements with respect to two clustering measures over all variants of the original PLSA and MM models.
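The two-stage idea admits a rough sketch: cluster terms by their document profiles, re-express each document as counts over the term clusters, then cluster documents in that denoised space. K-means and a Gaussian mixture below approximate the paper's classification-EM and multinomial mixture purely for brevity; the corpus is a toy.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.mixture import GaussianMixture

docs = ["ball game team", "team win game", "bank money loan", "loan bank credit"]
X = CountVectorizer().fit_transform(docs).toarray()

# stage 1: cluster terms (columns) by their occurrence profile across documents
term_groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X.T)

# stage 2: represent documents as counts over term clusters (denoised space)
Z = np.stack([X[:, term_groups == g].sum(axis=1) for g in range(2)], axis=1)

# stage 3: mixture-model clustering in the reduced representation
doc_clusters = GaussianMixture(n_components=2, random_state=0).fit_predict(Z.astype(float))
print(doc_clusters)
```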

20.
Web Review Mining Based on Improved Feature Extraction and Clustering (total citations: 1; self-citations: 0; citations by others: 1)
[Purpose/Significance] This study addresses the low performance of feature extraction from Chinese online product reviews under information overload, and the problem of selecting initial centres in feature clustering. [Method/Process] A weight-based improved Apriori algorithm is proposed to generate the candidate product feature set, which is then filtered using independent support, a frequent-noun non-feature rule, and a PMI measure computed via a web search engine. Taking HowNet-based semantic similarity and feature-opinion co-occurrence as measures of the association between product features, an improved K-means clustering algorithm is proposed to cluster the product features. [Result/Conclusion] Experimental results show that in the feature extraction stage precision is 69%, recall is 92.64%, and the F-measure reaches 79.07%; in the feature clustering stage, the proposed improved K-means algorithm offers better mining performance than the traditional algorithm.
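The PMI filter can be illustrated with corpus counts standing in for the paper's search-engine hit counts: a candidate term survives only if its pointwise mutual information with the product word is positive. The reviews and candidate list below are invented.

```python
import math

reviews = ["phone screen is bright", "phone battery lasts long",
           "the bright sun today", "battery for the phone camera"]
tokens = [r.split() for r in reviews]

def pmi(a, b):
    """PMI from document-level co-occurrence frequencies."""
    n = len(tokens)
    pa = sum(a in t for t in tokens) / n
    pb = sum(b in t for t in tokens) / n
    pab = sum(a in t and b in t for t in tokens) / n
    return math.log(pab / (pa * pb)) if pab > 0 else float("-inf")

candidates = ["screen", "battery", "sun"]
kept = [c for c in candidates if pmi(c, "phone") > 0]
print(kept)  # "sun" is filtered out as a non-feature
```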
