Similar Documents
20 similar documents found (search time: 937 ms)
1.
Cross-domain sentiment classification, a research hot spot in recent years, aims to learn a reliable classifier from labeled data in a source domain and evaluate it on a target domain. Most approaches rely on domain adaptation, mapping data from different domains into a common feature space. To further improve performance, several methods that mine domain-specific information have been proposed, but most of them exploit only a limited part of that information. In this study, we first develop a method for extracting domain-specific words based on topic information derived from topic models. We then propose a Topic Driven Adaptive Network (TDAN) for cross-domain sentiment classification. The network consists of two transformer-based sub-networks: a semantics attention network and a domain-specific word attention network. The sub-networks take different forms of input, and their outputs are fused into the final feature vector. Experiments validate the effectiveness of TDAN on sentiment classification across domains. Case studies also indicate that topic models can add value to cross-domain sentiment classification by discovering interpretable, low-dimensional subspaces.
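A minimal sketch of one way to mine domain-specific words from topic-model output, in the spirit of the step described above: fit LDA over both domains, measure how strongly each topic is tied to one domain, and score words by their weight in domain-biased topics. The toy corpora and the scoring heuristic are illustrative assumptions, not TDAN's exact rule.

```python
# Sketch: flag words as domain-specific when their probability mass sits in
# topics that fire mostly in one domain. Heuristic and data are assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

source_docs = ["the plot of this book is gripping", "a dull and slow novel"]
target_docs = ["battery life of this phone is great", "the screen cracked fast"]
docs = source_docs + target_docs
domain = np.array([0] * len(source_docs) + [1] * len(target_docs))

vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)
doc_topic = lda.transform(X)                       # (n_docs, n_topics)

# How strongly each topic is tied to one domain (0 = shared, 1 = exclusive).
topic_bias = np.abs(doc_topic[domain == 0].mean(0) - doc_topic[domain == 1].mean(0))

# Word score: p(word | topic), weighted by how domain-biased the topic is.
word_topic = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
specificity = topic_bias @ word_topic              # (n_words,)
vocab = np.array(vec.get_feature_names_out())
print(vocab[np.argsort(specificity)[::-1][:5]])    # top domain-specific words
```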

2.
Aspect-based sentiment analysis computes the sentiment for an aspect in a certain context. One problem in this analysis is that words may carry different sentiments for different aspects; moreover, an aspect's sentiment can be highly influenced by domain-specific knowledge. To tackle these issues, we propose in this paper a hybrid solution for sentence-level aspect-based sentiment analysis using A Lexicalized Domain Ontology and a Regularized Neural Attention model (ALDONAr). A bidirectional context attention mechanism measures the influence of each word in a given sentence on an aspect's sentiment value, the classification module is designed to handle the complex structure of a sentence, and a manually created lexicalized domain ontology is integrated to utilize field-specific knowledge. Compared with the existing ALDONA model, ALDONAr uses BERT word embeddings, regularization, the Adam optimizer, and different model initialization. Moreover, its classification module is enhanced with two 1D CNN layers, providing superior results on standard datasets.
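A simplified sketch of a classification head with two 1D CNN layers over BERT-style token embeddings, echoing the enhancement the abstract names. Layer sizes, pooling, and the final classifier are assumptions; the real ALDONAr module integrates ontology and attention signals omitted here.

```python
# Sketch of a two-layer 1D CNN head over contextual token embeddings.
import torch
import torch.nn as nn

class Cnn1dHead(nn.Module):
    def __init__(self, emb_dim=768, hidden=128, n_classes=3):
        super().__init__()
        self.conv1 = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, token_embs):                 # (batch, seq_len, emb_dim)
        x = token_embs.transpose(1, 2)             # Conv1d expects channels first
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        x = x.max(dim=2).values                    # max-pool over the sequence
        return self.fc(x)                          # sentiment logits

logits = Cnn1dHead()(torch.randn(2, 20, 768))      # e.g. two 20-token sentences
print(logits.shape)                                # torch.Size([2, 3])
```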

3.
庞庆华, 董显蔚, 周斌, 付眸 《情报科学》(Information Science) 2022, 40(5): 111-117
【Purpose/Significance】Negative online reviews have become important decision-making information for merchants, valuable for understanding customer satisfaction and improving product and service quality. 【Method/Process】Combining sentiment analysis with keyword extraction, this paper proposes a method for extracting keywords from negative online reviews based on BiGRU-CNN and TextRank: the review texts are first cleaned, a BiGRU-CNN sentiment classification model then determines each review's sentiment orientation, and finally TextRank extracts keywords from the reviews classified as negative. The method is empirically evaluated on more than 60,000 consumer online reviews covering ten product and service categories. 【Result/Conclusion】Experiments show that the method accurately identifies the sentiment orientation of negative online reviews, reaching an F1 score of 92.41%, and that the extracted negative-review keywords help merchants improve product quality and service. 【Innovation/Limitation】A sentiment classification model combining bidirectional GRU and CNN is proposed, on top of which TextRank extracts keywords from the detected negative reviews, further improving the accuracy of sentiment analysis of online reviews.
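A minimal TextRank-style sketch of the final step: rank words in the negative reviews by PageRank over a word co-occurrence graph. The whitespace tokenization, window size of 3, and toy reviews are assumptions; the paper applies this after the BiGRU-CNN classifier has filtered out the negative reviews.

```python
# Sketch: TextRank == PageRank over a word co-occurrence graph.
import networkx as nx

negative_reviews = [
    "delivery was slow and the package arrived damaged",
    "slow response from support and damaged box",
]

g = nx.Graph()
for review in negative_reviews:
    words = review.split()
    for i, w in enumerate(words):                # co-occurrence window of 3
        for u in words[i + 1 : i + 4]:
            if u != w:
                g.add_edge(w, u)

scores = nx.pagerank(g)
top = sorted(scores, key=scores.get, reverse=True)[:5]
print(top)                                       # e.g. ['slow', 'damaged', ...]
```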

4.
With the rapid development of mobile computing and Web technologies, online hate speech has spread increasingly through social network platforms, since it is easy to post any opinion. Previous studies confirm that exposure to online hate speech has serious offline consequences for historically deprived communities, so research on automated hate speech detection has attracted much attention. However, the role of social networks in identifying communities vulnerable to hate is not well investigated. Hate speech can affect all population groups, but some are more vulnerable to its impact than others. For example, for ethnic groups whose languages have few computational resources, it is a challenge even to automatically collect and process online texts, let alone detect hate speech on social media. In this paper, we propose a hate speech detection approach to identify hatred against vulnerable minority groups on social media. Firstly, within the Spark distributed processing framework, posts are automatically collected and pre-processed, and features are extracted using word n-grams and word embedding techniques such as Word2Vec. Secondly, deep learning classification algorithms such as the Gated Recurrent Unit (GRU), a variant of Recurrent Neural Networks (RNNs), are used for hate speech detection. Finally, hate words are clustered with methods such as Word2Vec to predict the potential target ethnic group of the hatred. In our experiments, we use the Amharic language in Ethiopia as an example. Since there was no publicly available dataset of Amharic texts, we crawled Facebook pages to prepare the corpus; and since data annotation can be biased by culture, we recruited annotators from different cultural backgrounds and achieved better inter-annotator agreement. In our experimental results, feature extraction with word embedding techniques such as Word2Vec performs better in both classical and deep learning-based classification algorithms, among which GRU achieves the best result. Our proposed approach successfully identifies the Tigre ethnic group as the community most vulnerable to hatred, compared with the Amhara and Oromo. Identifying vulnerable groups is vital to protecting them: applying an automatic hate speech detection model can remove content that aggravates psychological harm and physical conflict, and can encourage the development of policies, strategies, and tools to empower and protect vulnerable communities.
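A sketch of the feature-extraction plus classification pipeline named above: Word2Vec token vectors fed to a GRU classifier. The corpus, vector sizes, and labels are toy assumptions; the paper runs this at scale on Spark-collected Amharic posts.

```python
# Sketch: Word2Vec features -> GRU classifier for hate speech detection.
import numpy as np
import torch
import torch.nn as nn
from gensim.models import Word2Vec

posts = [["you", "people", "must", "leave"], ["welcome", "to", "our", "city"]]
labels = torch.tensor([1, 0])                    # 1 = hate, 0 = non-hate (toy)

w2v = Word2Vec(posts, vector_size=32, min_count=1, seed=0)
X = torch.tensor(np.stack([[w2v.wv[w] for w in p] for p in posts]))

gru = nn.GRU(input_size=32, hidden_size=16, batch_first=True)
clf = nn.Linear(16, 2)
_, h = gru(X)                                    # h: (1, batch, hidden)
logits = clf(h.squeeze(0))
loss = nn.CrossEntropyLoss()(logits, labels)     # train by minimizing this
print(logits.shape, float(loss))
```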

5.
6.
With the emergence and development of deep generative models such as variational auto-encoders (VAEs), research on topic modeling has successfully extended to a new area: neural topic modeling, which aims to learn disentangled topics that explain the data better. However, the original VAE framework has been shown to be limited in disentanglement performance, and it brings these inherent defects to the neural topic model (NTM). In this paper, we put forward that the optimization objectives of contrastive learning are consistent with two important goals of well-disentangled topic learning, alignment and uniformity, and also with two key evaluation measures for topic models, topic coherence and topic diversity. We thus reach the important conclusion that the alignment and uniformity of disentangled topic learning can be quantified with topic coherence and topic diversity. Accordingly, we propose the Contrastive Disentangled Neural Topic Model (CNTM). By representing both words and topics as low-dimensional vectors in the same embedding space, we apply contrastive learning to neural topic modeling to produce factorized and disentangled topics in an interpretable manner. We compare CNTM with strong baseline models on widely used metrics. Our model achieves the best topic coherence scores under the most general evaluation setting (100% of topics selected), with improvements of 25.0%, 10.9%, 24.6%, and 51.3% over the second-best models' scores on the 20 Newsgroups, Web Snippets, Tag My News, and Reuters datasets, respectively. Our method also obtains the second-best topic diversity scores on 20 Newsgroups and Web Snippets. Our experimental results show that CNTM can effectively leverage the disentanglement ability of contrastive learning to address the inherent defect of neural topic modeling and obtain better topic quality.
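A sketch of the two contrastive objectives the abstract links to topic quality: alignment (positive pairs stay close) and uniformity (embeddings spread out on the hypersphere), in the form popularized by Wang & Isola (2020). The encoders producing the topic vectors and the augmentation are stubbed with random tensors as an assumption.

```python
# Sketch: alignment and uniformity losses on normalized topic vectors.
import torch
import torch.nn.functional as F

def alignment(x, y):        # x, y: matched positive pairs, L2-normalized
    return (x - y).norm(dim=1).pow(2).mean()

def uniformity(x, t=2.0):   # log of mean Gaussian potential between pairs
    return torch.pdist(x).pow(2).mul(-t).exp().mean().log()

z1 = F.normalize(torch.randn(64, 50), dim=1)               # doc topic vectors
z2 = F.normalize(z1 + 0.05 * torch.randn(64, 50), dim=1)   # augmented view
loss = alignment(z1, z2) + uniformity(z1)
print(float(loss))
```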

7.
Traditional topic models are based on the bag-of-words assumption, under which the topic assignment of each word is independent of the others. This assumption ignores the relationships between words, which may hinder the quality of extracted topics. To address this issue, some recent works formulate documents as graphs based on word co-occurrence patterns, assuming that if two words co-occur frequently, they should share a topic. Nevertheless, this introduces noise edges into the model and thus hinders topic quality, since the fact that two words co-occur frequently does not mean they belong to the same topic. In this paper, we use the commonsense relationships between words as a bridge to connect the words in each document. Compared with word co-occurrence, commonsense relationships explicitly imply semantic relevance between words, which can be used to filter out noise edges. We use a relational graph neural network to capture the relation information in the graph, and manifold regularization to constrain the documents' topic distributions. Experimental results on a public dataset show that our method extracts better topics than baseline methods.

8.
“We the Media” networks are real-time and open, and they lack a gatekeeper system. As netizens' comments on emergency events are disseminated, negative public opinion topics and confrontations concerning those events also spread widely on “We the Media” networks. This phenomenon has gradually attracted scholarly attention, and all social circles attach importance to it as well. In existing topic detection studies, a topic is mainly defined as an “event” from the perspective of news-media information flow, but in the “We the Media” era there are often many different views or topics surrounding a specific public opinion event. This paper presents a study on detecting public opinion topics in “We the Media” networks, starting from the characteristics of the elements of public opinion on such networks: these opinions are multidimensional, multilayered, and possess multiple attributes. By categorizing the elements' attributes with reference to social psychology and system science, we build a multidimensional network model oriented toward the topology of public opinion on “We the Media” networks. Based on the real process by which multiple topics concerning the same event are generated and disseminated, we design a topic detection algorithm that works on these multidimensional public opinion networks. As a case study, the “Explosion in Tianjin Port on August 12, 2015” accident was selected for an empirical analysis of the algorithm's effectiveness. The theoretical and empirical findings are summarized along three aspects. 1. The multidimensional network model can effectively characterize the communication of multiple topics on “We the Media” networks, and it provides modeling ideas for this and related studies of “We the Media” public opinion networks. 2. Using the multidimensional topic detection algorithm, 70% of the public opinion topics concerning the case study event were effectively detected, which shows that the algorithm can detect topics from the information flow on “We the Media” networks. 3. By defining psychological scores for single and paired Chinese keywords in public opinion information, the topic detection algorithm can also judge the sentiment tendency of each topic, facilitating a timely understanding of public opinion and revealing negative topics under discussion on “We the Media” networks.

9.
张雷, 谭慧雯, 张璇, 韩龙 《情报科学》(Information Science) 2022, 40(3): 144-151
【Purpose/Significance】Building an LDA model of Weibo user comments on public opinion about university teachers' professional ethics makes it possible to identify the evolution of public opinion more precisely and to analyze the propagation paths of key topics, helping universities and related authorities monitor and guide public opinion more effectively. 【Method/Process】Taking the "academic fraud by a Tianjin University professor" incident as a case, this paper builds an LDA topic model of Weibo user comments, determines the optimal number of topics with the perplexity metric, measures each topic's strength on each date with information entropy, visualizes topic evolution through keyword co-occurrence knowledge graphs and word clouds, and finally determines topic propagation paths from inter-topic similarity. 【Result/Conclusion】The LDA model and information entropy can reveal the key topics that user groups attend to, precisely identify the evolution of public opinion, and identify optimal topic propagation paths for guiding opinion, enabling prediction and better control of opinion outbreaks. 【Innovation/Limitation】The paper innovatively builds an LDA topic model for public opinion on academic ethics in universities, effectively determining Weibo user topics, identifying evolution characteristics, and analyzing inter-topic propagation paths in a generalizable way; extending the analysis to other teacher-ethics events and combining it with sentiment analysis of online opinion is left for future work.
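A sketch of the model-selection step described above: fit LDA for several topic counts and keep the model with the lowest perplexity. The toy corpus stands in for the Weibo comments; gensim reports a per-word log-likelihood bound, from which perplexity is 2 to the power of its negation.

```python
# Sketch: choose the LDA topic count by minimizing corpus perplexity.
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

texts = [["professor", "misconduct", "paper"], ["university", "response", "paper"],
         ["students", "petition", "misconduct"], ["media", "report", "university"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

best = min(
    (LdaModel(corpus, num_topics=k, id2word=dictionary, random_state=0)
     for k in (2, 3, 4)),
    key=lambda m: np.exp2(-m.log_perplexity(corpus)),  # perplexity = 2**(-bound)
)
print(best.num_topics, best.show_topics(num_words=3))
```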

10.
Transfer learning utilizes labeled data available from a related (source) domain to achieve effective knowledge transfer to the target domain. However, most state-of-the-art cross-domain classification methods treat documents as plain text and ignore the hyperlink (or citation) relationships among them. In this paper, we propose a novel cross-domain document classification approach called the Link-Bridged Topic model (LBT), which consists of two key steps. Firstly, LBT uses an auxiliary link network to discover direct or indirect co-citation relationships among documents by embedding this background knowledge into a graph kernel; the mined co-citation relationships are leveraged to bridge the gap between domains. Secondly, LBT combines content information and link structure in a unified latent topic model, based on the assumption that the documents of the source and target domains share some common topics in terms of both content and link structure. By mapping the data of both domains into latent topic spaces, LBT encodes knowledge about domain commonality and difference as shared topics with associated differential probabilities. The learned latent topics must be consistent with the source and target data, as well as with content and link statistics; the shared topics then act as the bridge that facilitates knowledge transfer from the source to the target domain. Experiments on different types of datasets show that our algorithm significantly improves the generalization performance of cross-domain document classification.

11.
余本功, 王胡燕 《情报科学》(Information Science) 2021, 39(7): 99-107
【Purpose/Significance】Effectively classifying the massive text data generated on the Internet improves text-processing efficiency and supports decision-making for enterprise users. 【Method/Process】To address the problems that traditional word-level feature embeddings cannot capture polysemy and that features are sparse and difficult to extract, this paper proposes a multi-channel hierarchical feature text classification model based on sentence features (SFM-DCNN). First, the model uses BERT sentence-vector modeling to upgrade feature embedding from the traditional word level to the sentence level, effectively capturing semantic features such as polysemy, word position, and inter-word relations. Second, a multi-channel deep convolutional model extracts hidden features from the sentence features at multiple levels, yielding features closer to the original semantics. 【Result/Conclusion】Validation on three different datasets against related classification methods shows that SFM-DCNN achieves higher classification accuracy than the other models, indicating that the model offers useful lessons. 【Innovation/Limitation】To tackle polysemy and feature sparsity in text classification, the model innovatively uses BERT to extract global semantic information, combined with multi-channel deep convolution to capture local hierarchical features; owing to time and hardware constraints, the model was not further pre-trained and the experimental datasets are not sufficiently large.
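A sketch of a multi-channel CNN over sentence embeddings, in the spirit of SFM-DCNN: each "channel" is a convolution branch with a different kernel size whose pooled outputs are concatenated. Obtaining the sentence vectors from BERT is stubbed with random tensors, and all sizes are assumptions.

```python
# Sketch: multi-channel (multi-kernel) CNN over BERT sentence vectors.
import torch
import torch.nn as nn

class MultiChannelCnn(nn.Module):
    def __init__(self, emb_dim=768, n_filters=64, sizes=(2, 3, 4), n_classes=4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k) for k in sizes)
        self.fc = nn.Linear(n_filters * len(sizes), n_classes)

    def forward(self, sent_embs):                    # (batch, n_sents, emb_dim)
        x = sent_embs.transpose(1, 2)
        pooled = [torch.relu(b(x)).max(dim=2).values for b in self.branches]
        return self.fc(torch.cat(pooled, dim=1))     # class logits

docs = torch.randn(8, 10, 768)    # 8 documents, 10 BERT sentence vectors each
print(MultiChannelCnn()(docs).shape)                 # torch.Size([8, 4])
```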

12.
In this era, the proliferating role of social media in our lives has popularized the posting of short texts. Short texts contain limited context and have unique characteristics that make them difficult to handle. Every day, billions of short texts are produced in the form of tags, keywords, tweets, phone messages, messenger conversations, social network posts, and so on, and their analysis is imperative in text mining and content analysis. Extracting precise topics from large-scale short-text collections is a critical and challenging task: conventional approaches fail to capture word co-occurrence patterns in topics because of the sparsity of short texts, such as text on the web, social media posts on Twitter, and news headlines. In this paper, the sparsity problem is ameliorated by a novel fuzzy topic modeling (FTM) approach for short text, developed from a fuzzy perspective. Local and global term frequencies are computed with a bag-of-words (BOW) model; to remove the negative impact of high dimensionality on global term weighting, principal component analysis is adopted; thereafter, the fuzzy c-means algorithm retrieves the semantically relevant topics from the documents. Experiments are conducted on three real-world short-text datasets: the snippets dataset is small, whereas the other two, Twitter and questions, are larger. Experimental results show that the proposed approach discovers topics more precisely and performs better than state-of-the-art baseline topic models such as GLTM, CSTM, LTM, LDA, Mix-gram, BTM, SATM, and DREx+LDA. The performance of FTM is also demonstrated in classification, clustering, topic coherence, and execution time. FTM classification accuracy is 0.95, 0.94, 0.91, 0.89, and 0.87 on the snippets dataset with 50, 75, 100, 125, and 200 topics, and 0.73, 0.74, 0.70, 0.68, and 0.78 on the questions dataset with the same topic counts; both are higher than the state-of-the-art baseline topic models.
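A sketch of the FTM pipeline stages named above: bag-of-words counts, PCA to tame dimensionality, then fuzzy c-means so each short text receives a soft membership over topics. The snippet corpus and all sizes are toy assumptions, and this omits the paper's local/global term-weighting details.

```python
# Sketch: BOW -> PCA -> fuzzy c-means for soft topic membership of snippets.
import skfuzzy as fuzz
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer

snippets = ["cheap flights to rome", "rome hotel deals", "python list sort",
            "sort a dict in python", "best pizza in rome", "python for loops"]
X = CountVectorizer().fit_transform(snippets).toarray().astype(float)
X = PCA(n_components=3).fit_transform(X)

# skfuzzy expects (features, samples); m > 1 controls membership fuzziness.
cntr, u, *_ = fuzz.cmeans(X.T, c=2, m=2.0, error=1e-5, maxiter=200, seed=0)
print(u.argmax(axis=0))   # hard topic per snippet, e.g. [0 0 1 1 0 1]
```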

13.
周国韬, 龚栩, 邓胜利 《情报科学》(Information Science) 2022, 40(4): 118-126
【Purpose/Significance】This study aims to reveal the distribution of users' health-and-wellness information needs on social Q&A platforms and to explore the motivations behind those needs and their evolution. 【Method/Process】Taking 130,000 wellness Q&A records from the social Q&A platform Zhihu as the research object, the paper extracts need topics with an LDA model and, on a discrete time series combined with Maslow's hierarchy of needs, analyzes the evolution of topic attention and attention hotspots. 【Result/Conclusion】Users' wellness information needs cover 20 topics; compared with traditional health information needs focused on disease, wellness information needs are more diverse in content and higher in the hierarchy of needs. In the evolution of attention, safety needs and esteem needs became hotspots, and the COVID-19 pandemic intensified users' attention to wellness information. Regarding the internal relations among topics, users' attention to esteem-need topics shifted to safety-need topics in a "commodified" form. 【Innovation/Limitation】This is the first study to focus on wellness health information needs, mining fine-grained trends in users' needs through topic and evolution analysis. However, the data come from a single platform; future research could analyze users' wellness information needs across multiple platforms and deepen the analysis of motivations.
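A sketch of the attention-evolution step: given per-item topic assignments and posting dates, compute each topic's monthly share of attention, which can then be tracked over time. The tiny DataFrame is a toy assumption standing in for the 130,000 Zhihu records.

```python
# Sketch: monthly attention share per topic from dated topic assignments.
import pandas as pd

df = pd.DataFrame({
    "month": ["2020-01", "2020-01", "2020-02", "2020-02", "2020-02"],
    "topic": ["safety", "esteem", "safety", "safety", "esteem"],
})
share = (df.groupby(["month", "topic"]).size()
           .groupby(level=0).transform(lambda s: s / s.sum())
           .unstack(fill_value=0))
print(share)       # rows: months; columns: each topic's attention share
```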

14.
【Purpose/Significance】Extracting effective keywords from massive buffet-restaurant user reviews to build topics and topic terms helps merchants understand user word-of-mouth and improve management in the catering industry. 【Method/Process】The method fuses three keyword extraction approaches (TF-IDF, TextRank, and LMKE) to obtain the optimal keywords, then performs semantic clustering, topic identification, topic-term mining, and topic-weight calculation on the extracted keywords, and finally validates its effectiveness on a dataset collected from Meituan. 【Result/Conclusion】Experiments show that fusing the three keyword extraction methods outperforms any single algorithm. The topics obtained after clustering the review texts are taste, dishes, environment, service, and price, with importance ranked as taste 36.2%, service 22.9%, price 15.1%, environment 13.6%, and dishes 12.2%. The results confirm that the method can effectively identify and construct topics and topic terms and compute the content users care most about under each topic, providing a theoretical and technical basis for topic and topic-term mining and its applications in the catering industry. 【Innovation/Limitation】A semi-supervised semantic-clustering method for topic identification, topic-term construction, and topic-weight evaluation is proposed; the limitation is that the experiments only cover buffet reviews from the Wuhan area on Meituan, so the applicability of the constructed topics is limited.
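A sketch of fusing keyword rankings, here by reciprocal-rank fusion of a TF-IDF ranking and a TextRank ranking. The fusion rule is an illustrative assumption rather than the paper's exact scheme, and the LMKE ranking is omitted since that extractor is not reproduced here.

```python
# Sketch: reciprocal-rank fusion (RRF) of keyword rankings.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, word in enumerate(ranking, start=1):
            scores[word] = scores.get(word, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

tfidf_rank = ["taste", "price", "service", "queue"]
textrank_rank = ["service", "taste", "ambience", "price"]
print(rrf([tfidf_rank, textrank_rank])[:3])  # e.g. ['taste', 'service', 'price']
```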

15.
Sentiment analysis concerns the study of opinions expressed in text. Given the huge number of reviews, sentiment analysis plays a basic role in extracting significant information and the overall sentiment orientation of reviews. In this paper, we present a deep learning-based method, called RNSA, to classify the opinions users express in reviews. To the best of our knowledge, no deep learning-based method with a unified feature set representing word embeddings, sentiment knowledge, sentiment shifter rules, and statistical and linguistic knowledge had been thoroughly studied for sentiment analysis. RNSA employs a Recurrent Neural Network (RNN) composed of Long Short-Term Memory (LSTM) units to take advantage of sequential processing and overcome several flaws of traditional methods, in which word order and context information are lost. Furthermore, it uses sentiment knowledge, sentiment shifter rules, and multiple strategies to overcome the following drawbacks: words with similar semantic context but opposite sentiment polarity; contextual polarity; sentence types; the word-coverage limit of an individual lexicon; and word sense variations. To verify the effectiveness of our work, we conduct sentence-level sentiment classification on large-scale review datasets and obtain encouraging results. Experimental results show that (1) feature vectors built from (a) statistical, linguistic, and sentiment knowledge, (b) sentiment shifter rules, and (c) word embeddings can improve the classification accuracy of sentence-level sentiment analysis; (2) a method that learns from this unified feature set obtains significantly better performance than one that learns from a feature subset; and (3) our neural model yields superior performance improvements in comparison with other well-known approaches in the literature.
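A sketch of the unified-feature idea: each token vector concatenates a word embedding with a lexicon polarity score and a sentiment-shifter flag before entering the LSTM. The feature values and sizes are toy assumptions; the paper's full feature set also includes statistical and linguistic knowledge not shown here.

```python
# Sketch: concatenated token features -> LSTM -> sentence sentiment logits.
import torch
import torch.nn as nn

batch, seq_len, emb = 4, 12, 50
word_emb = torch.randn(batch, seq_len, emb)                # word embeddings
lex_polarity = torch.randn(batch, seq_len, 1)              # lexicon score (toy)
shifter_flag = torch.randint(0, 2, (batch, seq_len, 1)).float()  # negation etc.

features = torch.cat([word_emb, lex_polarity, shifter_flag], dim=2)
lstm = nn.LSTM(input_size=emb + 2, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)
_, (h, _) = lstm(features)
print(head(h.squeeze(0)).shape)                            # torch.Size([4, 2])
```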

16.
Aspect-based sentiment analysis technologies can be very practical for securities trading, commodity sales, movie rating websites, and so on. Most recent studies adopt recurrent neural networks or attention-based neural networks to infer aspect sentiment from opinion context terms and sentence dependency trees. However, because a sentence often expresses sentiment toward multiple aspects, these models struggle to achieve satisfactory classification results. In this paper, we address these problems by encoding the sentence syntax tree, word relations, and opinion dictionary information in a unified framework, which we call heterogeneous graph neural networks (Hete_GNNs). Firstly, we use the interaction of aspect words and contexts to encode the sentence sequence information with parameter sharing. Then, we use a novel heterogeneous graph neural network to encode each sentence's syntax dependency tree, a prior sentiment dictionary, and part-of-speech tagging information for sentiment prediction. We report experiments with Hete_GNNs on five domain datasets; the results confirm that heterogeneous context information is better captured with heterogeneous graph neural networks, and comparisons on the aspect sentiment classification task demonstrate the improvement of the proposed method.

17.
【Purpose/Significance】Text sentiment classification has been a research hotspot in information science in recent years. Most existing studies focus on single-facet sentiment classification of the target text. This paper explores a deep-learning method for multi-facet sentiment classification of e-commerce review information. 【Method/Process】An Attention-BiGRU-CNN multi-facet sentiment classification model is proposed: BiGRU and CNN capture contextual information and local features, and an attention mechanism optimizes the hidden-layer weights to deeply mine the implicit semantics of the text and effectively characterize multi-facet sentiment. 【Result/Conclusion】Experiments on a Chinese e-commerce review corpus show that, compared with other neural network models, the method effectively improves the accuracy of multi-facet sentiment classification. 【Innovation/Limitation】The work enriches the methods for multi-facet sentiment classification and offers a reference for deeply mining e-commerce reviews and optimizing products and marketing strategies. The corpus is mainly single-category e-commerce reviews focused on facets that can be enumerated; further research could target larger corpora with more diverse categories whose facet information must be extracted by deep learning.
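A sketch of the Attention-BiGRU step: a BiGRU encodes the review and a learned attention layer re-weights its hidden states into a single vector that a CNN or classifier stage can consume. All sizes are toy assumptions.

```python
# Sketch: attention-weighted pooling over BiGRU hidden states.
import torch
import torch.nn as nn

class AttnBiGru(nn.Module):
    def __init__(self, emb_dim=100, hidden=64):
        super().__init__()
        self.bigru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)

    def forward(self, x):                          # (batch, seq_len, emb_dim)
        h, _ = self.bigru(x)                       # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)     # attention over positions
        return (w * h).sum(dim=1)                  # attended review vector

print(AttnBiGru()(torch.randn(2, 15, 100)).shape)  # torch.Size([2, 128])
```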

18.
Aspect-based sentiment analysis aims to predict the sentiment polarities of specific targets in a given text. Recent research shows great interest in modeling the target and context with attention networks to obtain more effective feature representations for the sentiment classification task. However, using an averaged target vector to compute attention scores over the context is problematic, and the typical interaction mechanism is simple and needs further improvement. To solve these problems, this paper first proposes a coattention mechanism that models target-level and context-level attention alternately, focusing on the key words of targets to learn more effective context representations. On this basis, we implement a Coattention-LSTM network that learns nonlinear representations of context and target simultaneously and extracts more effective sentiment features from the coattention mechanism. Further, a Coattention-MemNet network adopting a multi-hop coattention mechanism is proposed to improve the sentiment classification results. Finally, we propose a new location-weighted function that uses location information to enhance the performance of the coattention mechanism. Extensive experiments on two public datasets demonstrate the effectiveness of all proposed methods, and our findings provide new insight for future use of attention mechanisms and deep neural networks in aspect-based sentiment analysis.
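A sketch of one coattention round: score every (context word, target word) pair, then pool over targets to weight the context words and pool over context to weight the target words. The dot-product scoring and mean-pooling choices are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: one round of context/target coattention with dot-product scores.
import torch

ctx = torch.randn(1, 8, 64)        # context word states (batch, n_ctx, d)
tgt = torch.randn(1, 3, 64)        # target word states (batch, n_tgt, d)

scores = ctx @ tgt.transpose(1, 2)                  # (1, n_ctx, n_tgt)
ctx_attn = torch.softmax(scores.mean(dim=2), dim=1) # context weights via targets
tgt_attn = torch.softmax(scores.mean(dim=1), dim=1) # target weights via context
ctx_vec = (ctx_attn.unsqueeze(2) * ctx).sum(dim=1)  # (1, 64)
tgt_vec = (tgt_attn.unsqueeze(2) * tgt).sum(dim=1)  # (1, 64)
print(ctx_vec.shape, tgt_vec.shape)
```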

19.
Nowadays, online word-of-mouth has an increasing impact on people's views and decisions, and the classification and sentiment analysis of online consumer reviews have attracted significant research attention. In this paper, we propose and implement a new method to study the extraction and classification of comments about online dating services (ODS). Unlike traditional sentiment analysis, which mainly focuses on product attributes, we infer and extract the emotion concept of each emotional review by introducing social cognitive theory. We selected 4,300 comments with extremely negative or positive emotions published on dating websites as a sample and used three machine learning algorithms to analyze emotions. To test and compare their efficiency for user-behavior research, we applied various sentiment analysis and machine learning techniques alongside dictionary-based sentiment analysis. We found that the combination of machine learning and the lexicon-based method achieves higher accuracy than either type of sentiment analysis alone. This research provides a new perspective on the study of user behavior.

20.
Social media data have recently attracted considerable attention as an emerging voice of the customer, having rapidly become a channel for exchanging and storing customer-generated, large-scale, and unregulated opinions about products. Although previous product planning studies have applied systematic methods to social media data, those methods have limitations, such as difficulty identifying latent product features because of purely term-level analysis, and insufficient consideration of the opportunity potential of the identified features. Therefore, this study proposes an opportunity mining approach that identifies product opportunities based on topic modeling and sentiment analysis of social media data. For a multifunctional product, the approach identifies the latent product topics customers discuss in social media using topic modeling, thereby quantifying the importance of each product topic. Next, the satisfaction level of each product topic is evaluated using sentiment analysis. Finally, the opportunity value and improvement direction of each product topic are identified from a customer-centered view by an opportunity algorithm based on the topics' importance and satisfaction. We expect our approach to contribute to the systematic identification of product opportunities from large-scale customer-generated social media data and to serve as a real-time monitoring tool for analyzing changing customer needs in rapidly evolving product environments.
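A sketch of scoring opportunity per product topic from the two quantities the approach estimates: importance (e.g., topic share) and satisfaction (e.g., mean sentiment). The scoring rule follows the classic opportunity algorithm, importance + max(importance - satisfaction, 0); the paper's exact weighting may differ, and the numbers are toy assumptions.

```python
# Sketch: opportunity value per topic from importance and satisfaction.
topics = {          # topic: (importance, satisfaction), both scaled to 0-10
    "battery": (8.2, 3.1),
    "camera": (6.5, 7.8),
    "price": (7.0, 5.0),
}
for name, (imp, sat) in topics.items():
    opportunity = imp + max(imp - sat, 0.0)
    print(f"{name}: opportunity = {opportunity:.1f}")
# battery scores highest: important but unsatisfying -> improvement target
```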

