Related articles
20 related articles found.
1.
[Purpose/Significance] This study reviews domestic and international research on government social media, analyzing the characteristics and limitations of existing work to provide a reference for future studies in this area. [Method/Process] Through a literature survey, government social media research published between 2014 and 2019 is systematically summarized. [Result/Conclusion] The analysis shows that recent studies are predominantly quantitative, with analysis methods including content analysis, statistical analysis, social network analysis, and machine learning; research topics concentrate on the operation and management of government social media, content mining, emergency management, and its functions and roles. Existing research suffers from relatively homogeneous data sources, data analysis methods in need of refinement, and the lack of a systematic and standardized research paradigm. Finally, directions for future research are suggested from the perspectives of both government and the public.

2.
This article describes in-depth research on machine learning methods for sentiment analysis of Czech social media. Whereas in English, Chinese, or Spanish this field has a long history and evaluation datasets for various domains are widely available, in the case of the Czech language no systematic research has yet been conducted. We tackle this issue and establish a common ground for further research by providing a large human-annotated Czech social media corpus. Furthermore, we evaluate state-of-the-art supervised machine learning methods for sentiment analysis. We explore different pre-processing techniques and employ various features and classifiers. We also experiment with five different feature selection algorithms and investigate the influence of named entity recognition and pre-processing on sentiment classification performance. Moreover, in addition to our newly created social media dataset, we also report results for other popular domains, such as movie and product reviews. We believe that this article will not only extend the current sentiment analysis research to another family of languages, but will also encourage competition, potentially leading to the production of high-end commercial solutions.
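To make the supervised setup concrete, the following is a minimal scikit-learn sketch of the kind of pipeline evaluated in this line of work (TF-IDF n-gram features, a feature-selection step, and a linear classifier). The toy Czech posts, the chi-squared selector, and the LinearSVC classifier are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal supervised sentiment pipeline: TF-IDF features + feature selection +
# a linear classifier. The two toy posts stand in for an annotated corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = ["skvělý film, doporučuji", "hrozná služba, nikdy více"]  # toy Czech posts
labels = [1, 0]                                                   # 1 = positive, 0 = negative

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),  # word uni-/bigrams
    ("select", SelectKBest(chi2, k="all")),                    # feature-selection step
    ("clf", LinearSVC()),                                      # linear classifier
])
pipeline.fit(texts, labels)
print(pipeline.predict(["docela dobrý film"]))
```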

3.
Semi-supervised document retrieval (cited 2 times: 0 self-citations, 2 by others)
This paper proposes a new machine learning method for constructing ranking models in document retrieval. The method, referred to as SSRank, aims to combine the advantages of traditional Information Retrieval (IR) methods and the recently proposed supervised learning methods for IR: the use of a limited amount of labeled data and rich model representation. To do so, the method adopts a semi-supervised learning framework for ranking model construction. Specifically, given a small number of labeled documents with respect to some queries, the method effectively labels the unlabeled documents for the queries. It then uses all the labeled data to train a machine learning model (in our case, a neural network). In the data labeling, the method also makes use of a traditional IR model (in our case, BM25). A stopping criterion based on machine learning theory is given for the data labeling process. Experimental results on three benchmark datasets and one web search dataset indicate that SSRank consistently and almost always significantly outperforms the baseline methods (unsupervised and supervised learning methods), given the same amount of labeled data. This is because SSRank can effectively leverage the use of unlabeled data in learning.
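A minimal sketch of the self-labeling idea behind SSRank: score unlabeled documents with a traditional IR model (BM25, here via the rank_bm25 package), turn confident scores into pseudo relevance labels, and train a learned ranker on the enlarged data. The median threshold, the single BM25 feature, and the MLP ranker are illustrative assumptions; the paper uses a neural network ranker and a theoretically grounded stopping criterion.

```python
# Semi-supervised ranking sketch: BM25 pseudo-labels feed a simple learned ranker.
import numpy as np
from rank_bm25 import BM25Okapi
from sklearn.neural_network import MLPRegressor

corpus = [doc.split() for doc in [
    "machine learning for ranking", "cooking pasta at home", "learning to rank documents"]]
query = "learning to rank".split()

bm25 = BM25Okapi(corpus)
scores = bm25.get_scores(query)                     # traditional IR scores

# Treat high/low BM25 scores as pseudo relevance labels for unlabeled documents.
pseudo_labels = (scores > np.median(scores)).astype(float)

X = scores.reshape(-1, 1)                           # toy feature: the BM25 score itself
ranker = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ranker.fit(X, pseudo_labels)
print(ranker.predict(X))                            # learned relevance estimates
```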

4.
Big data generated by social media is a valuable source of information and offers an excellent opportunity to mine useful insights. In particular, user-generated content such as reviews, recommendations, and user behavior data is useful for supporting several marketing activities of many companies. Knowing what users are saying about the products they bought or the services they used, through reviews in social media, is a key factor for making decisions. Sentiment analysis is one of the fundamental tasks in Natural Language Processing. Although deep learning for sentiment analysis has achieved great success and has allowed several firms to analyze and extract relevant information from their textual data, as the volume of data grows a model that runs in a traditional environment cannot remain effective, which underlines the importance of efficient distributed deep learning models for social big data analytics. Moreover, social media analysis is a complex process that involves a set of complex tasks. Therefore, it is important to address the challenges and issues of social big data analytics and to enhance the performance of deep learning techniques in terms of classification accuracy so as to support better decisions. In this paper, we propose an approach for sentiment analysis that adopts fastText with recurrent neural network variants to represent textual data efficiently and then uses the new representations to perform the classification task. Its main objective is to enhance the performance of well-known Recurrent Neural Network (RNN) variants in terms of classification accuracy and to handle large-scale data. In addition, we propose a distributed intelligent system for real-time social big data analytics. It is designed to ingest, store, process, index, and visualize huge amounts of information in real time. The proposed system adopts distributed machine learning with our proposed method to enhance decision-making processes. Extensive experiments conducted on two benchmark datasets demonstrate that our proposal for sentiment analysis outperforms well-known distributed recurrent neural network variants (i.e., Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), and Gated Recurrent Unit (GRU)). Specifically, we tested the efficiency of our approach using these three deep learning models, and the results show that our proposed approach is able to enhance the performance of all three. The current work can benefit researchers and practitioners who want to collect, handle, analyze, and visualize several sources of information in real time. It can also contribute to a better understanding of public opinion and user behavior using the proposed system with improved variants of powerful distributed deep learning and machine learning algorithms. Furthermore, it is able to increase the classification accuracy of several existing works based on RNN models for sentiment analysis.
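A compact tf.keras sketch of the modeling idea: embed the tokens (the paper uses fastText vectors; a randomly initialized embedding layer stands in here) and feed the sequence to an RNN variant such as a BiLSTM for binary sentiment classification. Vocabulary size, sequence length, and all hyperparameters are placeholder assumptions.

```python
# BiLSTM sentiment classifier sketch; fastText vectors would replace the
# randomly initialized Embedding weights in the real pipeline.
import numpy as np
import tensorflow as tf

vocab_size, max_len, embed_dim = 20000, 100, 300      # placeholder hyperparameters

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.randint(0, vocab_size, size=(8, max_len))  # toy tokenized, padded batch
y = np.random.randint(0, 2, size=(8,))
model.fit(X, y, epochs=1, verbose=0)
```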

5.
Interest in real-time syndromic surveillance based on social media data has greatly increased in recent years. The ability to detect disease outbreaks earlier than traditional methods would be highly useful for public health officials. This paper describes a software system built upon recent developments in machine learning and data processing to achieve this goal. The system is built from reusable modules integrated into data processing pipelines that are easily deployable and configurable. It applies deep learning to the problem of classifying health-related tweets and is able to do so with high accuracy. It can detect illness outbreaks from Twitter data and then build up and display information about these outbreaks, including relevant news articles, to provide situational awareness. It also provides nowcasting of current disease levels from previous clinical data combined with Twitter data. The preliminary results are promising: the system is able to detect outbreaks of influenza-like illness symptoms that could then be confirmed by existing official sources. The Nowcasting module shows that using social media data can improve prediction for multiple diseases over simply using traditional data sources.
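A toy sketch of the nowcasting idea: estimate the current clinical count from the previous week's count plus the current volume of symptom-related tweets with a simple regression. The features, lag, and numbers are invented placeholders; the abstract does not specify the module's actual model.

```python
# Nowcasting sketch: predict this week's clinical count from last week's count
# plus this week's volume of illness-related tweets.
import numpy as np
from sklearn.linear_model import LinearRegression

clinical = np.array([120, 135, 150, 180, 210, 260, 310])   # weekly case counts (toy)
tweets   = np.array([400, 430, 520, 640, 780, 950, 1200])  # weekly symptom tweets (toy)

X = np.column_stack([clinical[:-1], tweets[1:]])  # lagged clinical + current tweets
y = clinical[1:]                                   # target: current clinical count

model = LinearRegression().fit(X, y)
print("nowcast for next week:", model.predict([[clinical[-1], 1300]])[0])
```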

6.
谢海涛  肖倩 《现代情报》2019,39(9):28-40
[Purpose/Significance] Timely identification of trending news on social media helps accelerate the delivery of positive information or curb the diffusion of negative information. Traditional identification methods based on natural language processing are challenged by the new ecology of social media: much news content exists as images, audio, and video, leaving no text for semantic and sentiment analysis. [Method/Process] To address this, the social network is first partitioned into many communities, which are organized into a Bayesian network according to their hierarchical structure. Next, a convolutional neural network model for trending news identification is built for each community; the model jointly considers the macro-level statistical regularities and the micro-level propagation process of news diffusion to extract the features of trending news within a community. Finally, Bayesian inference combines the local identification results to predict popularity globally. [Result/Conclusion] Experiments show that the method can effectively identify trending news in the absence of semantic information, with higher accuracy than machine learning methods based on semantic features, and that the model offers good timeliness, scalability, and applicability. The study can help social media regulators promptly identify rapidly spreading trending content that carries no semantic information.
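A minimal tf.keras sketch of the non-textual identification idea: a 1-D CNN classifies a news item as trending or not from its early propagation curve within one community (e.g., reposts per time slot) rather than from text. The input representation and hyperparameters are illustrative assumptions.

```python
# 1-D CNN over an item's early propagation curve (reposts per time slot).
import numpy as np
import tensorflow as tf

time_slots, channels = 48, 1                   # e.g., 48 slots, 1 signal (repost count)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu",
                           input_shape=(time_slots, channels)),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # trending vs. not trending
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(16, time_slots, channels)           # toy propagation curves
y = np.random.randint(0, 2, size=(16,))
model.fit(X, y, epochs=1, verbose=0)
```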

7.
The research field of crisis informatics examines, among other things, the potential and barriers of social media use during disasters and emergencies. Social media allow emergency services to receive valuable information (e.g., eyewitness reports, pictures, or videos). However, the vast amount of data generated during large-scale incidents can lead to information overload. Research indicates that supervised machine learning techniques are suitable for identifying relevant messages and filtering out irrelevant ones, thus mitigating information overload. Still, they require a considerable amount of labeled data, clear criteria for relevance classification, a usable interface to facilitate the labeling process, and a mechanism to rapidly deploy retrained classifiers. To overcome these issues, we present (1) a system for social media monitoring, analysis, and relevance classification, (2) abstract and precise criteria for relevance classification in social media during disasters and emergencies, (3) the evaluation of a well-performing Random Forest algorithm for relevance classification that incorporates metadata from social media into a batch learning approach (e.g., 91.28%/89.19% accuracy, 98.3%/89.6% precision, and 80.4%/87.5% recall with a fast training time using feature subset selection on the European floods/BASF SE incident datasets), as well as (4) an approach and preliminary evaluation for relevance classification including active, incremental, and online learning to reduce the amount of labeled data required and to correct misclassifications of the algorithm by feedback classification. Using the latter approach, we achieved a well-performing classifier on the European floods dataset while requiring only a quarter of the labeled data compared to the traditional batch learning approach. Although the effect was smaller on the BASF SE incident dataset, a substantial improvement could still be observed.
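A small scikit-learn sketch in the spirit of the batch-learning Random Forest described above: TF-IDF text features are combined with simple social-media metadata features for relevance classification. The specific metadata fields and the toy messages are placeholder assumptions.

```python
# Relevance classification sketch: TF-IDF text features + metadata features
# fed to a Random Forest.
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["Flooding on Main Street, road closed", "Enjoying my coffee this morning"]
metadata = [[1, 120, 1], [0, 3, 0]]       # e.g., [has_media, retweet_count, has_location]
labels = [1, 0]                            # 1 = relevant to the incident

text_features = TfidfVectorizer().fit_transform(tweets)
X = hstack([text_features, csr_matrix(metadata)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X))
```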

8.
With the popularity of social platforms such as Sina Weibo and Twitter, a large number of public events spread rapidly across social networks, and huge amounts of textual data are generated along with the discussion of netizens. Social text clustering has become one of the most critical methods to help people find relevant information, and it provides quality data for subsequent, timely public opinion analysis. Most existing neural clustering methods rely on manual labeling of training sets and take a long time to learn. Due to the explosive growth and large scale of social media data, it is a challenge for social text clustering to satisfy users' timeliness demands. This paper proposes a novel unsupervised event-oriented graph clustering framework (EGC), which achieves efficient clustering performance on large-scale datasets with little time overhead and does not require any labeled data. Specifically, EGC first mines the potential relations in social text data and transforms the textual data of social media into an event-oriented graph, taking advantage of the graph structure to represent complex relations. Secondly, EGC uses a keyword-based local importance method to accurately measure the weights of relations in the event-oriented graph. Finally, a bidirectional depth-first clustering algorithm based on these interrelations is proposed to cluster the nodes of the event-oriented graph. By projecting the relations of the graph into a smaller domain, EGC achieves fast convergence. The experimental results show that the clustering performance of EGC on the Weibo dataset reaches 0.926 (NMI), 0.926 (AMI), and 0.866 (ARI), which is 13%–30% higher than other clustering methods. In addition, the average query time on data clustered by EGC is 16.7 ms, which is 90% less than on the original data.
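A toy networkx sketch of the graph-construction idea: posts become nodes, keyword overlap defines weighted relations, weak edges are pruned, and clusters are read off the remaining structure (connected components here, instead of the paper's bidirectional depth-first procedure). The threshold and the overlap weighting are illustrative assumptions.

```python
# Event-oriented graph sketch: posts as nodes, keyword-overlap edges, clusters
# as connected components after pruning weak relations.
import networkx as nx

posts = {
    0: {"flood", "rescue", "river"},
    1: {"flood", "river", "dam"},
    2: {"concert", "tickets"},
    3: {"concert", "tickets", "venue"},
}

G = nx.Graph()
G.add_nodes_from(posts)
for i in posts:
    for j in posts:
        if i < j:
            weight = len(posts[i] & posts[j])   # keyword-overlap weight
            if weight >= 2:                      # prune weak relations
                G.add_edge(i, j, weight=weight)

print(list(nx.connected_components(G)))          # e.g., [{0, 1}, {2, 3}]
```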

9.
Modern companies generate value by digitalizing their services and products. Knowing what customers are saying about the firm through reviews in social media content is a key factor for success in the big data era. However, social media data analysis is a complex discipline due to the subjectivity of text reviews and the additional features present in raw data. Some frameworks proposed in the existing literature involve many steps, which increases their complexity. A two-stage framework is proposed to tackle this problem: the first stage focuses on data preparation and on finding an optimal machine learning model for the data; the second stage relies on established layers of big data architectures and focuses on producing outcomes from the data by making the most of the machine learning model from stage one. Thus, the first stage analyzes big and small datasets in a non-big-data environment, whereas the second stage analyzes big datasets by applying the machine learning model from the first stage. A case study is then presented for the first stage of the framework, analyzing reviews of hotel-related businesses. Several machine learning algorithms were trained for two, three, and five classes, with the best results obtained for binary classification.
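A brief scikit-learn sketch of stage one of such a framework: several candidate classifiers are compared by cross-validation on a (small) review sample, and the best performer is kept for later use at scale. The candidate models and toy reviews are placeholder assumptions.

```python
# Stage-one model selection sketch: pick the best classifier by cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = ["great hotel, friendly staff", "dirty room, terrible service",
           "lovely breakfast", "never coming back"]
labels = [1, 0, 1, 0]

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": LinearSVC(),
    "nb": MultinomialNB(),
}
for name, clf in candidates.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    score = cross_val_score(pipe, reviews, labels, cv=2).mean()
    print(name, round(score, 3))
```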

10.
Social media data have recently attracted considerable attention as an emerging voice of the customer, since social media have rapidly become a channel for exchanging and storing customer-generated, large-scale, and unregulated opinions about products. Although product planning studies using social media data have applied systematic methods, these methods have limitations, such as the difficulty of identifying latent product features due to the use of term-level analysis only, and insufficient consideration of opportunity potential analysis of the identified features. Therefore, an opportunity mining approach is proposed in this study to identify product opportunities based on topic modeling and sentiment analysis of social media data. For a multifunctional product, this approach can identify latent product topics discussed by customers in social media using topic modeling, thereby quantifying the importance of each product topic. Next, the satisfaction level of each product topic is evaluated using sentiment analysis. Finally, the opportunity value and improvement direction of each product topic are identified from a customer-centered view by an opportunity algorithm based on the topics' importance and satisfaction. We expect that our approach will contribute to the systematic identification of product opportunities from large-scale customer-generated social media data and will serve as a real-time monitoring tool for analyzing changing customer needs in rapidly evolving product environments.
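A condensed gensim sketch of the pipeline: LDA topics give each product topic an importance (its share of the discussion), sentiment gives a satisfaction level, and the two are combined into an opportunity score. The placeholder satisfaction values and the classic importance + max(importance − satisfaction, 0) formula are assumptions for illustration; the paper defines its own opportunity algorithm.

```python
# Opportunity-mining sketch: topic importance from LDA + satisfaction from
# sentiment -> an opportunity value per product topic.
from gensim import corpora
from gensim.models import LdaModel

docs = [["battery", "drains", "fast"], ["battery", "life", "short"],
        ["camera", "photos", "amazing"], ["camera", "great", "zoom"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

# Importance: average topic probability over all reviews.
importance = [0.0, 0.0]
for bow in corpus:
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        importance[topic_id] += prob / len(corpus)

satisfaction = [0.2, 0.8]    # placeholder sentiment-derived satisfaction per topic

for t in range(2):
    opportunity = importance[t] + max(importance[t] - satisfaction[t], 0)
    print(f"topic {t}: importance={importance[t]:.2f}, opportunity={opportunity:.2f}")
```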

11.
李静  徐路路 《现代情报》2019,39(4):23-33
[Purpose/Significance] To analyze, at a fine-grained level, the development of hot research topics in a discipline and to accurately predict future trends with machine learning algorithms. [Method/Process] A framework for research-hotspot trend prediction based on machine learning is proposed. Taking genetic engineering as an example, a probabilistic topic model is used to identify hot research topics from the abstracts of papers in the Web of Science core collection and to build topic evolution links; three typical machine learning algorithms (BP neural network, support vector machine, and LSTM) are then applied for prediction; finally, RE and precision metrics are used to evaluate prediction performance, and research trends of genetic engineering in areas such as medicine and health, and agriculture and food, are analyzed. [Result/Conclusion] Experiments show that the LSTM model achieves the highest accuracy in predicting the future trends of hot topics, the support vector machine ranks second, and the BP neural network performs worse and lacks stability. Expert consultation and literature review further indicate that the method can quickly identify research topics and trends in the gene field and can support decision making for assessing disciplinary trends and adjusting research structures in China.
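A minimal tf.keras sketch of the LSTM prediction step: given a topic's yearly intensity series, the next value is forecast from a sliding window of previous values. The window size, architecture, and toy series are illustrative assumptions.

```python
# LSTM sketch: forecast a topic's next-period intensity from the previous 3 values.
import numpy as np
import tensorflow as tf

series = np.array([0.10, 0.12, 0.15, 0.19, 0.24, 0.30, 0.37], dtype="float32")
window = 3
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X.reshape(-1, window, 1)                  # (samples, time steps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)
print(model.predict(series[-window:].reshape(1, window, 1), verbose=0))
```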

12.
Most previous studies on the semantic analysis of social media feeds have not considered the ambiguity associated with slang, abbreviations, and acronyms embedded in social media posts. These noisy terms have implicit meanings and form part of the rich semantic context that must be analysed to gain complete insights from social media feeds. This paper proposes an improved framework for pre-processing social media feeds for better performance. To do this, an integrated knowledge base (ikb), which comprises a local knowledge source (Naijalingo), Urban Dictionary, and internet slang, is combined with an adapted Lesk algorithm to facilitate semantic analysis of social media feeds. Experimental results showed that the proposed approach performed better than existing methods when tested on three machine learning models: support vector machines, multilayer perceptron, and convolutional neural networks. The framework achieved an accuracy of 94.07% on a standardized dataset and 99.78% on a localised dataset when used to extract sentiments from tweets. The improved performance on the localised dataset reveals the advantage of integrating local knowledge sources into the analysis of social media feeds, particularly for interpreting slang, acronyms, and abbreviations that have contextually rooted meanings.
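A toy sketch of the Lesk-style idea: pick the slang sense whose dictionary gloss overlaps most with the words surrounding the term in the post. The miniature slang dictionary and tweet below are invented placeholders, not the integrated knowledge base (ikb) used in the paper.

```python
# Simplified Lesk-style disambiguation of a slang term by gloss overlap.
def lesk_slang(context_words, glosses):
    """Return the sense whose gloss shares the most words with the context."""
    best_sense, best_overlap = None, -1
    for sense, gloss in glosses.items():
        overlap = len(set(context_words) & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

slang_glosses = {        # toy entries standing in for the integrated knowledge base
    "sick_ill": "unwell suffering from an illness or disease",
    "sick_cool": "excellent cool impressive amazing",
}
tweet = "that new song is so sick totally amazing and cool".lower().split()
print(lesk_slang(tweet, slang_glosses))      # -> "sick_cool"
```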

13.
陈震  王静茹 《情报科学》2020,38(4):51-56
[Purpose/Significance] Online public opinion events are closely related to social stability, and quantitative methods play an important role in analyzing them. [Method/Process] This paper proposes a method for analyzing the trend of online public opinion events based on a Bayesian network (BN). The BN topology is first designed from prior knowledge and expert guidance; the conditional probability tables are then estimated with the EM algorithm; finally, the effectiveness of the BN is validated with training and test sets. [Result/Conclusion] An experiment using 100 randomly sampled online public opinion events from 2018 as the data source shows that the designed BN is reliable for predicting the trend of such events, providing a theoretical basis for handling online public opinion events with Bayesian networks.
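A tiny pandas sketch of the parameter-estimation step: a conditional probability table is estimated from observed events by maximum likelihood on complete data; the paper instead uses the EM algorithm because some variables are unobserved. Variable names and counts are invented placeholders.

```python
# Bayesian-network CPT sketch: P(trend | media_coverage) from observed events
# (maximum likelihood on complete data stands in for EM here).
import pandas as pd

events = pd.DataFrame({
    "media_coverage": ["high", "high", "low", "low", "high", "low"],
    "trend":          ["escalate", "escalate", "fade", "fade", "fade", "fade"],
})

cpt = (events.groupby("media_coverage")["trend"]
             .value_counts(normalize=True)
             .unstack(fill_value=0))
print(cpt)
print("P(escalate | high coverage) =", cpt.loc["high", "escalate"])
```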

14.
The increasing acceptance and popularity of social media has made extracting information from the data generated on social media an emerging field of research. An important branch of this field is predicting future events using social media data. This paper focuses on predicting the box-office revenue of a movie by mining people's intention to purchase a movie ticket, termed purchase intention, from trailer reviews. Movie revenue prediction is important because of the financial risk involved in movie production, given its high cost. Previous studies in this domain focus on the use of Twitter data and IMDb reviews to predict revenue for movies that have already been released. In this paper, we build a model for movie revenue prediction prior to a movie's release using YouTube trailer reviews. Our model consists of novel methods for calculating purchase intention, the positive-to-negative sentiment ratio, and the like-to-dislike ratio for movie revenue prediction. Our experimental results demonstrate the superiority of our approach over three baseline approaches, achieving a relative absolute error of 29.65%.
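A small sketch of how such review-derived signals could be combined: a purchase-intention rate, a positive-to-negative sentiment ratio, and a like-to-dislike ratio are computed per trailer and regressed against revenue. The feature values and revenue figures are toy assumptions; the paper defines its own calculation methods for each signal.

```python
# Revenue-prediction sketch from trailer-review signals.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per movie: [purchase-intention rate, pos/neg sentiment ratio, like/dislike ratio]
X = np.array([
    [0.30, 2.5, 12.0],
    [0.10, 0.8,  3.0],
    [0.25, 1.9,  9.0],
    [0.05, 0.6,  2.0],
])
revenue = np.array([250.0, 60.0, 180.0, 35.0])       # box office in millions (toy)

model = LinearRegression().fit(X, revenue)
print("predicted revenue (millions):", model.predict([[0.20, 1.5, 7.0]])[0])
```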

15.
Social networks have grown into a widespread form of communication that allows a large number of users to participate in conversations and consume information at any time. The casual nature of social media allows for nonstandard terminology, some of which may be considered rude and derogatory. As a result, a significant portion of social media users are found to use disrespectful language. This problem may intensify in certain developing countries where young children are granted unsupervised access to social media platforms. Furthermore, the sheer amount of social media data generated daily by millions of users makes it impractical for humans to monitor and regulate inappropriate content. If adolescents are exposed to these harmful language patterns without adequate supervision, they may feel inclined to adopt them. In addition, unrestricted aggression in online forums may result in cyberbullying and other dreadful occurrences. While computational linguistics research has addressed the difficulty of detecting abusive dialogue, questions remain unanswered for low-resource languages with little annotated data, leading the majority of supervised techniques to perform poorly. In addition, social media content is often presented in complex, context-rich formats that encourage creative user involvement. Therefore, we propose to improve the performance of abusive language detection and classification in a low-resource setting by using both the abundant unlabeled data and context features via the co-training protocol, which enables two machine learning models, each learning from an orthogonal set of features, to teach each other, resulting in an overall performance improvement. Empirical results reveal that our proposed framework achieves F1 values of 0.922 and 0.827, surpassing the state-of-the-art baselines by 3.32% and 45.85% for the binary and fine-grained classification tasks, respectively. In addition to proving the efficacy of co-training in a low-resource situation for abusive language detection and classification tasks, the findings shed light on several opportunities to use unlabeled data and contextual characteristics of social networks in a variety of social computing applications.
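A compact sketch of a co-training loop: two classifiers, each trained on a different feature view (e.g., text features vs. context/metadata features), and in each round the most confident predictions on unlabeled data are added to the labeled pool. This simplified variant pools confident pseudo-labels into both views; the views, threshold, and random data are placeholder assumptions.

```python
# Co-training sketch: two feature views teach each other via confident pseudo-labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X1_lab, X2_lab = rng.normal(size=(20, 5)), rng.normal(size=(20, 3))    # two labeled views
y_lab = rng.integers(0, 2, size=20)
X1_unl, X2_unl = rng.normal(size=(200, 5)), rng.normal(size=(200, 3))  # unlabeled pool

clf1, clf2 = LogisticRegression(), LogisticRegression()
for _ in range(3):                                   # a few co-training rounds
    clf1.fit(X1_lab, y_lab)
    clf2.fit(X2_lab, y_lab)
    p1, p2 = clf1.predict_proba(X1_unl), clf2.predict_proba(X2_unl)
    confident = (p1.max(axis=1) > 0.9) | (p2.max(axis=1) > 0.9)
    if not confident.any():
        break
    pseudo = np.where(p1.max(axis=1) >= p2.max(axis=1),
                      p1.argmax(axis=1), p2.argmax(axis=1))
    X1_lab = np.vstack([X1_lab, X1_unl[confident]])
    X2_lab = np.vstack([X2_lab, X2_unl[confident]])
    y_lab = np.concatenate([y_lab, pseudo[confident]])
    X1_unl, X2_unl = X1_unl[~confident], X2_unl[~confident]
print("final labeled-set size:", len(y_lab))
```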

16.
邵雷  石峰 《情报杂志》2022,41(2):65-70,56
[Research purpose] Foreign actors manipulate social media, affecting the online environment and political ecology. Mining and analyzing how foreign actors manipulate social media, and proposing sound countermeasures accordingly, can improve the handling of public opinion events and safeguard national security. [Research method] Visual analysis tools are applied to open-source intelligence for network topology analysis, hierarchy analysis, centrality analysis, cluster analysis, and structural feature analysis, in order to dissect the forms and means by which foreign actors manipulate social media; big data mining techniques are used to identify the behavioral strategies and characteristics of political bots controlling social accounts. [Research conclusion] Foreign actors register social media accounts through shell companies and operate political bots to spread disinformation. Big data modeling should be used to build databases of key information and to monitor and control high-risk accounts, and artificial intelligence techniques should be used to effectively identify and counter political bots.
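A brief networkx sketch of the centrality analysis mentioned above: accounts in a retweet/mention network are ranked by degree and betweenness centrality to surface potential coordination hubs. The toy edges are invented; real analyses would run on large open-source intelligence collections.

```python
# Centrality sketch: rank accounts in a retweet/mention network.
import networkx as nx

edges = [("bot_1", "hub"), ("bot_2", "hub"), ("bot_3", "hub"),
         ("hub", "influencer"), ("user_a", "influencer"), ("user_b", "user_a")]
G = nx.DiGraph(edges)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
for node in sorted(G, key=lambda n: betweenness[n], reverse=True):
    print(f"{node:<11} degree={degree[node]:.2f}  betweenness={betweenness[node]:.2f}")
```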

17.
Social media has become the most popular platform for free speech. This freedom of speech has given the oppressed opportunities to raise their voice against injustices, but on the other hand it has led to a disturbing trend of spreading hateful content of various kinds. Pakistan has been dealing with sectarian and ethnic violence for the last three decades, and now, due to freedom of speech, there is a growing trend of disturbing content about religion, sect, and ethnicity on social media. This necessitates an automated system for the detection of controversial content on social media in Urdu, the national language of Pakistan. The biggest hurdle that has thwarted Urdu language processing is the scarcity of language resources, annotated datasets, and pretrained language models. In this study, we address the problem of detecting interfaith, sectarian, and ethnic hatred on social media in the Urdu language using machine learning and deep learning techniques. In particular, we have: (1) developed and presented guidelines for annotating Urdu text with appropriate labels at two levels of classification, (2) developed a large dataset of 21,759 tweets using the developed guidelines and made it publicly available, and (3) conducted experiments to compare the performance of eight supervised machine learning and deep learning techniques for the automated identification of hateful content. In the first step, experiments are performed for hateful content detection as a binary classification task; in the second step, the classification of interfaith, sectarian, and ethnic hatred is performed as a multiclass classification task. Overall, Bidirectional Encoder Representations from Transformers (BERT) proved to be the most effective technique for hateful content identification in Urdu tweets.
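A minimal Hugging Face transformers/PyTorch sketch of fine-tuning a BERT-style model for binary hateful-content classification (a single training step on toy inputs). The multilingual checkpoint and placeholder texts are assumptions; the paper's exact model and training setup may differ.

```python
# BERT fine-tuning sketch for binary hateful-content classification (one step).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-multilingual-cased"          # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["example tweet one", "example tweet two"]   # toy stand-ins for Urdu tweets
labels = torch.tensor([1, 0])                        # 1 = hateful, 0 = not hateful

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)              # loss computed internally
outputs.loss.backward()
optimizer.step()
print("training loss:", outputs.loss.item())
```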

18.
With the rapid development of mobile computing and Web technologies, online hate speech has spread increasingly on social network platforms, since it is easy to post any opinion. Previous studies confirm that exposure to online hate speech has serious offline consequences for historically deprived communities. Thus, research on automated hate speech detection has attracted much attention. However, the role of social networks in identifying hate-related vulnerable communities is not well investigated. Hate speech can affect all population groups, but some are more vulnerable to its impact than others. For example, for ethnic groups whose languages have few computational resources, it is a challenge to automatically collect and process online texts, not to mention to perform automatic hate speech detection on social media. In this paper, we propose a hate speech detection approach to identify hatred against vulnerable minority groups on social media. Firstly, in the Spark distributed processing framework, posts are automatically collected and pre-processed, and features are extracted using word n-grams and word embedding techniques such as Word2Vec. Secondly, deep learning classification algorithms, such as the Gated Recurrent Unit (GRU), a variant of Recurrent Neural Networks (RNNs), are used for hate speech detection. Finally, hate words are clustered with methods such as Word2Vec to predict the potential target ethnic group of the hatred. In our experiments, we use the Amharic language in Ethiopia as an example. Since there was no publicly available dataset of Amharic texts, we crawled Facebook pages to prepare the corpus, and since data annotation can be biased by culture, we recruited annotators from different cultural backgrounds and achieved better inter-annotator agreement. In our experimental results, feature extraction using word embedding techniques such as Word2Vec performs better with both classical and deep learning-based classification algorithms for hate speech detection, among which GRU achieves the best result. Our proposed approach successfully identifies the Tigre ethnic group as the most vulnerable community in terms of hatred, compared with the Amhara and Oromo. Identifying groups vulnerable to hatred is therefore vital so that they can be protected by applying automatic hate speech detection models to remove content that aggravates psychological harm and physical conflict. This can also encourage the development of policies, strategies, and tools to empower and protect vulnerable communities.
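A compact sketch of the feature-extraction and classification idea: Word2Vec embeddings are trained with gensim and used to initialize the embedding layer of a small GRU classifier. The toy posts, dimensions, and hyperparameters are placeholder assumptions standing in for the Amharic Facebook corpus.

```python
# Word2Vec embeddings + GRU classifier sketch (toy data stands in for Amharic posts).
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

posts = [["this", "is", "fine"], ["bad", "words", "here"],
         ["fine", "words"], ["bad", "here"]]
labels = np.array([0, 1, 0, 1])                      # 1 = hate speech (toy)

w2v = Word2Vec(sentences=posts, vector_size=16, window=2, min_count=1, seed=0)
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}   # 0 reserved for padding

max_len = 3
X = np.zeros((len(posts), max_len), dtype="int32")
for i, post in enumerate(posts):
    for j, w in enumerate(post[:max_len]):
        X[i, j] = vocab[w]

embeddings = np.zeros((len(vocab) + 1, 16), dtype="float32")
for w, idx in vocab.items():
    embeddings[idx] = w2v.wv[w]                      # copy pretrained vectors

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab) + 1, 16, mask_zero=True,
        embeddings_initializer=tf.keras.initializers.Constant(embeddings)),
    tf.keras.layers.GRU(8),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # hate vs. not hate
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=1, verbose=0)
```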

19.
When public health emergencies occur, a large amount of low-credibility information is widely disseminated by social bots, which can easily manipulate public sentiment and thus pose a potential threat to the public opinion ecology of social media. Therefore, exploring how social bots affect information diffusion in social networks is a key strategy for network governance. This study combines machine learning methods and causal regression methods, with theoretical support, to explore how social bots influence information diffusion in social networks. Specifically, combining a stakeholder perspective with emotional contagion theory, we propose several questions and hypotheses to investigate the influence of social bots. The study then obtained 144,314 pieces of public opinion data related to COVID-19 in J city from March 1, 2022, to April 18, 2022, on Weibo, and selected 185,782 pieces of data related to the COVID-19 outbreak in X city from December 9, 2021, to January 10, 2022, as a supplement and for verification. A comparative analysis of the different datasets revealed the following findings. Firstly, through the STM topic model, it is found that some topics posted by social bots differ significantly from those posted by humans, and that social bots play an important role in certain topics. Secondly, based on regression analysis, the study finds that social bots tend to transmit information with negative sentiment more than positive sentiment. Thirdly, the study verifies the specific distribution of social bots in sentiment transmission through network analysis and finds that social bots are weaker than human users in their ability to spread negative sentiment. Finally, a Granger causality test confirms that the sentiments of humans and bots can predict each other over time. The results provide practical suggestions for emergency management of sudden public opinion events and a useful reference for the identification and analysis of social bots, which is conducive to maintaining network security and the stability of social order.
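A short statsmodels sketch of the final step: a Granger causality test checks whether one sentiment series helps predict another over a given number of lags. The synthetic series below are placeholders for the daily bot and human sentiment indices derived from the Weibo data.

```python
# Granger causality sketch: does bot sentiment help predict human sentiment?
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
bot = rng.normal(size=60)
human = np.roll(bot, 2) + rng.normal(scale=0.3, size=60)   # human lags bot by 2 days

# Column order matters: the test asks whether the 2nd column Granger-causes the 1st.
data = pd.DataFrame({"human_sentiment": human, "bot_sentiment": bot})
results = grangercausalitytests(data, maxlag=3, verbose=False)
for lag, res in results.items():
    print(f"lag {lag}: p-value = {res[0]['ssr_ftest'][1]:.4f}")
```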

20.
[Purpose/Significance] This paper summarizes research progress on broad learning based on online social media data and, from an information science perspective, analyzes its application prospects and future development trends. [Method/Process] Using bibliometric analysis, the paper focuses on the current applications of broad learning techniques in online social network analysis, including network embedding, link prediction, and community detection. [Result/Conclusion] Broad learning can fuse multiple large heterogeneous data sources and apply a unified set of analysis methods to perform collaborative data mining tasks across the fused sources. Its successful applications in heterogeneous social network analysis lay a theoretical and technical foundation for research in information science, where broader and deeper results can be expected.
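As a small illustration of one task listed above, the following networkx sketch scores candidate links in a toy social graph with the Jaccard coefficient, a common baseline for link prediction; the graph is invented for illustration only.

```python
# Link-prediction sketch: Jaccard-coefficient scores for currently unconnected pairs.
import networkx as nx

G = nx.Graph([("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
              ("carol", "dave"), ("dave", "erin")])

# Higher score = more likely future link between the two users.
for u, v, score in sorted(nx.jaccard_coefficient(G), key=lambda t: -t[2]):
    print(f"{u} - {v}: {score:.2f}")
```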
