Similar Documents
20 similar documents found.
1.
We introduce Big Data Analytics (BDA) and Sentiment Analysis (SA) to the study of international negotiations through an application to the UK-EU Brexit negotiations and Twitter user sentiment. We show that SA of tweets has potential as a real-time barometer of public sentiment towards negotiating outcomes, informing government decision-making. Despite the increasing need for information on collective preferences regarding possible negotiating outcomes, negotiators have been slow to capitalise on BDA. Through SA of a corpus of 13,018,367 tweets on defined Brexit hashtags, we illustrate how SA can provide decision-makers engaged in international negotiations with a platform for grasping collective preferences. We show that BDA and SA can enhance decision-making and strategy in public policy and negotiation contexts of the magnitude of Brexit. Our findings indicate that the most and least preferred Brexit outcomes could have been inferred from the emotions expressed by Twitter users. We argue that BDA can be a mechanism to map the options available to decision-makers and to inform their decision-making. Our work thereby proposes SA as part of the international negotiation toolbox, remedying the informational gap between decision-makers and citizens' preferred outcomes.
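A minimal sketch of the underlying mechanics, not the authors' pipeline: a toy lexicon-based scorer aggregates tweet sentiment per Brexit-outcome hashtag. The lexicon, hashtags, and tweets below are illustrative placeholders.

```python
# Toy lexicon-based sentiment tallying per hashtag (illustrative only).
from collections import defaultdict

POSITIVE = {"good", "great", "hope", "win", "support"}
NEGATIVE = {"bad", "fear", "chaos", "lose", "disaster"}

def tweet_score(text):
    """Return +1/-1/0 from a crude count of lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

def sentiment_by_hashtag(tweets):
    """tweets: iterable of (hashtag, text); returns mean sentiment per tag."""
    totals, counts = defaultdict(int), defaultdict(int)
    for tag, text in tweets:
        totals[tag] += tweet_score(text)
        counts[tag] += 1
    return {tag: totals[tag] / counts[tag] for tag in totals}

print(sentiment_by_hashtag([("#nodeal", "fear and chaos ahead"),
                            ("#softbrexit", "hope for a good deal")]))
```

Tracked over time, such per-hashtag averages are what would serve as the "real-time barometer" the abstract describes.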

2.
Although organizational factors related to big data analytics (BDA) and its performance have been studied extensively, the number of failed BDA projects continues to rise. The quality of BDA information is a commonly cited factor in explanations for such failures and could prove key to improving project performance. Using the resource-based view (RBV) lens, the data analytics literature, business strategy control, and an empirical setup of two studies based on marketing and information technology managerial data, we draw on the dimensions of the balanced scorecard (BSC) as an integrating framework for BDA organizational factors. Specifically, we tested a model, from two different perspectives, that explains information quality through analytical talent and organizations' data plan alignment. Results showed that the two types of managers understand information quality differently. We identify the characteristics that make marketing managers better informants on information quality, and we find that hybrid (embedded) analyst placements achieve better performance. Moreover, we add theoretical rigour by incorporating the moderating effect of the use of big data analytics in companies. Finally, the BSC provided a greater causal understanding of the resources and capabilities within a data strategy.

3.
Despite the popularity of big data and analytics (BDA) in industry, research on the economic value of BDA is still at an early stage, and little attention has been paid to quantifying the longitudinal impact of organizational BDA implementation on firm performance. Grounded in organizational learning theory, this study empirically demonstrates the impact of BDA implementation on organizational performance and how industry environment characteristics moderate the BDA-performance relationship. Using secondary data on BDA implementation from 2010 to February 2020, we find that BDA implementation has a significant impact on two types of business value creation: operational efficiency and business growth. Furthermore, the impact of BDA on operational efficiency is amplified in less dynamic and complex environments, while the BDA-business growth relationship is more pronounced in more dynamic, complex, and munificent environments. Collectively, this study provides a theory-centric understanding of BDA's economic benefits. The findings offer firms insights into what benefits BDA implementation may actually generate and how they may align their use of BDA with the industry environments in which they operate.

4.
Five hundred million tweets are posted daily, making Twitter a major social media platform from which topical information on events can be extracted. These events are characterised by three main dimensions: time, location, and entity-related information. The focus of this paper is location, an essential dimension for geo-spatial applications, whether supporting rescue operations during a disaster or powering contextual recommendations. The first type of application needs high recall, while the second is more precision-oriented. This paper studies the recall/precision trade-off, combining different methods of extracting locations. In the context of short posts, tools developed for natural language are not sufficient on their own, since tweets are generally too short to be linguistically well-formed. Bearing in mind the high number of posts to be handled, we hypothesize that predicting whether a post contains a location or not could make location extractors more focused and thus more effective. We introduce a model to predict whether a tweet contains a location and show that location prediction is a useful pre-processing step for location extraction. We define a number of new tweet features and conduct an extensive evaluation. Our findings are that (1) combining existing location extraction tools is effective for precision-oriented or recall-oriented results, (2) enriching the tweet representation is effective for predicting whether a tweet contains a location, (3) words appearing in a geographical gazetteer and the occurrence of a preposition just before a proper noun are the two most important features for predicting the occurrence of a location in tweets, and (4) the accuracy of location extraction improves when it is possible to predict that there is a location in a tweet.
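A hedged sketch of the pre-processing step, assuming a toy gazetteer and preposition list: the two features mirror the ones the paper reports as most important (gazetteer hits and a preposition immediately before a capitalised token), feeding a simple binary classifier.

```python
# Binary "does this tweet contain a location?" classifier (toy resources).
import numpy as np
from sklearn.linear_model import LogisticRegression

GAZETTEER = {"paris", "london", "texas"}       # stand-in for a real gazetteer
PREPOSITIONS = {"in", "at", "near", "from"}

def features(tweet):
    """Two features: gazetteer hits, preposition followed by a capitalised token."""
    toks = tweet.split()
    gaz = sum(t.lower().strip("#@,.!") in GAZETTEER for t in toks)
    prep_propn = sum(
        toks[i].lower() in PREPOSITIONS and toks[i + 1][:1].isupper()
        for i in range(len(toks) - 1)
    )
    return [float(gaz), float(prep_propn)]

X = np.array([features(t) for t in
              ["Flooding in Paris tonight", "so tired of this"]])
y = np.array([1, 0])   # 1 = the tweet mentions a location
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))  # a real setup would train on many labelled tweets
```

Only tweets the classifier flags would then be passed to the heavier location extractors.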

5.
Financial decisions are often based on classification models that assign a set of observations to predefined groups. Various data classification models have been developed to foresee the financial crisis of an organization using its historical data. One important step towards developing an accurate financial crisis prediction (FCP) model is the selection of variables (features) relevant to the problem at hand. This feature selection problem helps to improve classification performance. This paper proposes an Ant Colony Optimization (ACO) based FCP model that incorporates two phases: an ACO-based feature selection (ACO-FS) algorithm and an ACO-based data classification (ACO-DC) algorithm. The proposed ACO-FCP model is validated on five benchmark datasets that include both qualitative and quantitative features. For the feature selection design, the developed ACO-FS method is compared with three existing feature selection algorithms: the genetic algorithm (GA), Particle Swarm Optimization (PSO), and Grey Wolf Optimization (GWO). In addition, classification results are compared between ACO-DC and state-of-the-art methods. Experimental analysis shows that the ACO-FCP ensemble model is superior to and more robust than its counterparts. Consequently, this study strongly suggests that the proposed ACO-FCP model is highly competitive with traditional and other artificial intelligence techniques.
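A simplified sketch of ant-colony feature selection, not the paper's exact ACO-FS: pheromone weights bias which features each ant samples, subsets are scored by cross-validated accuracy, and pheromone is reinforced along the best subset. All hyperparameters and data below are illustrative.

```python
# Simplified ACO-style wrapper feature selection (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=15,
                           n_informative=5, random_state=0)
n_feat, n_ants, n_iter, evap = X.shape[1], 10, 20, 0.1
pheromone = np.ones(n_feat)

def subset_score(mask):
    """Cross-validated accuracy of a KNN classifier on the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

best_mask, best_score = np.ones(n_feat, dtype=bool), -1.0
for _ in range(n_iter):
    prob = pheromone / pheromone.sum()                 # selection probabilities
    for _ in range(n_ants):
        mask = rng.random(n_feat) < prob * n_feat * 0.4  # each ant picks ~40%
        s = subset_score(mask)
        if s > best_score:
            best_mask, best_score = mask.copy(), s
    pheromone *= 1.0 - evap                            # evaporation
    pheromone[best_mask] += best_score                 # reinforce the best subset

print("selected:", np.flatnonzero(best_mask), "cv accuracy: %.3f" % best_score)
```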

6.
Big Data Analytics (BDA) is an increasingly trending practice that draws on enormous amounts of data to support relevant decision-making. Developments in BDA provide new paradigms and solutions for big data sources, storage, and advanced analytics, offering a nuanced view of big data development and insights into how it can create value for firms and customers. This article presents a comprehensive, well-informed, and realistic analysis of deploying big data analytics successfully in companies. It provides an overview of the architecture of BDA, comprising six components: (i) data generation, (ii) data acquisition, (iii) data storage, (iv) advanced data analytics, (v) data visualization, and (vi) decision-making for value creation. Seven V characteristics of BDA are explored: Volume, Velocity, Variety, Valence, Veracity, Variability, and Value. Various big data analytics tools, techniques, and technologies are described. Furthermore, the article presents a methodical analysis of the use of BDA in applications such as agriculture, healthcare, cyber security, and smart cities, and highlights previous research, challenges, the current status, and future directions of big data analytics for various application platforms. The overview addresses three issues: (i) the concepts, characteristics, and processing paradigms of BDA; (ii) the state-of-the-art framework for decision-making in BDA for companies seeking value creation; and (iii) the current challenges of BDA and possible future directions.

7.
8.
张晓丹 (Zhang Xiaodan). 情报杂志 (Journal of Intelligence), 2021(1): 184-188
[Purpose/Significance] With the rapid growth of digital resources on the Internet, mining valuable information from massive data has become a hot research topic in the data mining field, and large-scale text classification is one of its key problems. Advances in deep learning have made deep-learning-based classification of large text collections feasible. [Method/Process] To address the low efficiency of recently proposed graph neural network text classifiers, an improved method is proposed: a topological graph and its relation matrix are constructed from documents, sentences, and keywords; the nodes of each layer are sampled with a Markov-chain sampling algorithm; features are then reduced through multi-level dimensionality reduction; and classification is finally performed by inductive inference. [Results/Conclusions] To test the performance of the proposed method, experiments were conducted on common public corpora and a self-built corpus of NSTL scientific journal documents, comparing accuracy and inference time against widely used text classification models. The results show that the proposed method markedly improves classification efficiency while preserving classification accuracy on large text and document collections.

9.
Climate change has become one of the most significant crises of our time. Public opinion on climate change is shaped by social media platforms such as Twitter, where it is often divided between believers and deniers. In this paper, we propose a framework to classify a tweet's stance on climate change (denier/believer). Existing approaches to stance detection and classification of climate change tweets have either paid little attention to the characteristics of deniers' tweets or lack an appropriate architecture. The relevant literature reveals that the sentiment and time perspective of climate change conversations on Twitter have a major impact on public attitudes and environmental orientation. In our study, we therefore explore the role of temporal orientation and sentiment analysis (auxiliary tasks) in detecting the stance of tweets on climate change (the main task). Our proposed framework STASY integrates word- and sentence-based feature encoders with intra-task and shared-private attention frameworks to better encode the interactions between task-specific and shared features. We conducted experiments on our novel curated climate change CLiCS dataset (2465 denier and 7235 believer tweets), two publicly available climate change datasets (ClimateICWSM-2022 and ClimateStance-2022), and two benchmark stance detection datasets (SemEval-2016 and COVID-19-Stance). The experiments show that, by benefiting from the auxiliary tasks, our approach improves stance detection performance over the baseline methods, with average F1 improvements of 12.14% on our climate change dataset, 15.18% on ClimateICWSM-2022, 12.94% on ClimateStance-2022, 19.38% on SemEval-2016, and 35.01% on COVID-19-Stance.
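A structural sketch of the multi-task idea, with toy dimensions and without the paper's attention frameworks: one shared encoder feeds a main stance head plus sentiment and temporal-orientation auxiliary heads, trained with a down-weighted joint loss.

```python
# Shared encoder + main/auxiliary heads for multi-task stance detection
# (toy dimensions; STASY's attention mechanisms are omitted).
import torch
import torch.nn as nn

class MultiTaskStance(nn.Module):
    def __init__(self, vocab=5000, emb=64, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)  # shared encoder
        self.stance_head = nn.Linear(hid, 2)   # main task: believer/denier
        self.senti_head = nn.Linear(hid, 3)    # aux task: neg/neu/pos
        self.time_head = nn.Linear(hid, 3)     # aux task: past/present/future

    def forward(self, tokens):
        _, h = self.encoder(self.embed(tokens))  # h: (1, batch, hid)
        h = h.squeeze(0)
        return self.stance_head(h), self.senti_head(h), self.time_head(h)

model = MultiTaskStance()
tokens = torch.randint(0, 5000, (4, 20))         # batch of 4 token-id tweets
stance, senti, time_ = model(tokens)
y_st = torch.randint(0, 2, (4,))
y_se, y_ti = torch.randint(0, 3, (4,)), torch.randint(0, 3, (4,))
# joint loss: main task plus down-weighted auxiliary tasks
loss = (nn.functional.cross_entropy(stance, y_st)
        + 0.3 * nn.functional.cross_entropy(senti, y_se)
        + 0.3 * nn.functional.cross_entropy(time_, y_ti))
loss.backward()
print(float(loss))
```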

10.
Understanding how the application of big data analytics (BDA) generates business value is a persistent challenge in information systems (IS) research, and improving that understanding requires unpacking theories with which to study the phenomenon. This study unpacks task-technology fit (TTF) theory to generate new and improved insights into the business value of BDA. Extant studies of TTF have mainly focused on traditional IT, which differs from malleable and dynamic digital technologies like BDA. While TTF has primarily addressed how technology meets task requirements, this study contends that tasks can also be structured to fit the functionality of the technology. The study proposes a 2 × 2 matrix framework to explain how BDA and tasks interact, indicating how the reconfigurability of tasks and the editability of BDA affect the fit between them. Future research should explore how the fit between tasks and BDA changes over time.

11.
One of the important problems in text classification is the high dimensionality of the feature space. Feature selection methods reduce this dimensionality by selecting the features most valuable for classification. Apart from reducing dimensionality, feature selection can improve a text classifier's performance in terms of both accuracy and time, and it helps to build simpler and therefore more comprehensible models. In this study we propose new methods for feature selection from textual data, called Meaning Based Feature Selection (MBFS), based on the Helmholtz principle from the Gestalt theory of human perception, which is also used in image processing. The proposed approaches are extensively evaluated through their effect on the classification performance of two well-known classifiers on several datasets, and compared with several feature selection algorithms commonly used in text mining. Our results demonstrate the value of the MBFS methods in terms of classification accuracy and execution time.
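A hedged sketch of Helmholtz-principle "meaningfulness" scoring in one common formulation, not necessarily the exact MBFS variant: a word occurring m times in a document, out of K occurrences across N documents, is meaningful when its number of false alarms NFA = C(K, m) / N^(m-1) falls below 1, i.e. when -log(NFA)/m > 0.

```python
# Helmholtz-principle meaningfulness score (one common formulation).
import math

def meaning(m, K, N):
    """Meaningfulness of a word: m in-doc occurrences, K corpus-wide, N docs."""
    log_nfa = math.log(math.comb(K, m)) - (m - 1) * math.log(N)
    return -log_nfa / m  # > 0 means "unexpected", i.e. a meaningful feature

# a word seen 5 times in one doc but only 8 times in a 100-doc corpus:
print(meaning(m=5, K=8, N=100))   # > 0  => keep as a feature
# a word seen once per doc everywhere is expected background noise:
print(meaning(m=1, K=90, N=100))  # <= 0 => drop
```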

12.
While the geographical metadata identifying the originating locations of tweets provides valuable information for effective spatial analysis in social networks, the scarcity of such geotagged tweets limits their usability. In this work, we propose a content-based location prediction method for tweets that analyzes the geographical distribution of tweet texts using Kernel Density Estimation (KDE). The primary novelty of our work is to determine different kernel function settings for every term in tweets based on the location indicativeness of these terms. Our proposed method, which we call locality-adapted KDE, uses information-theoretic metrics and requires no parameter tuning for these settings. As a further enhancement of the term-level distribution model, we analyze spatial point patterns in tweet texts to identify bigrams that deviate significantly from the underlying unigram patterns. We expand the feature space with the selected bigrams and show that this yields a further improvement in the prediction accuracy of our locality-adapted KDE. The expansion results in only a limited increase in the size of the feature space and does not hinder online localization of tweets. The methods we propose rely purely on statistical approaches and require no language-specific settings. Experiments conducted on three tweet sets from different countries show that our proposed solution outperforms existing state-of-the-art techniques, yielding significantly more accurate predictions.
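A simplified sketch of term-level density estimation for tweet localisation, with fixed bandwidths where the paper adapts the kernel to each term's location indicativeness: each term gets a KDE over the coordinates of training tweets containing it, and a test tweet is placed at the grid point maximising the summed log-densities. Coordinates below are toy data.

```python
# Per-term KDE localisation with fixed bandwidths (illustrative).
import numpy as np
from scipy.stats import gaussian_kde

# toy training data: term -> (lat, lon) rows of tweets containing it
term_coords = {
    "beach":  np.array([[36.1, 36.2, 36.0], [-5.3, -5.45, -5.2]]),
    "museum": np.array([[48.8, 48.9, 48.85], [2.3, 2.4, 2.25]]),
}
kdes = {t: gaussian_kde(xy) for t, xy in term_coords.items()}

def predict(tweet_terms, grid):
    """Return the grid point (lat, lon) with the highest summed log-density."""
    scores = sum(np.log(kdes[t](grid) + 1e-12)
                 for t in tweet_terms if t in kdes)
    return grid[:, int(np.argmax(scores))]

lats, lons = np.meshgrid(np.linspace(35, 50, 30), np.linspace(-6, 3, 30))
grid = np.vstack([lats.ravel(), lons.ravel()])
print(predict(["museum", "visit"], grid))  # lands near the Paris-like cluster
```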

13.
Relation classification is one of the most fundamental tasks in the cross-media area and is essential for many practical applications such as information extraction, question-answering systems, and knowledge base construction. In cross-media semantic retrieval, meeting the needs of uniform cross-media representation and semantic analysis requires analyzing latent semantic relationships and constructing a semantically linked cross-media knowledge graph, and relation classification is an important part of this semantic correlation analysis. Most existing methods treat relation classification as a multi-class task without considering correlations between different relationships. However, two relationships in opposite directions are usually not independent of each other, so such relationships are easily confused by traditional approaches. To solve the problem of confusing relationships that share the same semantics but differ in entity direction, this paper proposes a neural network that fuses discrimination information for relation classification. In the proposed model, discrimination information distinguishes relationships of the same semantics with different entity directions: the spatial direction between entities is converted into the mathematical direction of a vector by entity vector subtraction, and the result of this subtraction is used as the discrimination information. The model consists of three modules: a sentence representation module, a relation discrimination module, and a discrimination fusion module. Two fusion methods are used for feature fusion: a cascade-based method and a method based on a convolutional neural network. In addition, the loss function combines the cross-entropy function with a modified max-margin function. The experimental results show that the proposed discrimination feature is effective in distinguishing confusable relationships, and that the proposed loss function improves model performance to a certain extent. The proposed model achieves an F1 score of 84.8% without any additional features or NLP analysis tools, and thus holds promise for incorporation into various cross-media systems.
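A minimal sketch of the discrimination signal: subtracting entity embeddings yields a direction-sensitive feature, so the pairs (e1, e2) and (e2, e1) are no longer indistinguishable inputs. The embeddings and dimensions below are random stand-ins for learned ones.

```python
# Entity-difference vector as discrimination information (illustrative).
import numpy as np

rng = np.random.default_rng(1)
emb = {"Paris": rng.normal(size=8), "France": rng.normal(size=8)}

def pair_features(sent_vec, head, tail):
    """Concatenate the sentence representation with the entity-difference
    vector that serves as discrimination information."""
    return np.concatenate([sent_vec, emb[head] - emb[tail]])

sent = rng.normal(size=16)  # stand-in for the sentence-encoder output
f_fwd = pair_features(sent, "Paris", "France")   # capital-of(Paris, France)
f_rev = pair_features(sent, "France", "Paris")   # same relation, reversed
print(np.allclose(f_fwd[:16], f_rev[:16]), np.allclose(f_fwd[16:], -f_rev[16:]))
# -> True True: identical sentence features, opposite discrimination vector
```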

14.
To address the heavy computational load and weak discriminative features encountered by common PCA-based face recognition methods, a PCA+2DPCA face recognition method based on a genetic algorithm is proposed. Experiments on the ORL face database verify the feasibility of the method.
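A hedged sketch of plain 2DPCA on image matrices; the paper additionally combines PCA with 2DPCA and uses a genetic algorithm to tune the projection, which is omitted here. The image scatter matrix is built from mean-centred images and each face is projected onto its leading eigenvectors.

```python
# Plain 2DPCA feature extraction on toy "face" matrices.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((40, 32, 32))          # stand-in for ORL-style face images
mean_face = faces.mean(axis=0)

# image scatter matrix: mean of (A - mean)^T (A - mean) over all images
G = sum((A - mean_face).T @ (A - mean_face) for A in faces) / len(faces)
eigvals, eigvecs = np.linalg.eigh(G)      # eigenvalues in ascending order
W = eigvecs[:, -5:]                       # top-5 projection axes

features = faces @ W                      # each face -> 32x5 feature matrix
print(features.shape)                     # (40, 32, 5)
```

A nearest-neighbour classifier over these feature matrices would complete a minimal recognition pipeline.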

15.
With the onset of COVID-19, the pandemic sparked extensive discussion on social media platforms like Twitter, followed by many social media analyses concerning it. Despite this abundance of studies, however, little work has been done on the reactions of the public and officials on social networks and the associations between them, especially during the early outbreak stage. In this paper, a total of 9,259,861 COVID-19-related English tweets published from 31 December 2019 to 11 March 2020 are collected to explore the participatory dynamics of public attention and news coverage during the early stage of the pandemic. An easy numeric data augmentation (ENDA) technique is proposed for generating new samples while preserving label validity; it attains superior performance on text classification with deep models (BERT) compared to a simpler data augmentation method. To further demonstrate the efficacy of ENDA, experiments and ablation studies have also been conducted on other benchmark datasets. The classification results for COVID-19 tweets show tweet peaks triggered by momentous events and a strong positive correlation between the daily number of personal narratives and news reports. We argue that there were three periods divided by the turning points of January 20 and February 23, and that the low level of news coverage suggests missed windows for government response in early January and February. Our study not only contributes to a deeper understanding of the dynamic patterns and relationships between public attention and news coverage on social media during the pandemic but also sheds light on early emergency management and government response on social media during global health crises.
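The abstract describes ENDA only at a high level, so the following is a generic sketch of EDA-style, label-preserving augmentation, not the paper's exact technique: random token swap and random deletion generate new training samples that keep the original label.

```python
# Generic label-preserving text augmentation (EDA-style; illustrative).
import random

def random_swap(tokens, n=1):
    """Swap n random pairs of tokens."""
    toks = tokens[:]
    for _ in range(n):
        i, j = random.sample(range(len(toks)), 2)
        toks[i], toks[j] = toks[j], toks[i]
    return toks

def random_delete(tokens, p=0.1):
    """Drop each token with probability p, never returning an empty tweet."""
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]

def augment(text, label, n_new=2):
    """Generate n_new (augmented_text, label) pairs from one example."""
    toks = text.split()
    ops = [random_swap, random_delete]
    return [(" ".join(random.choice(ops)(toks)), label) for _ in range(n_new)]

random.seed(0)
print(augment("cases are rising fast in the city", label="news_report"))
```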

16.
The graph convolutional network (GCN) is a powerful tool for processing graph data and has achieved satisfactory performance on the task of node classification. In general, a GCN uses a fixed graph to guide the graph convolution operation. However, a fixed graph derived from the original feature space may contain noise or outliers, which can degrade the effectiveness of the GCN. To address this issue, we propose a robust graph learning convolutional network (RGLCN). Specifically, we design a robust graph learning model based on a sparsity constraint and a strong connectivity constraint to achieve smooth graph learning. In addition, we integrate the graph learning model into the GCN to explore representative information, aiming to learn a high-quality graph for the downstream task. Experiments on citation network datasets show that the proposed RGLCN outperforms existing comparison methods on the task of node classification.
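A minimal sketch of one graph-convolution step on a fixed graph, the baseline that RGLCN builds on (its learned graph with sparsity and strong-connectivity constraints is not reproduced here): H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W).

```python
# One GCN propagation step with symmetric normalisation (illustrative).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # 4-node chain graph
H = rng.normal(size=(4, 3))                 # node features
W = rng.normal(size=(3, 2))                 # layer weights

A_hat = A + np.eye(len(A))                  # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

H_next = np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation
print(H_next.shape)                         # (4, 2)
```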

17.
18.
While the use of big data tends to add value for business throughout the entire value chain, integrating big data analytics (BDA) into the decision-making process remains a challenge. This study, based on a systematic literature review, thematic analysis, and qualitative interview findings, proposes six steps for establishing both rigor and relevance in analytics-driven decision-making. Our findings illuminate the key steps in this decision process in the context of service systems: problem definition, review of past findings, model development, data collection, data analysis, and action on insights. Although presented as a sequence, these steps are interdependent and iterative. The proposed six-step analytics-driven decision-making process, the practical evidence from service systems, and the future research agenda together provide a foundation for future scholarly research and can serve as a step-wise guide for industry practitioners.

19.
余本功 (Yu Bengong), 王胡燕 (Wang Huyan). 情报科学 (Information Science), 2021, 39(7): 99-107
[Purpose/Significance] Effectively classifying the massive text data generated on the Internet improves text-processing efficiency and supports decision-making for enterprises and users. [Method/Process] Traditional word-vector feature embeddings cannot capture polysemy and suffer from sparse features and difficult feature extraction. This paper therefore proposes a sentence-feature-based multi-channel hierarchical text classification model (SFM-DCNN). First, the model uses BERT sentence-vector modelling to upgrade feature embedding from the word level to the sentence level, effectively capturing semantic features such as polysemy, word position, and inter-word relations. Second, a multi-channel deep convolutional model extracts hidden features from the sentence features at multiple levels, yielding features closer to the original semantics. [Results/Conclusions] The model was validated on three different datasets against related classification methods; SFM-DCNN achieved higher accuracy than the other models, indicating that it is of practical reference value. [Innovation/Limitations] Addressing polysemy and feature sparsity in text classification, the model innovatively uses BERT to extract global semantic information and combines it with multi-channel deep convolution to capture local hierarchical features; however, owing to time and hardware constraints, the model was not further pre-trained and the experimental datasets are not sufficiently extensive.
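A hedged structural sketch of the multi-channel idea with toy dimensions: parallel Conv1d channels with different kernel widths extract multi-granularity features from a document's sequence of sentence embeddings (random tensors stand in for BERT sentence vectors) before classification.

```python
# Multi-channel CNN over sentence embeddings (toy stand-in for SFM-DCNN).
import torch
import torch.nn as nn

class MultiChannelCNN(nn.Module):
    def __init__(self, sent_dim=768, n_classes=4, n_filters=32):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(sent_dim, n_filters, kernel_size=k) for k in (2, 3, 4)
        )
        self.fc = nn.Linear(3 * n_filters, n_classes)

    def forward(self, sent_embs):                 # (batch, n_sents, sent_dim)
        x = sent_embs.transpose(1, 2)             # Conv1d expects (B, C, L)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))  # concat channels, classify

docs = torch.randn(8, 10, 768)   # 8 docs x 10 sentence embeddings each
print(MultiChannelCNN()(docs).shape)  # torch.Size([8, 4])
```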

20.
Distant supervision (DS) has the advantage of automatically generating large amounts of labelled training data and has been widely used for relation extraction. However, the automatically labelled data in distant supervision usually contain many wrong labels (Riedel, Yao, & McCallum, 2010). This paper presents a novel method to reduce such wrong labels. The proposed method uses a semantic Jaccard measure with word embeddings to compute the semantic similarity between the relation phrase in the knowledge base and the dependency phrases between two entities in a sentence, and filters wrong labels accordingly. In the process of reducing wrong labels, the semantic Jaccard algorithm selects a core dependency phrase to represent the candidate relation in a sentence, which captures features for relation classification and avoids the negative impact of irrelevant term sequences from which previous neural network models of relation extraction often suffer. For relation classification, the core dependency phrases are also used as the input to a convolutional neural network (CNN). The experimental results show that, compared with methods using the original DS data, methods using the filtered DS data performed much better in relation extraction, indicating that the semantic-similarity-based method is effective in reducing wrong labels. The relation extraction performance of the CNN model using core dependency phrases as input is the best of all, indicating that core dependency phrases suffice to capture the features needed for relation classification while avoiding the negative impact of irrelevant terms.
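A hedged sketch of one way to realise a semantic Jaccard with word embeddings (the paper's exact formulation may differ): tokens of the knowledge-base relation phrase and the sentence's dependency phrase count as overlapping when their embedding cosine exceeds a threshold, so near-synonyms still match. The embeddings below are random stand-ins.

```python
# Embedding-based "semantic Jaccard" between two phrases (illustrative).
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=10) for w in ["born", "in", "of", "located"]}
emb["birthplace"] = emb["born"] + 0.05 * rng.normal(size=10)  # near-synonym

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def semantic_jaccard(phrase_a, phrase_b, thr=0.8):
    """Soft Jaccard: |matches| / (|A| + |B| - |matches|)."""
    unmatched, matches = list(phrase_b), 0
    for u in phrase_a:
        for v in unmatched:
            if cos(emb[u], emb[v]) >= thr:
                matches += 1
                unmatched.remove(v)
                break
    denom = len(phrase_a) + len(phrase_b) - matches
    return matches / denom if denom else 0.0

# high score -> keep the distant-supervision label, low score -> filter it out
print(semantic_jaccard(["born", "in"], ["birthplace", "of"]))
```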
