Similar Articles
1.
The aim in multi-label text classification is to assign a set of labels to a given document. Previous classifier-chain and sequence-to-sequence models have been shown to have a powerful ability to capture label correlations. However, they rely heavily on label order, while the labels in multi-label data are essentially an unordered set, so the performance of these approaches varies greatly depending on the order in which the labels are arranged. To avoid dependence on label order, we design a reasoning-based algorithm named Multi-Label Reasoner (ML-Reasoner) for multi-label classification. ML-Reasoner employs a binary classifier to predict all labels simultaneously and applies a novel iterative reasoning mechanism to effectively exploit inter-label information, where each round of reasoning takes the previously predicted likelihoods of all labels as additional input. This approach utilizes information between labels while avoiding label-order sensitivity. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches on the challenging AAPD dataset. We also apply our reasoning module to a variety of strong neural base models and show that it boosts performance significantly in each case.
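A minimal sketch of the iterative reasoning idea, assuming a logistic-regression base classifier (the paper's ML-Reasoner uses neural text encoders; `n_rounds` and the retrain-per-round setup here are illustrative simplifications):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

def reason_and_predict(X_train, Y_train, X_test, n_rounds=3):
    """Round 0 sees no label information; each later round appends the
    previously predicted likelihoods of ALL labels as extra features."""
    n_labels = Y_train.shape[1]
    probs_tr = np.zeros((X_train.shape[0], n_labels))
    probs_te = np.zeros((X_test.shape[0], n_labels))
    for _ in range(n_rounds):
        clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
        clf.fit(np.hstack([X_train, probs_tr]), Y_train)
        # Re-estimate likelihoods for every label simultaneously (order-free).
        probs_tr = clf.predict_proba(np.hstack([X_train, probs_tr]))
        probs_te = clf.predict_proba(np.hstack([X_test, probs_te]))
    return (probs_te >= 0.5).astype(int)
```

Because every label is re-predicted jointly in each round, no label ordering is ever imposed.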

2.
Multi-label text categorization refers to the problem of assigning each document to a subset of categories by means of multi-label learning algorithms. Unlike for English and most other languages, the unavailability of Arabic benchmark datasets has prevented the evaluation of multi-label learning algorithms for Arabic text categorization; as a result, only a few recent studies have dealt with multi-label Arabic text categorization, and on non-benchmark, inaccessible datasets. This work therefore aims to promote multi-label Arabic text categorization by (a) introducing "RTAnews", a new benchmark dataset of multi-label Arabic news articles for text categorization and other supervised learning tasks, publicly available in several formats compatible with existing multi-label learning tools such as MEKA and Mulan; and (b) conducting an extensive comparison of most of the well-known multi-label learning algorithms for Arabic text categorization, to establish baseline results and show the effectiveness of these algorithms on RTAnews. The evaluation involves four transformation-based algorithms (Binary Relevance, Classifier Chains, Calibrated Ranking by Pairwise Comparison and Label Powerset) with three base learners (Support Vector Machine, k-Nearest Neighbors and Random Forest), and four adaptation-based algorithms (Multi-label kNN, Instance-Based Learning by Logistic Regression Multi-label, Binary Relevance kNN and RFBoost). The reported baseline results show that both RFBoost and Label Powerset with a Support Vector Machine base learner outperformed the other compared algorithms. Results also demonstrate that adaptation-based algorithms are faster than transformation-based algorithms.
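For readers unfamiliar with the transformation-based algorithms compared here, a minimal sklearn sketch of two of them, Binary Relevance and Label Powerset (MEKA and Mulan ship full implementations; `LinearSVC` stands in for the SVM base learner):

```python
import numpy as np
from sklearn.svm import LinearSVC

def binary_relevance_fit(X, Y):
    """Binary Relevance: one independent binary classifier per label."""
    return [LinearSVC().fit(X, Y[:, j]) for j in range(Y.shape[1])]

def binary_relevance_predict(models, X):
    return np.column_stack([m.predict(X) for m in models])

def label_powerset_fit(X, Y):
    """Label Powerset: each distinct label set becomes one class of a single
    multi-class problem (it can only predict sets seen during training)."""
    keys = [tuple(row) for row in Y]
    class_of = {k: i for i, k in enumerate(sorted(set(keys)))}
    inverse = {i: np.array(k) for k, i in class_of.items()}
    return LinearSVC().fit(X, [class_of[k] for k in keys]), inverse

def label_powerset_predict(model, inverse, X):
    return np.vstack([inverse[c] for c in model.predict(X)])
```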

3.
Identifying petition expectation for government response plays an important role in government administrative service. Although some petition platforms allow citizens to label the petition expectation when they submit e-petitions, misunderstanding and misselection of petition labels still make manual classification necessary. Automatic petition expectation identification faces challenges from poor context information, heavy noise and the casual syntactic structure of petition texts. In this paper we propose a novel deep reinforcement learning based method, named PecidRL, for correcting and identifying petition expectations (citizens' demands for the level of government response). We collect a dataset of 237,042 petitions from Message Board for Leaders, the largest official petition platform in China. First, we introduce a deep reinforcement learning framework to automatically correct mislabeled and ambiguous petition labels. Then, multi-view textual features, including word-level and document-level semantic features, sentiment features and different textual graph representations, are extracted and integrated to enrich the auxiliary information. Furthermore, based on the corrected petitions, 19 novel petition expectation identification models are constructed by extending 11 popular machine learning models. Finally, a comprehensive comparison and evaluation is conducted to select the final petition expectation identification model with the best performance. After correction by PecidRL, every metric across all extended identification models improves by an average of 8.3%, with the highest increase reaching 14.2%. The optimal model is Peti-SVM-bert, with the highest accuracy of 93.66%. We also analyze the variation of petition expectation labels in the dataset using PecidRL, finding that 16.9% of e-petitioners tend to exaggerate the urgency of their petitions to draw greater government attention to their appeals, while the urgency of 4.4% of petitions is underestimated. This study has substantial academic and practical value for improving government efficiency. Additionally, a web server has been developed to serve government administrators and other researchers, accessible at http://www.csbg-jlu.info/PecidRL/.
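The paper's reinforcement-learning correction framework is considerably richer than can be shown here; the toy REINFORCE sketch below only illustrates the core mechanic of learning a label-flip policy from a reward signal. All features, hyperparameters, and the oracle reward are invented for the illustration; in PecidRL the reward would come from downstream model feedback, not ground truth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: X are petition feature vectors, y_obs are noisy urgency labels.
X = rng.normal(size=(200, 8))
y_true = (X @ rng.normal(size=8) > 0).astype(int)
corrupted = rng.random(200) < 0.15                   # 15% of labels corrupted
y_obs = np.where(corrupted, 1 - y_true, y_true)

w = np.zeros(8)  # policy weights: P(flip | x) = sigmoid(w . x)

for step in range(300):
    p_flip = 1.0 / (1.0 + np.exp(-(X @ w)))
    action = (rng.random(200) < p_flip).astype(int)  # 1 = flip the label
    y_corr = np.where(action == 1, 1 - y_obs, y_obs)
    reward = (y_corr == y_true).astype(float) - 0.5  # stand-in oracle reward
    # REINFORCE update: grad of log-policy for a Bernoulli action is (a - p) * x.
    w += 0.5 * ((action - p_flip) * reward) @ X / len(X)
```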

4.
Information residing in multiple modalities (e.g., text, image) of social media posts can jointly provide more comprehensive and clearer insights into an ongoing emergency. To identify information valuable for humanitarian aid from noisy multimodal data, we first clarify the categories of humanitarian information and define a multi-label multimodal humanitarian information identification task, which can accommodate the label inconsistency caused by modality independence while maintaining the correlation between modalities. We propose a Multimodal Humanitarian Information Identification Model that simultaneously captures the Correlation and Independence between modalities (CIMHIM). A tailor-made dataset containing 4,383 annotated text-image pairs was built to evaluate the effectiveness of the model. The experimental results show that CIMHIM outperforms both unimodal and multimodal baseline methods by at least 0.019 in macro-F1 and 0.022 in accuracy. The combination of OCR text, object-level features, and a decision rule based on label correlations enhances the overall performance of CIMHIM. Additional experiments on a similar dataset (CrisisMMD) also demonstrate the robustness of CIMHIM. The task, model, and dataset proposed in this study contribute to the practice of leveraging multimodal social media resources to support effective emergency response.
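A schematic PyTorch sketch of one way to capture both inter-modal correlation (a shared projection) and modality independence (private per-modality heads). The layer sizes and fusion choices are our assumptions; CIMHIM's OCR-text and object-level inputs and its label-correlation decision rule are not shown:

```python
import torch
import torch.nn as nn

class SharedPrivateFusion(nn.Module):
    def __init__(self, d_text, d_img, d_shared, n_labels):
        super().__init__()
        self.text_shared = nn.Linear(d_text, d_shared)    # correlation path
        self.img_shared = nn.Linear(d_img, d_shared)
        self.text_private = nn.Linear(d_text, n_labels)   # independence paths
        self.img_private = nn.Linear(d_img, n_labels)
        self.joint = nn.Linear(d_shared, n_labels)

    def forward(self, text_feat, img_feat):
        shared = torch.tanh(self.text_shared(text_feat) + self.img_shared(img_feat))
        logits = (self.joint(shared)
                  + self.text_private(text_feat)
                  + self.img_private(img_feat))
        return torch.sigmoid(logits)   # independent per-label probabilities
```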

5.
In many important application domains, such as text categorization, scene classification, biomolecular analysis and medical diagnosis, examples are naturally associated with more than one class label, giving rise to multi-label classification problems. This fact has led, in recent years, to a substantial amount of research in multi-label classification. To evaluate and compare multi-label classifiers, researchers have adapted evaluation measures from the single-label paradigm, such as Precision and Recall, and have also developed many measures specifically for the multi-label paradigm, such as Hamming Loss and Subset Accuracy. However, these evaluation measures have been used arbitrarily in multi-label classification experiments, without an objective analysis of correlation or bias. This can lead to misleading conclusions, as experimental results may appear to favor a specific behavior depending on the subset of measures chosen. Moreover, since different papers in the area currently employ distinct subsets of measures, it is difficult to compare results across papers. In this work, we provide a thorough analysis of multi-label evaluation measures and give concrete suggestions to help researchers make an informed decision when choosing evaluation measures for multi-label classification.
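Two of the multi-label-specific measures discussed here are easy to state precisely in code; a minimal NumPy version, assuming binary indicator matrices:

```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    """Fraction of individual label positions that are predicted wrong."""
    return float(np.mean(Y_true != Y_pred))

def subset_accuracy(Y_true, Y_pred):
    """Fraction of examples whose entire label set is predicted exactly."""
    return float(np.mean(np.all(Y_true == Y_pred, axis=1)))

Y_true = np.array([[1, 0, 1], [0, 1, 0]])
Y_pred = np.array([[1, 0, 0], [0, 1, 0]])
print(hamming_loss(Y_true, Y_pred))     # 1 wrong bit out of 6 -> ~0.167
print(subset_accuracy(Y_true, Y_pred))  # 1 exact match out of 2 -> 0.5
```

The contrast between the two outputs illustrates the paper's point: the same predictions can look quite different depending on which measure is reported.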

7.
程雅倩, 黄玮, 金晓祥, 贾佳. 情报科学 (Information Science), 2022, 39(2): 155-161
[Purpose/Significance] Multi-label texts on the we-media platforms of university libraries are high-dimensional and imbalanced, which leads to poor classification performance; studying multi-label text classification methods for university-library we-media platforms in a 5G environment is therefore of practical importance. [Method/Process] We first preprocess the multi-label texts collected from university-library we-media platforms in the 5G environment, including removing meaningless data, word segmentation, and stop-word removal; we then reduce the dimensionality of the multi-label texts with an improved principal component analysis (PCA) method and balance the texts using a vector space model; finally, on the processed texts, we build classifiers with AdaBoost and SVM to perform multi-label classification. [Result/Conclusion] Experimental results show that the proposed method lowers the Hamming loss, raises the F1 score, achieves good multi-label classification performance with low time cost, and is reliable. [Innovation/Limitation] Because the dataset used in this study is not large, the test and validation results have certain limitations; future work will use richer corpora to further improve and refine the proposed method.
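A rough sklearn sketch of the pipeline shape described above, assuming pre-segmented documents joined by spaces; `TruncatedSVD` stands in for the paper's improved PCA (plain PCA does not accept sparse TF-IDF matrices), and the balancing step is omitted:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def svm_pipeline(n_components=100):
    return make_pipeline(
        TfidfVectorizer(),                        # docs = ["token token ...", ...]
        TruncatedSVD(n_components=n_components),  # dimensionality reduction
        OneVsRestClassifier(LinearSVC()),         # one binary SVM per label
    )

def adaboost_pipeline(n_components=100):
    return make_pipeline(
        TfidfVectorizer(),
        TruncatedSVD(n_components=n_components),
        OneVsRestClassifier(AdaBoostClassifier(n_estimators=100)),
    )

# Usage: svm_pipeline().fit(docs, Y), with Y a binary label-indicator matrix.
```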

10.
Text classification or categorization is the process of automatically tagging a textual document with the most relevant labels or categories. When the number of labels is restricted to one, the task becomes single-label text categorization; the multi-label version is more challenging. For Arabic, both tasks (especially the latter) are harder still, given the absence of large, free, and rich Arabic datasets. We therefore introduce new rich and unbiased datasets for both single-label (SANAD) and multi-label (NADiA) Arabic text categorization. Both corpora are made freely available to the research community on Arabic computational linguistics. Further, we present an extensive comparison of several deep learning (DL) models for Arabic text categorization to evaluate their effectiveness on SANAD and NADiA. A unique characteristic of our work, compared to existing ones, is that it requires no pre-processing phase and is fully based on deep learning models. We also studied the impact of word2vec embedding models on the performance of the classification tasks. Our experimental results show solid performance of all models on the SANAD corpus, with a minimum accuracy of 91.18% achieved by convolutional-GRU and a top performance of 96.94% achieved by attention-GRU. As for NADiA, attention-GRU achieved the highest overall accuracy of 88.68% for a maximum subset of 10 categories on the "Masrawy" dataset.
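A compact PyTorch sketch of the attention-GRU architecture family evaluated here; the dimensions and the additive-attention form are our assumptions, and per the paper the embedding table could be initialized from word2vec:

```python
import torch
import torch.nn as nn

class AttentionGRU(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden, n_classes):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        states, _ = self.gru(self.emb(token_ids))  # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.att(states).squeeze(-1), dim=1)
        context = (weights.unsqueeze(-1) * states).sum(dim=1)  # attention pooling
        return self.out(context)                   # class logits
```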

12.
Text summarization is the process of generating a brief version of a document while preserving as much of its fundamental information as possible. Although most text summarization research has focused on supervised learning solutions, few datasets have been created specifically for summarization tasks, and most existing summarization datasets lack the human-generated reference summaries that are vital for both summary generation and evaluation. This study therefore presents a new dataset for abstractive and extractive summarization tasks. The dataset contains academic publications, the abstracts written by their authors, and extracts in two sizes generated by human readers as part of this research; the resulting extracts were evaluated to ensure the validity of the human extract production process. Moreover, the extractive summarization problem was re-investigated on the proposed dataset, with the main focus on analyzing the feature vector to generate more informative summaries. To that end, a comprehensive syntactic feature space was generated for the dataset, and the impact of these features on the informativeness of the resulting summaries was investigated. The summarization capability of semantic features was also examined using GloVe and word2vec embeddings. Finally, an ensembled feature space, the joint use of syntactic and semantic features, was proposed with a long short-term memory based neural network model. The model summaries were evaluated with ROUGE metrics, and the results show that the proposed ensemble feature space remarkably improves on the use of syntactic or semantic features alone. In addition, the summaries produced with the ensembled features prominently outperformed, or performed comparably to, those obtained by state-of-the-art extractive summarization models.
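One plausible reading of the ensembled feature space with an LSTM scorer, as a PyTorch sketch; the `syn_feats`/`sem_feats` split, the hidden size, and the sigmoid sentence scoring are our assumptions:

```python
import torch
import torch.nn as nn

class SentenceExtractor(nn.Module):
    """Scores each sentence of a document for inclusion in the extract,
    from syntactic features concatenated with semantic sentence embeddings
    (e.g., averaged GloVe/word2vec vectors)."""
    def __init__(self, d_syntactic, d_semantic, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(d_syntactic + d_semantic, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, syn_feats, sem_feats):           # (batch, n_sents, d_*)
        x = torch.cat([syn_feats, sem_feats], dim=-1)  # ensembled feature space
        states, _ = self.lstm(x)
        return torch.sigmoid(self.score(states)).squeeze(-1)  # per-sentence score
```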

13.
Knowledge representation learning (KRL) transforms a knowledge graph (KG) from symbol space to vector space. However, KRL under the open world assumption (OWA) is deeply trapped in a dilemma of label scarcity, because labeling is difficult or costly. To address this problem, we propose KRL_MLCCL, a Multi-Label Classification based on Contrastive Learning (CL) knowledge representation learning method. Specifically, (1) we formalize the problem of finding true knowledge graph object (KGO) matchings (KGOMs) under the OWA in the original KGOM sample space (KGOMSS) as multi-label classification with one known true matching (positive example); (2) we solve the problem in a new KGOMSS, generated by augmenting the true matching following the idea of CL (multi-label classification with multiple known true matchings); (3) we score the true matchings with a Hermitian inner product and softmax, and minimize a negative log-likelihood loss to establish a preliminary KRL_MLCCL model; and (4) we migrate the learned model back to the original KGOMSS to solve the true matching problem. We creatively design and apply a positive-example augmentation scheme from CL that gives KRL_MLCCL this back-migration ability, pulling KGOs in true matchings close and pushing KGOs in false matchings away, which helps KRL escape the label-scarcity dilemma faced in modeling. We also propose a negative-example noise-filtering algorithm to enhance this ability. Open world entity prediction (OWEP) experiments on the FB15K-237-OWE dataset show that KRL_MLCCL improves on the state-of-the-art baselines by 3% in Hits@10 and 1.32% in MRR. The OWEP experiments on the KG also show that KRL_MLCCL has better back-migration ability.
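The scoring step (3) can be illustrated concretely. The sketch below assumes ComplEx-style complex embeddings, where the Hermitian inner product is taken between the composed head-relation vector and the conjugated tail; the CL augmentation and back migration are not shown:

```python
import numpy as np

def hermitian_score(h, r, t):
    """Real part of the Hermitian product <h * r, conj(t)> of complex vectors."""
    return float(np.real(np.sum(h * r * np.conj(t))))

def nll_true_matching(h, r, candidates, true_idx):
    """Softmax over candidate tails; negative log-likelihood of the true one."""
    scores = np.array([hermitian_score(h, r, t) for t in candidates])
    log_probs = scores - (scores.max() + np.log(np.exp(scores - scores.max()).sum()))
    return -log_probs[true_idx]
```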

14.
[Purpose/Significance] Starting from semantic mining of multiple policy texts on the theme of open government data, this study uncovers semantic relations among the contents of multiple policy texts and explores a method that reduces manual intervention and enables automated analysis of cross-policy synergy. [Method/Process] Association rule mining is applied to preprocessed open-government-data policy texts, and the resulting valid strong association rules are used to analyze the synergy among the policy texts. [Result/Conclusion] Taking multiple policy texts on open government data as the research object, the number of valid strong association rules obtained is relatively stable when confidence is set to 0.7 and lift is greater than 3; association-rule analysis at different levels of the policy texts yields conclusions largely consistent with manual analysis, verifying that the method can be applied to quantitative research on the semantic synergy of multiple policy texts. [Innovation/Limitation] Association rule mining is used to carry out knowledge reasoning about the synergy of multiple data-policy texts, effectively automating the semantic computation. However, the completeness of the policy vocabulary, the preprocessing steps, and the parameter settings all affect the accuracy of the results, and manual intervention needs to be reduced further.
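The rule-quality measures named above (confidence 0.7, lift greater than 3) are straightforward to compute; a pure-Python sketch over illustrative, made-up policy term sets:

```python
transactions = [   # terms extracted from one policy document each (made up)
    {"open data", "platform", "privacy"},
    {"open data", "platform"},
    {"open data", "privacy"},
    {"platform", "privacy"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):   # P(rhs | lhs)
    return support(lhs | rhs) / support(lhs)

def lift(lhs, rhs):         # confidence relative to rhs's base rate
    return confidence(lhs, rhs) / support(rhs)

lhs, rhs = {"open data"}, {"platform"}
print(confidence(lhs, rhs), lift(lhs, rhs))
# Keep only rules with confidence >= 0.7 and lift > 3, per the study's thresholds.
```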

15.
This paper studies how to learn accurate ranking functions from noisy training data for information retrieval. Most previous work on learning to rank assumes that the relevance labels in the training data are reliable; in reality, however, the labels usually contain noise due to the difficulty of relevance judgments and several other reasons. To tackle this problem, we propose a novel approach to learning to rank based on a probabilistic graphical model. Considering that an observed label might be noisy, we introduce a new variable to indicate the true label of each instance, and use a graphical model to capture the joint distribution of the true and observed labels given document features. The graphical model distinguishes true labels from observed labels and is specially designed for ranking in information retrieval, so it helps to learn a more accurate model from noisy training data. Experiments on a real web search dataset show that the proposed approach can significantly outperform previous approaches.
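The core idea of separating true from observed labels can be shown with a toy posterior computation; a symmetric flip-noise model is assumed here, whereas the paper's graphical model is richer and tailored to ranking:

```python
def true_label_posterior(p_rel, flip_rate, observed):
    """P(true = relevant | observed label), where the observed label flips the
    true one with probability flip_rate, and p_rel is the model's prior
    P(true = relevant | document features)."""
    p_obs_if_rel = flip_rate if observed == 0 else 1 - flip_rate
    p_obs_if_irr = 1 - flip_rate if observed == 0 else flip_rate
    num = p_obs_if_rel * p_rel
    return num / (num + p_obs_if_irr * (1 - p_rel))

print(true_label_posterior(0.6, 0.2, observed=1))  # ~0.857: label confirms prior
print(true_label_posterior(0.6, 0.2, observed=0))  # ~0.273: noisy label discounts it
```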

16.
As a well-known multi-label classification method, ML-KNN may have its performance degraded by uncertain knowledge in the samples. Rough set theory is an effective tool for analyzing data uncertainty, as it can identify the samples prone to misclassification during learning. In this paper, a hybrid framework fusing rough sets with ML-KNN for multi-label learning is proposed; its main idea is to characterize easily misclassified samples via rough sets and to measure the discernibility of attributes for such samples. First, a rough set model named NRFD_RS, based on neighborhood relations and fuzzy decisions, is proposed for multi-label data to find the heterogeneous sample pairs generated from the boundary regions of each label. Then, the weight of each attribute is defined by evaluating its discernibility with respect to those heterogeneous sample pairs. Finally, a weighted HEOM distance is constructed and used within ML-KNN. Comprehensive experimental results on fourteen public multi-label datasets, including ten regular-scale and four larger-scale datasets, verify the effectiveness of the proposed framework relative to several state-of-the-art multi-label classification methods.
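A small NumPy sketch of the weighted HEOM distance. The attribute weights would come from the NRFD_RS discernibility analysis; here they are passed in directly, and HEOM's missing-value case is omitted:

```python
import numpy as np

def weighted_heom(x, y, weights, is_nominal, ranges):
    """Overlap distance for nominal attributes, range-normalized absolute
    difference for numeric ones; each squared term is scaled by its weight."""
    d = np.empty(len(x))
    for j in range(len(x)):
        if is_nominal[j]:
            d[j] = 0.0 if x[j] == y[j] else 1.0
        else:
            d[j] = abs(x[j] - y[j]) / ranges[j]
    return float(np.sqrt(np.sum(weights * d ** 2)))

x = np.array([1.0, 0.0, 5.0])
y = np.array([3.0, 1.0, 5.0])
print(weighted_heom(x, y,
                    weights=np.array([0.5, 1.0, 2.0]),
                    is_nominal=[False, True, False],
                    ranges=np.array([10.0, 1.0, 8.0])))  # ~1.01
```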

18.
[Purpose/Significance] Automatically generating high-quality tags for doctors on online medical consultation platforms better supports the classification, retrieval, and management of doctor resources. [Method/Process] Based on online consultation texts, we propose an automatic doctor-tag generation algorithm that combines time-period features with textual topic features. First, keywords are extracted from doctor-related texts to generate candidate tags; then LDA topic models are trained on both patient-question texts and doctor-answer texts, the topic features of the two text types are mined by time period, and the candidate tags are quality-controlled accordingly; finally, the tags are weighted and merged to produce the final doctor tags. [Result/Conclusion] Experimental results show that the algorithm reflects the dynamics of doctor-tag generation and accurately produces high-quality tags consistent with doctors' professional expertise, achieving good tag-generation performance.
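A minimal gensim sketch of the LDA-based quality-control step, assuming tokenized question or answer texts from one time period. The scoring rule (a candidate tag's maximum in-topic probability) and the toy tokens are our illustration, not the paper's exact weighting scheme:

```python
from gensim import corpora
from gensim.models import LdaModel

docs = [["hypertension", "blood", "pressure"],   # toy tokenized texts
        ["diabetes", "insulin", "glucose"],
        ["blood", "pressure", "medication"]]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)

def tag_score(tag):
    """Maximum probability the candidate tag receives in any learned topic;
    low-scoring candidates would be filtered out during quality control."""
    if tag not in dictionary.token2id:
        return 0.0
    tid = dictionary.token2id[tag]
    return max(dict(lda.get_topic_terms(k, topn=len(dictionary))).get(tid, 0.0)
               for k in range(lda.num_topics))

print(tag_score("pressure"))
```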
