Similar Documents
1.
Log parsing is a critical task that converts unstructured raw logs into structured data for downstream tasks. Existing methods often rely on manual string-matching rules to extract template tokens, which limits their adaptability across log datasets. To address this issue, we propose an automated log parsing method, PVE, which leverages a Variational Auto-Encoder (VAE) to build a semi-supervised model for categorizing log tokens. Inspired by the observation that log template tokens often consist of words, we choose common words and their combinations as training data to enhance the diversity of the structure features of template tokens. Specifically, PVE constructs two types of embedding vectors, the sum embedding and the n-gram embedding, for each word and word combination. The structure features of template tokens can be learned by training the VAE on these embeddings. During parsing, PVE categorizes a token as a template token if it is similar to the training data. To improve efficiency, we use the average similarity between the token embedding and VAE samples to determine the token type, rather than the reconstruction error. Evaluations on 16 real-world log datasets demonstrate that our method achieves an average accuracy of 0.878, outperforming comparison methods in both parsing accuracy and adaptability.
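A minimal sketch of the similarity test described in the abstract, assuming the VAE has already been trained and sampled; the function name and the 0.5 threshold are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def is_template_token(token_embedding, vae_samples, threshold=0.5):
    """Return True if the token looks like a template token, judged by its
    average cosine similarity to embeddings sampled from the trained VAE
    (rather than by reconstruction error). The 0.5 threshold is a
    hypothetical tuning knob, not the paper's value."""
    sims = [
        np.dot(token_embedding, s)
        / (np.linalg.norm(token_embedding) * np.linalg.norm(s))
        for s in vae_samples
    ]
    return float(np.mean(sims)) >= threshold
```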

2.
Traditional Information Retrieval (IR) models assume that the index terms of queries and documents are statistically independent of each other, which is intuitively wrong. This paper proposes incorporating the lexical and syntactic knowledge generated by a POS tagger and a syntactic chunker into traditional IR similarity measures in order to capture this dependency information between terms. Our proposal is based on theories of discourse structure: documents and queries are segmented into sentences and entities, so we measure dependencies between entities instead of between terms, and we handle discourse references for each entity. The approach has been evaluated on Spanish and English corpora, as well as on Question Answering tasks, obtaining significant improvements.

3.
This paper is an interim report on our efforts at NIST to construct an information discovery tool through the fusion of hypertext and information retrieval (IR) technologies. The tool works by parsing a contiguous document base into smaller documents and inserting semantic links between these documents using document–document similarity measures based on IR techniques. The focus of the paper is a case study in which domain experts evaluate the utility of the tool in the performance of information discovery tasks on a large, dynamic procedural manual. The results of the case study are discussed, and their implications for the design of large-scale automatic hypertext generation systems are described.
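A minimal sketch of this style of automatic link insertion, with TF-IDF plus cosine similarity standing in for the paper's IR similarity measure; the 0.3 threshold is an illustrative choice:

```python
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_links(sections, threshold=0.3):
    """Return (i, j) index pairs of document sections that are similar
    enough to connect with a hypertext link."""
    vectors = TfidfVectorizer().fit_transform(sections)
    sims = cosine_similarity(vectors)
    return [(i, j) for i, j in combinations(range(len(sections)), 2)
            if sims[i, j] >= threshold]
```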

4.
How to parse a human image to obtain the text labels corresponding to the human body is a critical task for human-computer interaction. Although previous methods have significantly improved parsing performance, the problems of parsing confusion and missed tiny targets remain unresolved, leading to erroneous and incomplete inferences. To address these drawbacks, we fuse semantic and spatial features to mine human body information with a Dual Pyramid Unit convolutional neural network, named DPUNet. DPUNet is composed of a Context Pyramid Unit (CPU) and a Spatial Pyramid Unit (SPU). First, we design the CPU to aggregate local-to-global semantic information, producing semantic features that eliminate semantic confusion. To capture tiny targets and prevent details from being missed, the SPU is proposed to incorporate multi-scale spatial information and output spatial features. Finally, the features of the two complementary units are fused for accurate and complete human parsing results. Our approach outperforms state-of-the-art methods on both single-human and multiple-human parsing datasets, while running efficiently at 41.2 fps.

5.
This paper focuses on extracting temporal and parent–child relationships between news events in social news. Previous methods have shown that syntactic features are effective. However, most of them directly use the static output of syntactic parsing tools, so task-irrelevant or erroneous parses inevitably degrade model performance. In addition, many implicit higher-order connections that are directly related and critical to the task are not explicitly exploited. In this paper, we propose a novel syntax-based dynamic latent graph model (SDLG) for this task. Specifically, we first apply a syntactic type-enhanced attention mechanism to assign different weights to different connections in the parsing results, which helps filter out noisy connections and better fuse the information in the syntactic structures. Next, we introduce a dynamic event-pair-aware induction graph to mine task-related latent connections. It constructs a potential attention matrix to complement and correct the supervised syntactic features, using the semantics of the event pairs as a guide. Finally, the latent graph, together with the syntactic information, is fed into a graph convolutional network to obtain an improved representation of each event and complete the relational reasoning. We have conducted extensive experiments on four public benchmarks: MATRES, TCR, HiEve and TB-Dense. The results show that our model outperforms the state-of-the-art by 0.4%, 1.5%, 3.0% and 1.3% in F1 score on the four datasets, respectively. Finally, we provide detailed analyses showing the effectiveness of each proposed component.
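One plausible reading of the type-enhanced attention step, sketched below; the array names, shapes, and additive-bias design are our assumptions for illustration, not the paper's exact mechanism:

```python
import numpy as np

def type_enhanced_attention(arc_scores, dep_type_ids, type_bias):
    """Shift each dependency arc's raw attention score by a learned bias
    for its syntactic relation type, then softmax-normalize each row.
    arc_scores:   (n, n) raw scores between tokens
    dep_type_ids: (n, n) integer relation-type id per arc
    type_bias:    (num_types,) learned per-type weights"""
    scores = arc_scores + type_bias[dep_type_ids]
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```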

6.
7.
Within the context of Information Extraction (IE), relation extraction is oriented towards identifying a variety of relation phrases and their arguments in arbitrary sentences. In this paper, we present a clause-based framework for information extraction in textual documents. Our framework focuses on two important challenges in information extraction: 1) Open Information Extraction (OIE), and 2) Relation Extraction (RE). Across the large body of research that uses syntactic and dependency parsing to detect relations, there is increasing evidence of incoherent and uninformative extractions; the extracted relations may even be erroneous at times and fail to provide a meaningful interpretation. In our work, we use the English clause structure and clause types to generate propositions that can be treated as extractable relations. Moreover, we propose refinements to the grammatical structure produced by syntactic and dependency parsing that help reduce the number of incoherent and uninformative extractions from clauses. In our experiments, in both the open information extraction and relation extraction settings, we carefully evaluate our system on various benchmark datasets and compare its performance against existing state-of-the-art information extraction systems, showing improved performance over those techniques.
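A rough sketch of clause-style (subject, verb, object) extraction from a dependency parse, assuming spaCy and its small English model are installed; the real framework uses clause types and grammatical refinements well beyond this:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def clause_triples(text):
    """Extract naive (subject, verb, object) propositions per sentence
    by walking the dependency children of each verb."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children
                            if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children
                           if c.dep_ in ("dobj", "attr", "pobj")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, token.lemma_, o.text))
    return triples

print(clause_triples("The parser extracts relations from arbitrary sentences."))
```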

8.
This paper presents a probabilistic information retrieval framework in which the retrieval problem is formally treated as a statistical decision problem. In this framework, queries and documents are modeled using statistical language models, user preferences are modeled through loss functions, and retrieval is cast as a risk minimization problem. We discuss how this framework can unify existing retrieval models and accommodate systematic development of new retrieval models. As an example of using the framework to model non-traditional retrieval problems, we derive retrieval models for subtopic retrieval, which is concerned with retrieving documents to cover many different subtopics of a general query topic. These new models differ from traditional retrieval models in that they relax the traditional assumption of independent relevance of documents.
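For concreteness, the central quantity in this style of framework can be written as an expected-loss (risk) ranking over documents; the display below is the standard risk-minimization formulation with our own choice of symbols, not copied from the paper:

```latex
% Return the document minimizing the expected loss (risk), where
% \theta_Q and \theta_D are the query and document language models
% and L is the user-preference loss function.
d^{*} = \arg\min_{d}
  \int \! \int L(\theta_Q, \theta_D)\,
      p(\theta_Q \mid q)\, p(\theta_D \mid d)\,
      d\theta_Q\, d\theta_D
```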

9.
吴天乐 《科教文汇》2014,(2):109-109,111
Multimodal discourse analysis holds that discourse analysis should not take natural language as its only object of study, but should also attend to images, color, sound, gesture, and the other semiotic systems that construct meaning. Based on the three metafunctions of Halliday's systemic functional grammar, Kress & van Leeuwen proposed the first systematic and comprehensive framework of visual grammar, which discusses how images in multimodal texts express meaning in terms of representational, interactive, and compositional meaning. In-depth analysis supports the conclusion that visual grammar is highly feasible and practical for analyzing the image meaning of multimodal texts: it not only broadens the field of multimodal discourse analysis and improves readers' discourse literacy, but also helps designers select and coordinate the various modes scientifically so as to achieve the best design effect.

10.
We study the selection of transfer languages for different Natural Language Processing tasks, specifically sentiment analysis, named entity recognition and dependency parsing. To select an optimal transfer language, we propose utilizing linguistic similarity metrics to measure the distance between languages, and basing the choice of transfer language on this information rather than on intuition. We demonstrate that linguistic similarity correlates with cross-lingual transfer performance for all of the proposed tasks. We also show that choosing the optimal language, rather than English, as the transfer source makes a statistically significant difference. This allows us to select a more suitable transfer language that better leverages knowledge from high-resource languages to improve the performance of language applications lacking data. For the study, we used datasets in eight languages from three language families.
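A minimal sketch of distance-based transfer-language selection. The binary typological feature vectors below are made up for illustration (real work would use resources such as URIEL-style features), and cosine distance stands in for whichever similarity metric is chosen:

```python
import numpy as np

# Hypothetical typological feature vectors; the values are invented.
features = {
    "eng": np.array([1, 0, 1, 1, 0], dtype=float),
    "deu": np.array([1, 0, 1, 0, 0], dtype=float),
    "fin": np.array([0, 1, 0, 0, 1], dtype=float),
}

def linguistic_distance(a, b):
    """Cosine distance between two languages' feature vectors."""
    va, vb = features[a], features[b]
    return 1 - va.dot(vb) / (np.linalg.norm(va) * np.linalg.norm(vb))

target = "fin"
candidates = ["eng", "deu"]
# Pick the typologically closest candidate as the transfer source.
print(min(candidates, key=lambda lang: linguistic_distance(lang, target)))
```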

11.
Numerous feature-based models have been proposed recently by the information retrieval community. The ability of features to express different relevance facets (query- or document-dependent) can explain this success story. Such models are usually supervised, thus requiring a learning phase. To leverage the advantages of feature-based representations of documents without supervision, we propose TournaRank, an unsupervised approach inspired by the principles of real-life games and sport competitions. Documents compete against each other in tournaments, using features as evidence of relevance. A tournament is modeled as a sequence of matches in which pairs of documents play their features in turn. Once a tournament ends, documents are ranked according to the number of matches they won. This principle is generic, since it can be applied to any collection type, and flexible, since different alternatives can be obtained by changing the tournament type, the match rules, the feature set, or the strategies documents adopt during matches. TournaRank was experimented on several collections to evaluate the model in different contexts and to compare it with related approaches such as Learning to Rank and fusion methods: the TREC Robust2004 collection for homogeneous documents, the TREC Web2014 (ClueWeb12) collection for heterogeneous web documents, and the LETOR3.0 collection for comparison with supervised feature-based models.
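A minimal sketch of the tournament principle under one simple instantiation of the match rules (higher feature value wins a round; more round wins takes the match); the paper considers other tournament types, rules, and strategies:

```python
import random

def match(doc_a, doc_b, feature_order):
    """One match: the documents play their feature values in turn; the
    document winning more rounds wins the match (0 = doc_a, 1 = doc_b)."""
    wins_a = sum(doc_a[f] > doc_b[f] for f in feature_order)
    wins_b = sum(doc_b[f] > doc_a[f] for f in feature_order)
    return 0 if wins_a >= wins_b else 1

def tournament(docs, features, rounds=3):
    """Round-robin tournament; rank document indices by matches won."""
    wins = [0] * len(docs)
    for _ in range(rounds):
        order = random.sample(features, len(features))  # shuffle feature order
        for i in range(len(docs)):
            for j in range(i + 1, len(docs)):
                winner = (i, j)[match(docs[i], docs[j], order)]
                wins[winner] += 1
    return sorted(range(len(docs)), key=lambda i: -wins[i])

# Documents as feature dicts, e.g. a retrieval score and a link-based score.
docs = [{"bm25": 2.1, "pagerank": 0.4}, {"bm25": 1.7, "pagerank": 0.9}]
print(tournament(docs, ["bm25", "pagerank"]))
```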

12.
Multi-document discourse parsing aims to automatically identify the relations among textual spans from different texts on the same topic. With the growing amount of information and the emergence of new technologies that deal with many sources of information, more precise and efficient parsing techniques are required. The theory most relevant to multi-document relationships, Cross-document Structure Theory (CST), has been used for parsing before, though with unsatisfactory results. CST has been widely criticized for its subjectivity, which may lead to low annotation agreement and, consequently, to poor parsing performance. In this work, we propose a refinement of the original CST that consists in (i) formalizing the relationship definitions, (ii) pruning and combining some relations based on their meaning, and (iii) organizing the relations in a hierarchical structure. The hypothesis is that this refinement will lead to better annotation agreement and consequently to better parsing results. To this end, we built a corpus annotated according to the refinement and observed an improvement in annotation agreement. Based on this corpus, a parser was developed using machine learning techniques and hand-crafted rules; in particular, hierarchical techniques were used to capture the hierarchical organization of the relations under the proposed refinement. These two approaches were used to identify the relations among text spans and to generate the multi-document annotation structure. The results outperform other CST parsers, showing the adequacy of the proposed refinement of the theory.

13.
Information extraction is one of the important tasks in the field of Natural Language Processing (NLP). Most existing methods focus on general texts, and little attention has been paid to information extraction in specialized domains such as legal texts. This paper explores information extraction in the legal field, aiming to extract evidence information from court record documents (CRDs). In the general domain, entities and relations are mostly words and phrases, so they do not span multiple sentences. In contrast, evidence information in CRDs may span multiple sentences, a situation existing models cannot handle. To address this issue, we first add a classification task alongside the extraction task. We then formulate the two tasks as a multi-task learning problem and present a novel end-to-end model to address them jointly. The joint model adopts a shared encoder followed by separate decoders for the two tasks. Experimental results on the dataset show the effectiveness of the proposed model, which obtains an F1 score of 72.36%, outperforming previous methods and strong baselines by a large margin.
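A minimal sketch of the shared-encoder/separate-decoder layout described above; the BiLSTM encoder, layer sizes, and head names are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class JointEvidenceModel(nn.Module):
    """Multi-task model: one shared encoder, one decoder per task."""
    def __init__(self, vocab_size, embed_dim=128, hidden=128,
                 num_tags=5, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.extract_head = nn.Linear(2 * hidden, num_tags)      # token tagging
        self.classify_head = nn.Linear(2 * hidden, num_classes)  # sequence class

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        tag_logits = self.extract_head(states)             # extraction task
        class_logits = self.classify_head(states.mean(1))  # classification task
        return tag_logits, class_logits
```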

14.
Subjectivity detection is a natural language processing task that aims to remove 'factual' or 'neutral' content, i.e., objective text that does not contain any opinion, from online product reviews. Such a pre-processing step is crucial for increasing the accuracy of sentiment analysis systems, as these are usually optimized for the binary task of distinguishing between positive and negative content. In this paper, we extend the extreme learning machine (ELM) paradigm to a novel framework that exploits the features of both Bayesian networks and fuzzy recurrent neural networks to perform subjectivity detection. In particular, Bayesian networks are used to build a network of connections among the hidden neurons of the conventional ELM configuration in order to capture dependencies in high-dimensional data. A fuzzy recurrent neural network then inherits the overall structure generated by the Bayesian networks to model temporal features in the predictor. Experimental results confirm the ability of the proposed framework to handle standard subjectivity detection problems and also demonstrate its portability across languages in translation tasks.
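For orientation, a vanilla ELM core is sketched below: random input weights and a closed-form least-squares solution for the output weights. The Bayesian-network wiring among hidden neurons and the fuzzy recurrent layer from the paper are omitted; this shows only the base paradigm being extended:

```python
import numpy as np

def train_elm(X, y, hidden=64, seed=0):
    """Train a basic ELM: random hidden layer, pseudo-inverse readout."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)          # random hidden-layer features
    beta = np.linalg.pinv(H) @ y    # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```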

15.
Research on patent relevance retrieval and its integrated application
Abstract: This paper reviews the current state and characteristics of patent analysis techniques and specialized application systems, and proposes a patent model tree for describing patent documents. On the basis of the patent model tree, it establishes a patent classification method and a patent similarity retrieval method based on the vector space model. Using these methods, a patent management system is integrated into a workflow management system: an integration framework is established, an integrated system is developed, and every work unit in the enterprise workflow is connected to the patent similarity retrieval module. Finally, the approach is applied in the workflow system for electric retarder design at an enterprise.
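A minimal sketch of vector-space-model similarity retrieval as described above; the patent snippets and query are hypothetical placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patents = [
    "electric retarder rotor braking torque control",
    "eddy current brake stator winding cooling",
    "workflow management task routing engine",
]
query = "braking torque control for an electric retarder"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(patents)   # one TF-IDF vector per patent
query_vector = vectorizer.transform([query])

# Rank patents by cosine similarity to the query, highest first.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.3f}  {patents[idx]}")
```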

16.
Three-way opinion classification (3WOC) models are based on a human perspective of opinion classification and offer human-like decision-making capabilities. The purpose of this study was to determine the effectiveness of a three-way decision-making framework with multiple features (fuzzy features and semantic features) in simulating human judgement of opinions. This was a quantitative study. A simple prototype of the three-way decision model was run against the Amazon Musical Instrument dataset to evaluate the model. The data used to verify the results were collected from 125 respondents via an online survey; participants tested the model in context, then immediately filled in the online questionnaire. Results show that the statistical correlation between semantic features and fuzzy features is low; therefore, classification coverage and accuracy increase when both types of features are used together rather than either type alone. With the integration of semantic and fuzzy features, we found that our three-way decision model performs better than a two-way classification model. Furthermore, the 3WOC model simulates the judgements people execute when making decisions. Finally, we offer usability recommendations based on our analysis. A three-way decision-making framework is a better solution than a two-way decision model for simulating human judgement in opinion classification. These outcomes will help in developing better opinion classification systems that support businesses and organisations in making strategic plans to improve their products or services based on customer preference patterns.
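The three-way decision idea, in contrast to a binary classifier, adds a boundary region for deferred judgement. A minimal sketch, with illustrative thresholds rather than the paper's tuned values:

```python
def three_way_decision(p_positive, alpha=0.7, beta=0.3):
    """Classify an opinion as positive, negative, or boundary (defer).
    The boundary region mimics a human withholding judgement until
    more evidence (e.g. fuzzy or semantic features) is available."""
    if p_positive >= alpha:
        return "positive"
    if p_positive <= beta:
        return "negative"
    return "boundary"

print(three_way_decision(0.85))  # positive
print(three_way_decision(0.50))  # boundary: gather more features
```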

17.
With the advent of various Semantic Web services and applications, semantic annotation has emerged as an important research topic. Semantically annotated ontologies have proven useful in numerous information processing and retrieval tasks. One such task is product design, where an annotated ontology can support many applications critical to design work. However, ontology development in design engineering remains time-consuming and tedious, demanding considerable human effort. In the context of product family design, managing product information with efficient indexing, updating, navigation, search and retrieval across product families is both desirable and challenging. For instance, an efficient way of retrieving timely information on a product family is useful for tasks such as product family redesign and new product variant derivation when requirements change. However, current research on information search and navigation in product families is mostly limited to the structural aspect, which is insufficient for advanced information search, especially when a query targets multiple aspects of a product. This paper addresses the problem by proposing an information search and retrieval framework based on a semantically annotated multi-facet product family ontology. In particular, we propose a document profile (DP) model to suggest semantic tags for annotation. Using a case study of digital camera families, we illustrate how faceted search and retrieval of product information can be accomplished, and we show how new product variants can be derived from a designer's query of requirements via faceted search and retrieval of product family information. Finally, to highlight the value of this work, we briefly discuss further research and applications in design decision support, e.g. commonality analysis and variety comparison, based on the semantically annotated multi-facet product family ontology.

18.
The classical probabilistic models attempt to capture the ad hoc information retrieval problem within a rigorous probabilistic framework. It has long been recognized that the primary obstacle to the effective performance of the probabilistic models is the need to estimate a relevance model. The Dirichlet compound multinomial (DCM) distribution based on the Polya Urn scheme, which can also be considered as a hierarchical Bayesian model, is a more appropriate generative model than the traditional multinomial distribution for text documents. We explore a new probabilistic model based on the DCM distribution, which enables efficient retrieval and accurate ranking. Because the DCM distribution captures the dependency of repetitive word occurrences, the new probabilistic model based on this distribution is able to model the concavity of the score function more effectively. To avoid the empirical tuning of retrieval parameters, we design several parameter estimation algorithms to automatically set model parameters. Additionally, we propose a pseudo-relevance feedback algorithm based on the mixture modeling of the Dirichlet compound multinomial distribution to further improve retrieval accuracy. Finally, our experiments show that both the baseline probabilistic retrieval algorithm based on the DCM distribution and the corresponding pseudo-relevance feedback algorithm outperform the existing language modeling systems on several TREC retrieval tasks. The main objective of this research is to develop an effective probabilistic model based on the DCM distribution. A secondary objective is to provide a thorough understanding of the probabilistic retrieval model by a theoretical understanding of various text distribution assumptions.
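For reference, the standard DCM (Polya urn) likelihood of a document with word-count vector x, where n is the total count, is the textbook form below (our transcription, not taken from the paper); the Gamma-function ratios are what capture the burstiness of repeated word occurrences:

```latex
% x_w = count of word w in the document, n = \sum_w x_w,
% \alpha_w = Dirichlet hyperparameter for word w.
p(\mathbf{x} \mid \boldsymbol{\alpha})
  = \frac{n!}{\prod_{w} x_w!}
    \cdot \frac{\Gamma\!\left(\sum_{w} \alpha_w\right)}
               {\Gamma\!\left(n + \sum_{w} \alpha_w\right)}
    \prod_{w} \frac{\Gamma(x_w + \alpha_w)}{\Gamma(\alpha_w)}
```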

19.
Graph neural networks (GNNs) have emerged as a new state of the art for learning knowledge graph representations. Although they have shown impressive performance in recent studies, how to aggregate neighboring features efficiently and effectively remains an open design question. To tackle this challenge, we propose the simplifying heterogeneous graph neural network (SHGNet), a generic framework that discards two standard GNN operations: the transformation matrix and the nonlinear activation. SHGNet adopts only the essential neighborhood-aggregation component of a GNN and incorporates relation features into feature propagation. Furthermore, to capture complex structures, SHGNet uses a hierarchical aggregation architecture comprising node aggregation and relation weighting, so the model can treat each relation differently and selectively aggregate informative features. SHGNet has been evaluated on link prediction over three real-world benchmark datasets. The experimental results show that SHGNet significantly improves efficiency while maintaining superior performance, outperforming all existing models in 3 out of 4 metrics on NELL-995 and in 4 out of 4 metrics on the FB15k-237 dataset.
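A sketch of one parameter-light aggregation step in the spirit of this design: no transformation matrix and no nonlinearity, relation features folded into the message, and a learned per-relation weight. All names and shapes are our assumptions for illustration:

```python
import numpy as np

def shg_aggregate(node_feats, rel_feats, edges, rel_weight):
    """Aggregate neighbor features into each node without a weight
    matrix or activation; each relation r contributes its feature
    vector and a learned scalar weight rel_weight[r]."""
    dim = len(next(iter(node_feats.values())))
    out = {v: np.zeros(dim) for v in node_feats}
    for u, r, v in edges:                      # directed edge u --r--> v
        msg = node_feats[u] + rel_feats[r]     # message carries relation info
        out[v] += rel_weight[r] * msg          # weighted linear aggregation
    return out
```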

20.
In this paper, a new robust relevance model is proposed that can be applied to both pseudo and true relevance feedback in the language-modeling framework for document retrieval. There are at least three main differences between our new relevance model and previous ones. First, the proposed model brings the original query back into the relevance model by treating it as a short, special document, alongside a number of top-ranked documents returned from the first-round retrieval for pseudo feedback, or a number of relevant documents for true relevance feedback. Second, instead of using a uniform prior, as in the original relevance model proposed by Lavrenko and Croft, documents are assigned different priors according to their lengths (in terms) and their ranks in the first-round retrieval. Third, the probability of a term in the relevance model is further adjusted by its probability in a background language model. In both the pseudo and true relevance cases, we compare the performance of our model to two baselines: the original relevance model and a linear combination model. Our experimental results show that the proposed model outperforms both baselines in terms of mean average precision.
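One way to write the resulting estimate, in our own notation rather than the paper's: F is the feedback document set, q the query treated as a pseudo-document, and f(·) the length- and rank-based prior; the third modification (background-model adjustment of term probabilities) is omitted here:

```latex
p(w \mid R) \;\propto\;
  \sum_{d \in F \cup \{q\}} p(w \mid d)\, p(d)\, p(q \mid d),
\qquad
  p(d) \propto f\big(\mathrm{len}(d), \mathrm{rank}(d)\big)
```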
