20 similar documents found; search time: 109 ms
1.
A Survey of Ontology Representation Language Transformation Techniques    Cited: 1 (self-citations: 0, by others: 1)
Building on an overview of the development of ontology representation languages, this paper analyzes eight commonly used ontology representation languages in terms of their principal elements and reasoning mechanisms. It examines the syntactic and semantic issues of ontology representation languages and argues that transformation between representation languages is a prerequisite for ontology integration, sharing, and application. Transformation is divided into two parts, syntactic conversion and semantic conversion, and existing transformation models and tools are introduced and compared. Finally, open problems and future research directions are discussed.
6.
Query Translation Methods in Cross-Language Information Retrieval and Their Research Progress    Cited: 10 (self-citations: 0, by others: 10)
This paper introduces the three basic approaches to cross-language text retrieval: query translation, document translation, and no translation. It then discusses the fundamental issues and research progress of query translation, currently the most widely used approach, and concludes with a summary of the state of the art and emerging trends in cross-language information retrieval.
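The query-translation approach described in this entry can be sketched as a dictionary lookup over query terms; the bilingual dictionary below is hypothetical toy data, not drawn from any system mentioned above (real CLIR systems use large machine-readable dictionaries or translation models):

```python
# Hypothetical toy bilingual dictionary for illustration only.
BILINGUAL_DICT = {
    "information": ["informacion"],
    "retrieval": ["recuperacion", "busqueda"],
}

def translate_query(query_terms, dictionary):
    """Dictionary-based query translation: replace each source-language
    term with all its candidate translations; out-of-vocabulary terms
    (e.g. proper nouns) are kept unchanged as a common fallback."""
    translated = []
    for term in query_terms:
        translated.extend(dictionary.get(term, [term]))
    return translated

print(translate_query(["information", "retrieval", "CLEF"], BILINGUAL_DICT))
```

Keeping all candidate translations, as here, is the simplest disambiguation-free strategy; the survey's "basic issues" include how to weight or prune such candidates.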
7.
Research Progress in Natural Language Semantic Analysis    Cited: 5 (self-citations: 0, by others: 5)
Following the compositional levels of natural language — words, sentences, and discourse — this paper analyzes the scope, existing research strategies, theoretical foundations, and main methods of semantic analysis at each level, and contrasts the two dominant research strategies. Word-level semantic analysis determines word meaning and measures the semantic similarity or relatedness between two words; sentence-level analysis comprises sentence-meaning analysis and sentence-similarity analysis; text-level analysis identifies semantic information such as the meaning, topic, and category of a text. Current work follows two main strategies: semantic analysis based on knowledge or semantic rules, and semantic analysis based on statistics. Methods that integrate statistics with rules are expected to become the mainstream of natural language semantic analysis, and ontological semantics is an important foundation for it.
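Word-level similarity, one of the tasks this entry surveys, is commonly computed as the cosine between word vectors; a minimal sketch with toy 3-dimensional vectors (real systems use learned embeddings or knowledge-base measures, which this sketch only stands in for):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two equal-length dense vectors:
    dot(u, v) / (|u| * |v|), in [-1, 1] for real-valued vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Parallel vectors score near 1.0; orthogonal vectors score near 0.0.
print(cosine_similarity([1.0, 2.0, 0.0], [2.0, 4.0, 0.0]))
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))
```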
8.
In the mobile-internet era, mobile and fragmented reading have become the dominant modes of reading. Providing summaries to improve reading efficiency is one important way to address information overload. Scientific papers are long, wide-ranging, and rich in domain knowledge, which makes their summarization more challenging than that of ordinary texts such as news. This paper proposes a structured summarization method for scientific papers. First, a paper is segmented into rhetorical moves. Next, extractive summarization is applied to each move: multiple text features are weighted into the iterative computation of the TextRank algorithm, and the MMR (maximal marginal relevance) algorithm removes redundancy from the candidate summary set. Finally, dependency parsing is used for semantic analysis to further condense the summaries, which are then assembled into a structured summary. Experimental results show that, compared with baseline models, the method yields varying gains in relevance, diversity, and readability across the different moves; human evaluation further indicates that it markedly improves summary diversity while also improving relevance and readability to some extent.
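The MMR redundancy step in this entry can be sketched as a greedy selection that trades relevance against similarity to sentences already chosen; the λ weight, toy sentences, and similarity function below are illustrative assumptions, not the paper's actual configuration:

```python
def mmr_select(candidates, relevance, similarity, k, lam=0.5):
    """Greedy maximal-marginal-relevance selection: at each step pick the
    candidate maximizing lam * relevance - (1 - lam) * (max similarity
    to any sentence selected so far)."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def mmr_score(s):
            redundancy = max((similarity(s, t) for t in selected), default=0.0)
            return lam * relevance[s] - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy example: "s1" and "s2" are near-duplicates, so after picking "s1"
# the less relevant but novel "s3" wins the second slot.
rel = {"s1": 0.9, "s2": 0.8, "s3": 0.5}
sim = lambda a, b: 0.9 if {a, b} == {"s1", "s2"} else 0.1
print(mmr_select(["s1", "s2", "s3"], rel, sim, k=2))
```

The relevance scores here would come from the paper's feature-weighted TextRank pass; that coupling is the part this sketch abstracts away.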
9.
To reduce the impact of polysemy and of class imbalance in the training samples on classification performance, this paper proposes a Chinese text classification algorithm based on community detection in a semantic network. Text feature words are disambiguated against the Wikipedia knowledge base, and a semantic complex network is built over the training set to represent inter-document semantic relations. Combining node properties, the K-means algorithm then partitions the training set into communities to mitigate class imbalance; a document to be classified is assigned by finding its most similar community. Experimental results show that the proposed algorithm is feasible and achieves good classification performance.
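The final assignment step of this entry — classifying a document by its most similar community — can be sketched as a nearest-centroid match; the dense toy vectors below stand in for the paper's semantic-network representation and are an assumption for illustration:

```python
def dot(u, v):
    """Inner product of two equal-length dense vectors."""
    return sum(a * b for a, b in zip(u, v))

def classify_by_community(doc_vec, community_centroids):
    """Assign the document to the community whose centroid it matches
    best; the community's majority class would then label the document."""
    return max(community_centroids,
               key=lambda name: dot(doc_vec, community_centroids[name]))

# Hypothetical community centroids produced by the K-means partition.
centroids = {
    "sports": [0.9, 0.1, 0.0],
    "finance": [0.1, 0.8, 0.1],
}
print(classify_by_community([0.2, 0.7, 0.1], centroids))
```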
10.
Multimodal Semantic Association Feature Extraction and Representation for Scientific Literature    Cited: 1 (self-citations: 0, by others: 1)
Scientific literature is multimodal data: besides text, it contains rich information in images, tables, formulas, audio, video, and other modalities, which helps users fully understand the knowledge it conveys. This paper introduces multimodal thinking into the semantic representation of scientific literature, performing semantic analysis of the images, tables, and formulas in a paper and representing the document's semantic content jointly with its text. Through the semantic representation of, and the relations among, these multiple modalities, it refines the semantic representation of scientific-literature content and develops a representation framework that captures the multimodal nature of scientific-literature objects.
11.
Multilingual retrieval (querying of multiple document collections each in a different language) can be achieved by combining several individual techniques which enhance retrieval: machine translation to cross the language barrier, relevance feedback to add words to the initial query, decompounding for languages with complex term structure, and data fusion to combine monolingual retrieval results from different languages. Using the CLEF 2001 and CLEF 2002 topics and document collections, this paper evaluates these techniques within the context of a monolingual document ranking formula based upon logistic regression. Each individual technique yields improved performance over runs which do not utilize that technique. Moreover the techniques are complementary, in that combining the best techniques outperforms individual technique performance. An approximate but fast document translation using bilingual wordlists created from machine translation systems is presented and evaluated. The fast document translation is as effective as query translation in multilingual retrieval. Furthermore, when fast document translation is combined with query translation in multilingual retrieval, the performance is significantly better than that of query translation or fast document translation.
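The data-fusion technique this entry lists can be illustrated with CombSUM, a standard fusion method that sums each document's scores across runs; the toy runs below are hypothetical and scores are assumed pre-normalized:

```python
def comb_sum(runs):
    """CombSUM data fusion: sum each document's score over all runs and
    rank documents by the fused total (scores assumed pre-normalized)."""
    fused = {}
    for run in runs:
        for doc, score in run.items():
            fused[doc] = fused.get(doc, 0.0) + score
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical monolingual runs over English and German sub-collections.
run_en = {"d1": 0.9, "d2": 0.4}
run_de = {"d2": 0.8, "d3": 0.3}
print(comb_sum([run_en, run_de]))
```

Documents retrieved by several runs accumulate evidence, which is why fusion tends to reward cross-run agreement.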
13.
Multilingual information retrieval is generally understood to mean the retrieval of relevant information in multiple target languages in response to a user query in a single source language. In a multilingual federated search environment, different information sources contain documents in different languages. A general search strategy in such environments is to translate the user query into each language of the information sources and run a monolingual search in each information source. It is then necessary to obtain a single ranked document list by merging the individual ranked lists from the information sources that are in different languages. This is known as the results merging problem for multilingual information retrieval. Previous research has shown that the simple approach of normalizing source-specific document scores is not effective. On the other hand, a more effective merging method was proposed: download and translate all retrieved documents into the source language, then generate the final ranked list by running a monolingual search in the search client. The latter method is more effective but incurs substantial online communication and computation costs. This paper proposes an effective and efficient approach for the results merging task of multilingual ranked lists. In particular, it downloads only a small number of documents from the individual ranked lists of each user query to calculate comparable document scores, utilizing both the query-based translation method and the document-based translation method. Query-specific and source-specific transformation models can then be trained for the individual ranked lists using the information in these downloaded documents. These transformation models are used to estimate comparable document scores for all retrieved documents, so that the documents can be sorted into a final ranked list. This merging approach is efficient, as only a subset of the retrieved documents is downloaded and translated online. Furthermore, an extensive set of experiments on Cross-Language Evaluation Forum (CLEF) data has demonstrated the effectiveness of the query-specific and source-specific results merging algorithm against other alternatives. The new research in this paper proposes different variants of the query-specific and source-specific results merging algorithm with different transformation models, and provides thorough experimental results as well as detailed analysis. All of the work substantially extends the preliminary research in (Si and Callan, in: Peters (ed.) Results of the Cross-Language Evaluation Forum-CLEF 2005, 2005).
Hao Yuan
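In the simplest case, a score-transformation model of the kind this entry trains can be approximated by a least-squares linear fit from a list's raw scores to the comparable scores computed for the few downloaded documents; the linear form and toy scores below are illustrative assumptions, not the paper's exact model:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a * x + b on paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Raw source-specific scores of a few downloaded documents (xs) paired
# with their comparable scores (ys); the fit then rescales every score
# in that source's ranked list without downloading the remaining documents.
a, b = fit_linear([1.0, 2.0, 3.0], [3.0, 5.0, 7.0])
rescaled = [a * x + b for x in [0.5, 2.5]]
print(a, b, rescaled)
```

After each source's list is rescaled this way, the lists can be merged by a single global sort on the comparable scores.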
15.
This paper first analyzes relevant application cases to illustrate the potential value of multilingual domain ontologies in digital libraries. It then describes the characteristics of multilingual domain-ontology learning in the digital-library environment and, on that basis, presents a basic framework for multilingual domain-ontology learning oriented to digital-library applications. It goes on to describe several key technologies involved and our group's related work, and concludes with an outlook on future research.
16.
Application Mechanisms of Ontologies in Cross-Language Information Retrieval    Cited: 3 (self-citations: 1, by others: 2)
This paper explains the meaning of multilingual ontologies and the domain knowledge they correspond to in different languages, and analyzes how multilingual ontologies improve cross-language information retrieval in three respects: query expansion, semantic annotation, and concept-based indexing. By introducing the concept-alignment methods of the EuroWordNet and Cindor systems, it explores the mapping of multilingual ontology repositories — the key issue in applying ontologies to cross-language retrieval — and argues that using an interlingua for concept representation, with links to the vocabularies of different languages established through dictionary translation, is a sound approach to multilingual ontology mapping.
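The interlingua-based mapping idea in this entry can be sketched by linking the vocabularies of two languages through shared concept identifiers; all term-to-concept entries below are hypothetical toy data, not taken from EuroWordNet or Cindor:

```python
# Hypothetical term-to-interlingua-concept tables for two languages.
EN_TO_CONCEPT = {"library": "C001", "retrieval": "C002"}
ZH_TO_CONCEPT = {"图书馆": "C001", "检索": "C002"}

def link_terms(src_map, tgt_map):
    """Link each source-language term to all target-language terms that
    share its interlingua concept identifier."""
    by_concept = {}
    for term, cid in tgt_map.items():
        by_concept.setdefault(cid, []).append(term)
    return {term: by_concept.get(cid, []) for term, cid in src_map.items()}

print(link_terms(EN_TO_CONCEPT, ZH_TO_CONCEPT))
```

Because every language maps only to the interlingua, adding an n-th language requires one new table rather than pairwise mappings to all existing languages — the main appeal of the pivot design described above.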
17.
We present a system for multilingual information retrieval that allows users to formulate queries in their preferred language and retrieve relevant information from a collection containing documents in multiple languages. The system is based on a process of document level alignments, where documents of different languages are paired according to their similarity. The resulting mapping allows us to produce a multilingual comparable corpus. Such a corpus has multiple interesting applications. It allows us to build a data structure for query translation in cross-language information retrieval (CLIR). Moreover, we also perform pseudo relevance feedback on the alignments to improve our retrieval results. And finally, multiple retrieval runs can be merged into one unified result list. The resulting system is inexpensive, adaptable to domain-specific collections and new languages and has performed very well at the TREC-7 conference CLIR system comparison.
18.
For the purposes of classification it is common to represent a document as a bag of words. Such a representation consists of the individual terms making up the document together with the number of times each term appears in the document. All classification methods make use of the terms. It is common to also make use of the local term frequencies at the price of some added complication in the model. Examples are the naïve Bayes multinomial model (MM), the Dirichlet compound multinomial model (DCM) and the exponential-family approximation of the DCM (EDCM), as well as support vector machines (SVM). Although it is usually claimed that incorporating local word frequency in a document improves text classification performance, we here test whether such claims are true or not. In this paper we show experimentally that simplified forms of the MM, EDCM, and SVM models which ignore the frequency of each word in a document perform about at the same level as MM, DCM, EDCM and SVM models which incorporate local term frequency. We also present a new form of the naïve Bayes multivariate Bernoulli model (MBM) which is able to make use of local term frequency and show again that it offers no significant advantage over the plain MBM. We conclude that word burstiness is so strong that additional occurrences of a word essentially add no useful information to a classifier.
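The contrast this entry tests — raw local term frequencies versus a frequency-blind representation — can be sketched as two variants of the same feature extractor (a minimal sketch; the classifiers themselves are not shown):

```python
from collections import Counter

def bag_of_words(tokens, binary=False):
    """Term-frequency bag-of-words features; with binary=True every
    occurring term is capped at 1, discarding local term frequency —
    the simplification the study finds performs on par with raw counts."""
    counts = Counter(tokens)
    if binary:
        return {term: 1 for term in counts}
    return dict(counts)

tokens = ["word", "word", "burstiness"]
print(bag_of_words(tokens))
print(bag_of_words(tokens, binary=True))
```

The binary variant is exactly the information a multivariate Bernoulli model sees; the count variant is what the multinomial-family models consume.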