20 similar documents found for this query (search time: 468 ms)
1.
Query Expansion is commonly used in Information Retrieval to overcome vocabulary mismatch issues, such as synonymy between
the original query terms and a relevant document. In general, query expansion experiments exhibit mixed results. Overall TREC
Genomics Track results are also mixed; however, results from the top performing systems provide strong evidence supporting
the need for expansion. In this paper, we examine the conditions necessary for optimal query expansion performance with respect
to two system design issues: IR framework and knowledge source used for expansion. We present a query expansion framework
that improves Okapi baseline passage MAP performance by 185%. Using this framework, we compare and contrast the effectiveness
of a variety of biomedical knowledge sources used by TREC 2006 Genomics Track participants for expansion. Based on the outcome
of these experiments, we discuss the success factors required for effective query expansion with respect to various sources
of term expansion, such as corpus-based cooccurrence statistics, pseudo-relevance feedback methods, and domain-specific and
domain-independent ontologies and databases. Our results show that choice of document ranking algorithm is the most important
factor affecting retrieval performance on this dataset. In addition, when an appropriate ranking algorithm is used, we find
that query expansion with domain-specific knowledge sources provides an equally substantive gain in performance over a baseline
system.
Nicola Stokes
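As a minimal illustration of the pseudo-relevance feedback style of expansion discussed in this abstract (this is not the paper's actual framework; the toy corpus and parameter values are invented):

```python
from collections import Counter

def prf_expand(query_terms, top_docs, n_expansion=5):
    # Pseudo-relevance feedback: treat the top-ranked documents as relevant
    # and append their most frequent non-query terms to the query.
    counts = Counter()
    for doc in top_docs:
        counts.update(doc.split())
    candidates = [t for t, _ in counts.most_common() if t not in query_terms]
    return list(query_terms) + candidates[:n_expansion]

# Toy top-ranked documents (invented for illustration):
top_docs = [
    "gene expression profiling cancer",
    "cancer gene mutation analysis",
    "gene therapy cancer treatment",
]
print(prf_expand(["gene"], top_docs, n_expansion=2))
# → ['gene', 'cancer', 'expression']
```

Domain-specific knowledge sources replace the co-occurrence statistics here with curated term relationships, which is the comparison the paper carries out.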
2.
Index maintenance strategies employed by dynamic text retrieval systems based on inverted files can be divided into two categories:
merge-based and in-place update strategies. Within each category, individual update policies can be distinguished based on
whether they store their on-disk posting lists in a contiguous or in a discontiguous fashion. Contiguous inverted lists, in
general, lead to higher query performance, by minimizing the disk seek overhead at query time, while discontiguous inverted
lists lead to higher update performance, requiring less effort during index maintenance operations. In this paper, we focus
on retrieval systems with high query load, where the on-disk posting lists have to be stored in a contiguous fashion at all
times. We discuss a combination of re-merge and in-place index update, called Hybrid Immediate Merge. The method performs strictly better than the re-merge baseline policy used in our experiments, as it leads to the same query
performance, but substantially better update performance. The actual time savings achievable depend on the size of the text
collection being indexed; a larger collection results in greater savings. In our experiments, variations of Hybrid Immediate Merge were able to reduce the total index update overhead by up to 73% compared to the re-merge baseline.
Stefan Büttcher
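The contiguity requirement discussed above can be illustrated with a toy re-merge update. Hybrid Immediate Merge itself combines re-merge with in-place update, so this sketch only shows the baseline idea of rewriting each posting list as one contiguous sorted run (the index contents are invented):

```python
def remerge(on_disk, delta):
    # Toy re-merge update: combine the on-disk index with the in-memory
    # delta and rewrite every posting list as a single contiguous sorted
    # run, which is what keeps disk seek overhead low at query time.
    merged = {}
    for term in set(on_disk) | set(delta):
        merged[term] = sorted(set(on_disk.get(term, []) + delta.get(term, [])))
    return merged

old_index = {"retrieval": [1, 4], "index": [2]}
in_memory = {"retrieval": [7], "merge": [7]}
print(remerge(old_index, in_memory))
```

The cost of this baseline is that the entire on-disk index is rewritten on every merge; the paper's hybrid method reduces exactly that overhead.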
3.
Query structuring and expansion with two-stage term dependence for Japanese web retrieval (cited 1 time: 1 self-citation, 0 by others)
In this paper, we propose a new term dependence model for information retrieval, which is based on a theoretical framework
using Markov random fields. We assume two types of dependencies of terms given in a query: (i) long-range dependencies that
may appear for instance within a passage or a sentence in a target document, and (ii) short-range dependencies that may appear
for instance within a compound word in a target document. Based on this assumption, our two-stage term dependence model captures
both long-range and short-range term dependencies differently when more than one compound word appears in a query. We also
investigate how query structuring with term dependence can improve the performance of query expansion using a relevance model.
The relevance model is constructed using the retrieval results of the structured query with term dependence to expand the
query. We show that our term dependence model works well, particularly when using query structuring with compound words, through
experiments using a 100-gigabyte test collection of web documents mostly written in Japanese. We also show that the performance
of the relevance model can be significantly improved by using the structured query with our term dependence model.
Koji Eguchi
4.
Precision prediction based on ranked list coherence (cited 1 time: 0 self-citations, 1 by others)
We introduce a statistical measure of the coherence of a list of documents called the clarity score. Starting with a document list ranked by the query-likelihood retrieval model, we demonstrate the score's relationship to query ambiguity with respect to the collection. We also show that the clarity score is correlated with the average precision of a query and lay the groundwork for useful predictions by discussing a method of setting decision thresholds automatically. We then show that passage-based clarity scores correlate with average-precision measures of ranked lists of passages, where a passage is judged relevant if it contains correct answer text, which extends the basic method to passage-based systems. Next, we introduce variants of document-based clarity scores to improve the robustness, applicability, and predictive ability of clarity scores. In particular, we introduce the ranked list clarity score that can be computed with only a ranked list of documents, and the weighted clarity score where query terms contribute more than other terms. Finally, we show an approach to predicting queries that perform poorly on query expansion that uses techniques expanding on the ideas presented earlier.
W. Bruce Croft
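A rough sketch of the document-based clarity score described above, with the query language model crudely estimated from the retrieved documents (the toy collection is invented and the paper's estimation details differ):

```python
import math
from collections import Counter

def language_model(docs):
    # Maximum-likelihood unigram model over a set of documents.
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

def clarity(retrieved, collection):
    # Clarity = KL divergence between the query language model (here
    # crudely estimated from the retrieved documents) and the collection
    # model; higher values indicate a more focused, less ambiguous list.
    p_q, p_c = language_model(retrieved), language_model(collection)
    return sum(p * math.log2(p / p_c[w]) for w, p in p_q.items())

collection = ["apple pie recipe", "apple fruit orchard",
              "stock market news", "market crash report"]
focused = collection[:2]                 # documents about a single topic
print(clarity(focused, collection))      # > 0: focused list diverges from collection
print(clarity(collection, collection))   # 0: no divergence at all
```

The decision-threshold and passage-based variants in the paper build on this same divergence quantity.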
5.
Towards enhancing retrieval effectiveness of search engines for diacritisized Arabic documents (cited 1 time: 1 self-citation, 0 by others)
Bassam H. Hammo. Information Retrieval, 2009, 12(3): 300-323.
6.
Fernando Diaz. Information Retrieval, 2007, 10(6): 531-562.
We adapt the cluster hypothesis for score-based information retrieval by claiming that closely related documents should have
similar scores. Given a retrieval from an arbitrary system, we describe an algorithm which directly optimizes this objective
by adjusting retrieval scores so that topically related documents receive similar scores. We refer to this process as score
regularization. Because score regularization operates on retrieval scores, regardless of their origin, we can apply the technique
to arbitrary initial retrieval rankings. Document rankings derived from regularized scores, when compared to rankings derived
from un-regularized scores, consistently and significantly result in improved performance given a variety of baseline retrieval
algorithms. We also present several proofs demonstrating that regularization generalizes methods such as pseudo-relevance
feedback, document expansion, and cluster-based retrieval. Because of these strong empirical and theoretical results, we argue
for the adoption of score regularization as general design principle or post-processing step for information retrieval systems.
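The idea of adjusting scores so that topically related documents score similarly can be sketched as an iterative neighbour-averaging scheme. This is an illustrative stand-in, not the paper's regularization objective; the similarity matrix and scores are invented:

```python
def regularize(scores, sim, alpha=0.5, iters=20):
    # Illustrative score regularization: each score is pulled toward the
    # similarity-weighted average of its neighbours' scores while staying
    # anchored to the original retrieval score.
    current = list(scores)
    for _ in range(iters):
        updated = []
        for i in range(len(scores)):
            z = sum(sim[i])
            if z == 0:                       # isolated document: keep as-is
                updated.append(scores[i])
                continue
            neighbour = sum(sim[i][j] * current[j] for j in range(len(scores))) / z
            updated.append(alpha * neighbour + (1 - alpha) * scores[i])
        current = updated
    return current

# Docs 0 and 1 are topically related; doc 2 is isolated (toy data).
scores = [1.0, 0.0, 0.5]
sim = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
print(regularize(scores, sim))
```

The related documents' scores converge toward each other (here to 2/3 and 1/3) while the isolated document keeps its original score, which is the qualitative behaviour the abstract describes.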
7.
Diego Reforgiato Recupero. Information Retrieval, 2007, 10(6): 563-579.
Text document clustering provides an effective and intuitive navigation mechanism to organize a large amount of retrieval
results by grouping documents in a small number of meaningful classes. Many well-known methods of text clustering make use
of a long list of words as vector space which is often unsatisfactory for a couple of reasons: first, it keeps the dimensionality
of the data very high, and second, it ignores important relationships between terms like synonyms or antonyms. Our unsupervised
method solves both problems by using ANNIE and WordNet lexical categories and WordNet ontology in order to create a well structured
document vector space whose low dimensionality allows common clustering algorithms to perform well. For the clustering step
we have chosen the bisecting k-means and the Multipole tree, a modified version of the Antipole tree data structure for, respectively, their accuracy and
speed.
8.
Document length is widely recognized as an important factor for adjusting retrieval systems. Many models tend to favor the
retrieval of either short or long documents and, thus, a length-based correction needs to be applied for avoiding any length
bias. In Language Modeling for Information Retrieval, smoothing methods are applied to move probability mass from document
terms to unseen words, which is often dependent upon document length. In this article, we perform an in-depth study of this
behavior, characterized by the document length retrieval trends, of three popular smoothing methods across a number of factors,
and its impact on the length of documents retrieved and retrieval performance. First, we theoretically analyze the Jelinek–Mercer,
Dirichlet prior and two-stage smoothing strategies and, then, conduct an empirical analysis. In our analysis we show how Dirichlet
prior smoothing caters for document length more appropriately than Jelinek–Mercer smoothing which leads to its superior retrieval
performance. In a follow up analysis, we posit that length-based priors can be used to offset any bias in the length retrieval
trends stemming from the retrieval formula derived by the smoothing technique. We show that the performance of Jelinek–Mercer
smoothing can be significantly improved by using such a prior, which provides a natural and simple alternative to decouple
the query and document modeling roles of smoothing. With the analysis of retrieval behavior conducted in this article, it
is possible to understand why Dirichlet prior smoothing performs better than Jelinek–Mercer smoothing, and why the performance
of the Jelinek–Mercer method is improved by including a length-based prior.
Leif Azzopardi
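The length behaviour analyzed above follows directly from the standard forms of the two smoothing methods; a sketch (the parameter values are arbitrary):

```python
def jelinek_mercer(tf, doc_len, p_coll, lam=0.5):
    # Fixed-weight interpolation: the amount of smoothing does not
    # depend on document length.
    return (1 - lam) * tf / doc_len + lam * p_coll

def dirichlet(tf, doc_len, p_coll, mu=2000):
    # Effective smoothing weight mu / (doc_len + mu) shrinks as the
    # document grows, so long documents rely more on their own counts.
    return (tf + mu * p_coll) / (doc_len + mu)

# An unseen term (tf = 0) with collection probability 0.001:
print(jelinek_mercer(0, 100, 0.001), jelinek_mercer(0, 10000, 0.001))  # identical
print(dirichlet(0, 100, 0.001), dirichlet(0, 10000, 0.001))            # short > long
```

Dirichlet assigns unseen terms a higher probability in short documents than in long ones, while Jelinek–Mercer treats both the same, which is the length-trend difference the article studies.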
9.
Features for image retrieval: an experimental comparison (cited 6 times: 0 self-citations, 6 by others)
10.
Intelligent use of the many diverse forms of data available on the Internet requires new tools for managing and manipulating
heterogeneous forms of information. This paper uses WHIRL, an extension of relational databases that can manipulate textual
data using statistical similarity measures developed by the information retrieval community. We show that although WHIRL is
designed for more general similarity-based reasoning tasks, it is competitive with mature systems designed explicitly for
inductive classification. In particular, WHIRL is well suited for combining different sources of knowledge in the classification
process. We show on a diverse set of tasks that the use of appropriate sets of unlabeled background knowledge often decreases
error rates, particularly if the number of examples or the size of the strings in the training set is small. This is especially
useful when labeling text is a labor-intensive job and when there is a large amount of information available about a particular
problem on the World Wide Web.
Haym Hirsh
11.
Result merging methods in distributed information retrieval with overlapping databases (cited 5 times: 0 self-citations, 5 by others)
In distributed information retrieval systems, document overlaps occur frequently among different component databases. This
paper presents an experimental investigation and evaluation of a group of result merging methods including the shadow document
method and the multi-evidence method in the environment of overlapping databases. We assume, with the exception of resultant
document lists (either with rankings or scores), no extra information about retrieval servers and text databases is available,
which is the usual case for many applications on the Internet and the Web.
The experimental results show that the shadow document method and the multi-evidence method are the two best methods when
overlap is high, while Round-robin is the best for low overlap. The experiments also show that [0,1] linear normalization
is a better option than linear regression normalization for result merging in a heterogeneous environment.
Sally McClean
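A sketch of the [0,1] linear normalization found effective above, followed by a simplified multi-evidence-style merge (the server results are invented; the paper's shadow document and multi-evidence methods are more involved):

```python
from collections import Counter

def minmax(scores):
    # [0,1] linear normalization of one server's result scores, making
    # scores from heterogeneous servers comparable before merging.
    lo, hi = min(scores), max(scores)
    return [1.0] * len(scores) if hi == lo else [(s - lo) / (hi - lo) for s in scores]

def merge(results_a, results_b):
    # Documents returned by both servers accumulate both normalized
    # scores — a simplified multi-evidence flavour for overlap handling.
    merged = Counter()
    for res in (results_a, results_b):
        merged.update(dict(zip(res, minmax(list(res.values())))))
    return merged.most_common()

server_a = {"d1": 10.0, "d2": 5.0, "d3": 0.0}
server_b = {"d2": 3.0, "d4": 1.0, "d5": 0.0}
print(merge(server_a, server_b))  # "d2" ranks first: found by both servers
```

Being retrieved by multiple databases acts as extra evidence of relevance here, which is the intuition behind treating overlap as a signal rather than a nuisance.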
12.
In retrieving medical free text, users are often interested in answers pertinent to certain scenarios that correspond to common
tasks performed in medical practice, e.g., treatment or diagnosis of a disease. A major challenge in handling such queries is that scenario terms in the query (e.g., treatment) are often too general to match specialized terms in relevant documents (e.g., chemotherapy). In this paper, we propose a knowledge-based query expansion method that exploits the UMLS knowledge source to append the
original query with additional terms that are specifically relevant to the query's scenario(s). We compared the proposed method
with traditional statistical expansion that expands terms which are statistically correlated but not necessarily scenario
specific. Our study on two standard testbeds shows that the knowledge-based method, by providing scenario-specific expansion,
yields notable improvements over the statistical method in terms of average precision-recall. On the OHSUMED testbed, for
example, the improvement is more than 5% averaging over all scenario-specific queries studied and about 10% for queries that
mention certain scenarios, such as treatment of a disease and differential diagnosis of a symptom/disease.
Wesley W. Chu
13.
Arabic documents that are available only in print continue to be ubiquitous and they can be scanned and subsequently OCR’ed
to ease their retrieval. This paper explores the effect of context-based OCR correction on the effectiveness of retrieving
Arabic OCR documents using different index terms. Different OCR correction techniques based on language modeling with different
correction abilities were tested on real OCR and synthetic OCR degradation. Results show that the reduction of word error
rates needs to pass a certain limit to get a noticeable effect on retrieval. If only moderate error reduction is available,
then using short character n-gram for retrieval without error correction is not a bad strategy. Word-based correction in conjunction
with language modeling had a statistically significant impact on retrieval even for character 3-grams, which are known to
be among the best index terms for OCR degraded Arabic text. Further, using a sufficiently large language model for correction
can minimize the need for morphologically sensitive error correction.
Kareem Darwish
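The character n-gram indexing mentioned above can be sketched as follows; the English example stands in for Arabic purely for readability:

```python
def char_ngrams(word, n=3):
    # Overlapping character n-grams as index terms: a single OCR character
    # error corrupts at most n of a word's n-grams, so most still match.
    return [word[i:i + n] for i in range(len(word) - n + 1)]

correct = set(char_ngrams("retrieval"))
ocr_error = set(char_ngrams("retr1eval"))   # 'i' misrecognized as '1'
print(sorted(correct & ocr_error))          # 4 of the 7 trigrams survive
```

This partial matching is why short character n-grams retrieve degraded text reasonably well even without any error correction, as the abstract notes for the moderate-error-reduction case.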
14.
Annotation of the functions of genes and proteins is an essential step in genome analysis. Information extraction techniques
have been proposed to obtain the function information of genes and proteins in the biomedical literature. However, the performance
of most information extraction techniques of function annotation in the biomedical literature is not satisfactory due to the
large variability in the expression of concepts in the biomedical literature. This paper proposes a framework to improve the
gene function annotation in the literature by considering both the textual information in the literature and the functions
of genes with sequences similar to a target gene. The new framework collects multiple types of evidence: (i) textual information
about gene functions by matching keywords of the gene functions; (ii) gene function information from the known functions of
genes with sequences similar to a target gene; and (iii) the prior probabilities of gene functions to be associated with an
arbitrary gene by mining the known gene functions from curated databases. A supervised learning method is utilized to obtain
the weights for combining the three types of evidence to assign appropriate Gene Ontology terms for target genes. Empirical
studies on two testbeds demonstrate that the combination of sequence similarity scores, function prior probabilities and textual
information improves the accuracy of gene function annotation in the literature. The F-measure scores obtained with the proposed framework are substantially higher than the scores of the solutions in prior research.
Yi Fang
15.
E. Herrera-Viedma, A. G. López-Herrera, S. Alonso, J. M. Moreno, F. J. Cabrerizo, C. Porcel. Information Retrieval, 2009, 12(2): 179-200.
This paper describes a computer-supported learning system to teach students the principles and concepts of Fuzzy Information
Retrieval Systems based on weighted queries. This tool is used to support the teacher’s activity in the degree course Information Retrieval Systems Based on Artificial Intelligence at the Faculty of Library and Information Sciences at the University of Granada. Learning of languages of weighted queries
in Fuzzy Information Retrieval Systems is complex because it is very difficult to understand the different semantics that
could be associated with the weights of queries, together with their respective query evaluation strategies. We have developed
and implemented this computer-supported education system because it supports the teacher's activity in the classroom when
teaching the use of weighted queries in FIRSs, and it helps students develop self-learning processes on the use of such
queries. We have evaluated its use in the learning process according to the students' perceptions and their results in the
course's exams. We have observed that, using this software tool, students learn the management of weighted query languages
better and their performance in the exams improves.
16.
Smoothing of document language models is critical in language modeling approaches to information retrieval. In this paper,
we present a novel way of smoothing document language models based on propagating term counts probabilistically in a graph
of documents. A key difference between our approach and previous approaches is that our smoothing algorithm can iteratively
propagate counts and achieve smoothing with remotely related documents. Evaluation results on several TREC data sets show that the proposed method significantly outperforms the
simple collection-based smoothing method. Compared with those other smoothing methods that also exploit local corpus structures,
our method is especially effective in improving precision in top-ranked documents through “filling in” missing query terms
in relevant documents, which is attractive since most users only pay attention to the top-ranked documents in search engine
applications.
ChengXiang Zhai
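A toy sketch of propagating term counts through a document graph (the similarity values are invented, and the paper's probabilistic propagation differs in detail):

```python
def propagate(term_counts, sim, alpha=0.3, iters=2):
    # Each document's term counts are mixed with similarity-weighted
    # counts from its neighbours; repeating the step lets counts arrive
    # from remotely related documents as well.
    current = {d: dict(c) for d, c in term_counts.items()}
    for _ in range(iters):
        nxt = {}
        for d in current:
            mixed = {w: (1 - alpha) * c for w, c in current[d].items()}
            for e in current:
                for w, c in current[e].items():
                    mixed[w] = mixed.get(w, 0.0) + alpha * sim.get((d, e), 0.0) * c
            nxt[d] = mixed
        current = nxt
    return current

counts = {"d1": {"language": 2.0}, "d2": {"language": 1.0, "smoothing": 1.0}}
sim = {("d1", "d2"): 0.5, ("d2", "d1"): 0.5}
smoothed = propagate(counts, sim, iters=1)
print(smoothed["d1"])  # "smoothing" now has a small propagated count in d1
```

After propagation, d1 carries a nonzero count for "smoothing" even though the word never occurs in it — the "filling in" of missing query terms that the abstract credits for the precision gains.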
17.
To put an end to the large copyright trade deficit, both Chinese government agencies and publishing houses have been striving
to enter the international publication market. The article analyzes the background of the going-global strategy and sums
up the performance of both Chinese administrations and publishers.
Qing Fang (corresponding author)
18.
Content-oriented XML retrieval approaches aim at a more focused retrieval strategy: Instead of retrieving whole documents, document components that are exhaustive to the information need while at the same time being as specific as possible should be retrieved. In this article, we show that the evaluation methods developed for standard retrieval must be modified in order to deal with the structure of XML documents. More precisely, the size and overlap of document components must be taken into account. For this purpose, we propose a new effectiveness metric based on the definition of a concept space defined upon the notions of exhaustiveness and specificity of a search result. We compare the results of this new metric by the results obtained with the official metric used in INEX, the evaluation initiative for content-oriented XML retrieval.
Gabriella Kazai
19.
On rank-based effectiveness measures and optimization (cited 1 time: 0 self-citations, 1 by others)
Many current retrieval models and scoring functions contain free parameters which need to be set—ideally, optimized. The process
of optimization normally involves some training corpus of the usual document-query-relevance judgement type, and some choice
of measure that is to be optimized. The paper proposes a way to think about the process of exploring the space of parameter
values, and how moving around in this space might be expected to affect different measures. One result, concerning local optima,
is demonstrated for a range of rank-based evaluation measures.
Hugo Zaragoza
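Average precision, a typical rank-based measure targeted by the parameter exploration described above, can be computed as follows (a standard definition, not specific to this paper):

```python
def average_precision(relevance):
    # Average precision of a ranked list given binary relevance judgements.
    # Rank-based measures are piecewise constant in a system's parameters,
    # which is what makes their optimization landscape full of plateaus
    # and local optima.
    hits, total = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0

print(average_precision([1, 0, 1, 0, 0]))  # (1/1 + 2/3) / 2 ≈ 0.833
```

Because the measure depends only on the ranking, any parameter change that leaves the ranking unchanged leaves the measure unchanged too.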
20.
We present software that generates phrase-based concordances in real-time based on Internet searching. When a user enters
a string of words for which he wants to find concordances, the system sends this string as a query to a search engine and
obtains search results for the string. The concordances are extracted by performing statistical analysis on search results
and then fed back to the user. Unlike existing tools, this concordance consultation tool is language-independent, so concordances
can be obtained even in a language for which there are no well-established analytical methods. Our evaluation has revealed
that concordances can be obtained more effectively than by only using a search engine directly.
Yuichiro Ishii
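A minimal, language-independent sketch of extracting concordance candidates from search-result snippets by frequency analysis (the snippets are invented, and the paper's statistical analysis is richer):

```python
from collections import Counter

def following_words(phrase, snippets):
    # Collect the words that immediately follow the query phrase in the
    # retrieved snippets and rank them by frequency. Only whitespace
    # tokenization is needed, so the method is language-independent
    # wherever words are space-delimited.
    target = phrase.split()
    follow = Counter()
    for snippet in snippets:
        words = snippet.split()
        for i in range(len(words) - len(target)):
            if words[i:i + len(target)] == target:
                follow[words[i + len(target)]] += 1
    return follow.most_common()

snippets = ["I am interested in learning more",
            "she is interested in the topic",
            "students interested in learning Python"]
print(following_words("interested in", snippets))  # [('learning', 2), ('the', 1)]
```

In the actual system the snippets come back from a live search-engine query for the phrase, which is what makes the concordances real-time.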