20 similar documents found (search time: 93 ms)
1.
Due to the heavy use of gene synonyms in biomedical text, people have tried many query expansion techniques using synonyms
in order to improve performance in biomedical information retrieval. However, mixed results have been reported. The main challenge
is that it is not trivial to assign appropriate weights to the added gene synonyms in the expanded query; under-weighting
of synonyms would not bring much benefit, while over-weighting unreliable synonyms can hurt performance significantly.
So far, there has been no systematic evaluation of various synonym query expansion strategies for biomedical text. In this
work, we propose two different strategies to extend a standard language modeling approach for gene synonym query expansion
and conduct a systematic evaluation of these methods on all the available TREC biomedical text collections for ad hoc document
retrieval. Our experimental results show that synonym expansion can significantly improve the retrieval accuracy. However, different
query types require different synonym expansion methods, and appropriate weighting of gene names and synonym terms is critical
for improving performance.
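The weighting issue described above can be illustrated with a minimal sketch (all function names and parameters here are hypothetical, not the paper's implementation): synonyms are folded into a weighted query model with a discounted weight, and documents are scored with Dirichlet-smoothed query likelihood.

```python
from collections import Counter
import math

def expand_query(query_terms, synonyms, syn_weight=0.3):
    """Build a weighted query model: original terms get weight 1, and the
    synonyms of a term share a discounted weight, so that unreliable
    synonyms cannot dominate the query."""
    model = Counter({t: 1.0 for t in query_terms})
    for t in query_terms:
        for s in synonyms.get(t, []):
            model[s] += syn_weight / max(len(synonyms[t]), 1)
    total = sum(model.values())
    return {t: w / total for t, w in model.items()}

def query_likelihood(query_model, doc_tf, doc_len, coll_prob, mu=2000):
    """Score a document by Dirichlet-smoothed query likelihood, weighting
    each term's log-probability by its weight in the query model."""
    score = 0.0
    for t, w in query_model.items():
        p = (doc_tf.get(t, 0) + mu * coll_prob.get(t, 1e-6)) / (doc_len + mu)
        score += w * math.log(p)
    return score
```

Under this sketch, choosing `syn_weight` is exactly the under-/over-weighting trade-off the abstract describes.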
Corresponding author: Chengxiang Zhai
2.
Precision prediction based on ranked list coherence (cited by 1: 0 self-citations, 1 other)
We introduce a statistical measure of the coherence of a list of documents called the clarity score. Starting with a document list ranked by the query-likelihood retrieval model, we demonstrate the score's relationship to query ambiguity with respect to the collection. We also show that the clarity score is correlated with the average precision of a query and lay the groundwork for useful predictions by discussing a method of setting decision thresholds automatically. We then show that passage-based clarity scores correlate with average-precision measures of ranked lists of passages, where a passage is judged relevant if it contains correct answer text, which extends the basic method to passage-based systems. Next, we introduce variants of document-based clarity scores to improve the robustness, applicability, and predictive ability of clarity scores. In particular, we introduce the ranked list clarity score that can be computed with only a ranked list of documents, and the weighted clarity score where query terms contribute more than other terms. Finally, we show an approach to predicting queries that perform poorly on query expansion that uses techniques expanding on the ideas presented earlier.
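A simplified sketch of the clarity score follows (names are illustrative, and the query model here weights documents uniformly, whereas the original weights them by query likelihood): it is the KL divergence between a query language model estimated from the top-ranked documents and the collection model.

```python
import math
from collections import Counter

def clarity_score(ranked_docs, collection, lam=0.6):
    """Clarity score sketch: KL divergence between a query language model,
    estimated from the top-ranked documents, and the collection model.
    ranked_docs: list of token lists for the top-ranked documents.
    collection: token list for the whole collection (background model)."""
    coll_counts = Counter(collection)
    coll_total = sum(coll_counts.values())
    p_coll = {t: c / coll_total for t, c in coll_counts.items()}

    # Query model: average of mixture-smoothed document models
    # (uniform document weights in this simplified version).
    vocab = set(coll_counts)
    p_query = {t: 0.0 for t in vocab}
    for doc in ranked_docs:
        tf = Counter(doc)
        dlen = len(doc)
        for t in vocab:
            p_d = lam * tf.get(t, 0) / dlen + (1 - lam) * p_coll[t]
            p_query[t] += p_d / len(ranked_docs)

    return sum(p * math.log2(p / p_coll[t])
               for t, p in p_query.items() if p > 0)
```

A topically focused ranked list diverges from the collection model and gets a high score; a diffuse list that mirrors the collection scores near zero.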
Corresponding author: W. Bruce Croft
3.
Query structuring and expansion with two-stage term dependence for Japanese web retrieval (cited by 1: 1 self-citation, 0 other)
In this paper, we propose a new term dependence model for information retrieval, which is based on a theoretical framework
using Markov random fields. We assume two types of dependencies of terms given in a query: (i) long-range dependencies that
may appear for instance within a passage or a sentence in a target document, and (ii) short-range dependencies that may appear
for instance within a compound word in a target document. Based on this assumption, our two-stage term dependence model captures
both long-range and short-range term dependencies differently when more than one compound word appears in a query. We also
investigate how query structuring with term dependence can improve the performance of query expansion using a relevance model.
The relevance model is constructed using the retrieval results of the structured query with term dependence to expand the
query. We show that our term dependence model works well, particularly when using query structuring with compound words, through
experiments using a 100-gigabyte test collection of web documents mostly written in Japanese. We also show that the performance
of the relevance model can be significantly improved by using the structured query with our term dependence model.
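The general MRF term-dependence idea can be sketched roughly as follows. This is a simplified sequential-dependence-style scorer with illustrative feature weights, not the paper's two-stage model for Japanese compound words: unigram matches, exact bigrams (short-range dependence), and co-occurrence within a window (long-range dependence) are combined linearly.

```python
from collections import Counter

def sd_score(query_terms, doc_tokens, w=(0.8, 0.1, 0.1), window=8):
    """Simplified MRF-style term-dependence score: a weighted sum of
    unigram matches, exact bigram matches (short-range dependence), and
    unordered co-occurrences within a window (long-range dependence)."""
    tf = Counter(doc_tokens)
    unigram = sum(tf[t] for t in query_terms)

    bigram = 0
    for a, b in zip(query_terms, query_terms[1:]):
        bigram += sum(1 for i in range(len(doc_tokens) - 1)
                      if doc_tokens[i] == a and doc_tokens[i + 1] == b)

    unordered = 0
    for a, b in zip(query_terms, query_terms[1:]):
        for i, tok in enumerate(doc_tokens):
            if tok == a and b in doc_tokens[max(0, i - window):i + window + 1]:
                unordered += 1

    return w[0] * unigram + w[1] * bigram + w[2] * unordered
```

A document containing the query terms as an adjacent phrase outscores one in which the same terms are scattered, which is the behavior term-dependence models are after.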
Corresponding author: Koji Eguchi
4.
Query Expansion is commonly used in Information Retrieval to overcome vocabulary mismatch issues, such as synonymy between
the original query terms and a relevant document. In general, query expansion experiments exhibit mixed results. Overall TREC
Genomics Track results are also mixed; however, results from the top performing systems provide strong evidence supporting
the need for expansion. In this paper, we examine the conditions necessary for optimal query expansion performance with respect
to two system design issues: IR framework and knowledge source used for expansion. We present a query expansion framework
that improves Okapi baseline passage MAP performance by 185%. Using this framework, we compare and contrast the effectiveness
of a variety of biomedical knowledge sources used by TREC 2006 Genomics Track participants for expansion. Based on the outcome
of these experiments, we discuss the success factors required for effective query expansion with respect to various sources
of term expansion, such as corpus-based co-occurrence statistics, pseudo-relevance feedback methods, and domain-specific and
domain-independent ontologies and databases. Our results show that choice of document ranking algorithm is the most important
factor affecting retrieval performance on this dataset. In addition, when an appropriate ranking algorithm is used, we find
that query expansion with domain-specific knowledge sources provides an equally substantive gain in performance over a baseline
system.
Corresponding author: Nicola Stokes
5.
We consider the following autocompletion search scenario: imagine a user of a search engine typing a query; then with every
keystroke display those completions of the last query word that would lead to the best hits, and also display the best such
hits. The following problem is at the core of this feature: for a fixed document collection, given a set D of documents, and an alphabetical range W of words, compute the set of all word-in-document pairs (w, d) from the collection such that w ∈ W and d ∈ D. We present a new data structure with the help of which such autocompletion queries can be processed, on the average, in
time linear in the input plus output size, independent of the size of the underlying document collection. At the same time,
our data structure uses no more space than an inverted index. Actual query processing times on a large test collection correlate
almost perfectly with our theoretical bound.
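The core problem can be stated as a naive inverted-index computation (this sketch is the straightforward formulation with hypothetical names, not the paper's space-efficient data structure, which answers the same queries in time linear in input plus output size):

```python
import bisect

def autocomplete_matches(inverted_index, sorted_vocab, prefix, doc_set):
    """Return all (word, doc) pairs such that the word completes `prefix`
    and the doc belongs to `doc_set` (the current hit set D)."""
    # The alphabetical range W of completions of `prefix`:
    lo = bisect.bisect_left(sorted_vocab, prefix)
    hi = bisect.bisect_right(sorted_vocab, prefix + "\uffff")
    pairs = []
    for w in sorted_vocab[lo:hi]:
        for d in inverted_index[w]:
            if d in doc_set:
                pairs.append((w, d))
    return pairs
```

The naive version scans every posting list in the word range; the point of the paper's structure is to avoid exactly that scan while using no more space than an inverted index.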
Corresponding author: Ingmar Weber
6.
E. Herrera-Viedma A. G. López-Herrera S. Alonso J. M. Moreno F. J. Cabrerizo C. Porcel 《Information Retrieval》2009,12(2):179-200
This paper describes a computer-supported learning system to teach students the principles and concepts of Fuzzy Information
Retrieval Systems based on weighted queries. This tool supports the teacher's activity in the degree course Information Retrieval Systems Based on Artificial Intelligence at the Faculty of Library and Information Sciences at the University of Granada. Learning weighted-query languages in Fuzzy Information Retrieval Systems is complex because it is very difficult to understand the different semantics that could be associated with the weights of queries together with their respective query-evaluation strategies. We developed and implemented this computer-supported education system because it supports the teacher's classroom activity in teaching the use of weighted queries in FIRSs and helps students develop self-learning processes for the use of such queries. We evaluated its performance in the learning process according to the students' perceptions and their results in the course's exams. We observed that with this software tool students learn the management of weighted-query languages better, and their performance in the exams improves accordingly.
Corresponding author: C. Porcel
7.
Towards enhancing retrieval effectiveness of search engines for diacritisized Arabic documents (cited by 1: 1 self-citation, 0 other)
Bassam H. Hammo 《Information Retrieval》2009,12(3):300-323
8.
Compound noun segmentation is a key first step in language processing for Korean. Thus far, most approaches require some form of human supervision, such as pre-existing dictionaries, segmented compound nouns, or heuristic rules. As a result, they suffer from the unknown word problem, which can be overcome by unsupervised approaches. However, previous unsupervised methods normally do not consider all possible segmentation candidates, and/or rely on character-based segmentation clues such as bi-grams or all-length n-grams. So, they are prone to falling into a local solution. To overcome the problem, this paper proposes an unsupervised segmentation algorithm that searches the most likely segmentation result from all possible segmentation candidates using a word-based segmentation context. As word-based segmentation clues, a dictionary is automatically generated from a corpus. Experiments using three test collections show that our segmentation algorithm is successfully applied to Korean information retrieval, improving a dictionary-based longest-matching algorithm.
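A word-based search over all segmentation candidates can be sketched as Viterbi-style dynamic programming (a simplified illustration with hypothetical names and a Latin-alphabet example; in the paper, the dictionary and its word probabilities are generated automatically from a corpus):

```python
import math

def segment(compound, word_prob, max_word=10):
    """Find the most likely segmentation of a compound by dynamic
    programming over all split points, scoring candidate words with
    dictionary probabilities. Unknown single characters get a small
    floor probability so every string remains segmentable."""
    n = len(compound)
    best = [(-math.inf, 0)] * (n + 1)  # (best log-prob, back-pointer)
    best[0] = (0.0, 0)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_word), i):
            w = compound[j:i]
            p = word_prob.get(w, 1e-8 if len(w) == 1 else 0.0)
            if p > 0 and best[j][0] + math.log(p) > best[i][0]:
                best[i] = (best[j][0] + math.log(p), j)
    # Backtrack from the end of the string.
    out, i = [], n
    while i > 0:
        j = best[i][1]
        out.append(compound[j:i])
        i = j
    return out[::-1]
```

Because every split point is considered, the search cannot get trapped by a single greedy segmentation clue, which is the contrast the abstract draws with character-based local methods.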
Corresponding author: Jong-Hyeok Lee
9.
Fotis Lazarinis Jesús Vilares John Tait Efthimis N. Efthimiadis 《Information Retrieval》2009,12(3):230-250
With increasing numbers of non-English-language web searchers, the problems of efficiently handling non-English Web
documents and user queries are becoming major issues for search engines. The main aim of this review paper is to make researchers
aware of the existing problems in monolingual non-English Web retrieval by providing an overview of open issues. A significant
number of papers are reviewed and the research issues investigated in these studies are categorized in order to identify the
research questions and solutions proposed in these papers. Further research is proposed at the end of each section.
Corresponding author: Efthimis N. Efthimiadis
10.
Result merging methods in distributed information retrieval with overlapping databases (cited by 5: 0 self-citations, 5 other)
In distributed information retrieval systems, document overlaps occur frequently among different component databases. This
paper presents an experimental investigation and evaluation of a group of result merging methods including the shadow document
method and the multi-evidence method in the environment of overlapping databases. We assume, with the exception of resultant
document lists (either with rankings or scores), no extra information about retrieval servers and text databases is available,
which is the usual case for many applications on the Internet and the Web.
The experimental results show that the shadow document method and the multi-evidence method are the two best methods when
overlap is high, while Round-robin is the best for low overlap. The experiments also show that [0,1] linear normalization
is a better option than linear regression normalization for result merging in a heterogeneous environment.
Corresponding author: Sally McClean
11.
Andrew MacFarlane 《Information Retrieval》2009,12(2):162-178
Understanding of mathematics is needed to underpin the process of search, either explicitly with exact match (Boolean logic, adjacency) or implicitly with best-match natural language search. In this paper, we outline some pedagogical challenges in
teaching mathematics for information retrieval (IR) to postgraduate information science students. The aim is to take these challenges, whether found by experience or in the literature, and to identify both theoretical and practical ideas in order to improve the delivery of the material and positively affect the learning of the target audience by using a tutorial style of teaching.
Results show that there is evidence to support the notion that a more pro-active style of teaching using tutorials yields benefits both in terms of assessment results and student satisfaction.
Corresponding author: Andrew MacFarlane
12.
Smoothing of document language models is critical in language modeling approaches to information retrieval. In this paper,
we present a novel way of smoothing document language models based on propagating term counts probabilistically in a graph
of documents. A key difference between our approach and previous approaches is that our smoothing algorithm can iteratively
propagate counts and achieve smoothing with remotely related documents. Evaluation results on several TREC data sets show that the proposed method significantly outperforms the
simple collection-based smoothing method. Compared with those other smoothing methods that also exploit local corpus structures,
our method is especially effective in improving precision in top-ranked documents through “filling in” missing query terms
in relevant documents, which is attractive since most users only pay attention to the top-ranked documents in search engine
applications.
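A rough sketch of count propagation over a document-similarity graph follows (illustrative names; the paper's probabilistic propagation and weighting differ in detail): each document's term counts are repeatedly mixed with its neighbors' counts, so terms can reach documents that never contained them.

```python
def propagate_counts(doc_tf, sim, alpha=0.3, iters=3):
    """Graph-based smoothing sketch: iteratively mix each document's
    term counts with those of its neighbors, weighted by normalized
    similarity, so missing query terms get 'filled in' over several
    iterations, even from remotely related documents.
    sim: {(doc, neighbor): weight} directed similarity edges."""
    docs = list(doc_tf)
    tf = {d: dict(doc_tf[d]) for d in docs}
    for _ in range(iters):
        new = {}
        for d in docs:
            nbrs = [(e, w) for (a, e), w in sim.items() if a == d]
            total = sum(w for _, w in nbrs) or 1.0
            counts = {t: (1 - alpha) * c for t, c in tf[d].items()}
            for e, w in nbrs:
                for t, c in tf[e].items():
                    counts[t] = counts.get(t, 0.0) + alpha * (w / total) * c
            new[d] = counts
        tf = new
    return tf
```

After propagation, a relevant document that happened to omit a query term still holds a nonzero smoothed count for it, which is the "filling in" effect described above.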
Corresponding author: ChengXiang Zhai
13.
Index maintenance strategies employed by dynamic text retrieval systems based on inverted files can be divided into two categories:
merge-based and in-place update strategies. Within each category, individual update policies can be distinguished based on
whether they store their on-disk posting lists in a contiguous or in a discontiguous fashion. Contiguous inverted lists, in
general, lead to higher query performance, by minimizing the disk seek overhead at query time, while discontiguous inverted
lists lead to higher update performance, requiring less effort during index maintenance operations. In this paper, we focus
on retrieval systems with high query load, where the on-disk posting lists have to be stored in a contiguous fashion at all
times. We discuss a combination of re-merge and in-place index update, called Hybrid Immediate Merge. The method performs strictly better than the re-merge baseline policy used in our experiments, as it leads to the same query
performance, but substantially better update performance. The actual time savings achievable depend on the size of the text
collection being indexed; a larger collection results in greater savings. In our experiments, variations of Hybrid Immediate Merge were able to reduce the total index update overhead by up to 73% compared to the re-merge baseline.
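Merge-based maintenance with contiguous lists can be sketched in miniature (this models only the re-merge baseline with an in-memory buffer; Hybrid Immediate Merge additionally updates long lists in place, and all names here are illustrative):

```python
class MergeIndex:
    """Sketch of merge-based index maintenance: new documents accumulate
    in an in-memory index; when it fills up, it is merged into the
    on-disk index, keeping every posting list contiguous and thus
    keeping query-time disk seeks to a minimum."""
    def __init__(self, memory_limit=1000):
        self.disk = {}          # term -> contiguous on-disk posting list
        self.mem = {}           # in-memory index for new documents
        self.mem_postings = 0
        self.memory_limit = memory_limit

    def add_document(self, doc_id, tokens):
        for t in set(tokens):
            self.mem.setdefault(t, []).append(doc_id)
            self.mem_postings += 1
        if self.mem_postings >= self.memory_limit:
            self.flush()

    def flush(self):
        # Re-merge: rewrite a contiguous list for every affected term.
        for t, postings in self.mem.items():
            self.disk[t] = sorted(set(self.disk.get(t, []) + postings))
        self.mem, self.mem_postings = {}, 0

    def postings(self, term):
        # Queries consult both the on-disk and the in-memory part.
        return sorted(set(self.disk.get(term, []) + self.mem.get(term, [])))
```

The cost the paper attacks is visible here: every flush rewrites whole lists, which is exactly the overhead that in-place handling of long lists reduces.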
Corresponding author: Stefan Büttcher
14.
Arabic documents that are available only in print continue to be ubiquitous and they can be scanned and subsequently OCR’ed
to ease their retrieval. This paper explores the effect of context-based OCR correction on the effectiveness of retrieving
Arabic OCR documents using different index terms. Different OCR correction techniques based on language modeling with different
correction abilities were tested on real OCR and synthetic OCR degradation. Results show that the reduction of word error
rates needs to pass a certain limit to get a noticeable effect on retrieval. If only moderate error reduction is available,
then using short character n-grams for retrieval without error correction is not a bad strategy. Word-based correction in conjunction
with language modeling had a statistically significant impact on retrieval even for character 3-grams, which are known to
be among the best index terms for OCR degraded Arabic text. Further, using a sufficiently large language model for correction
can minimize the need for morphologically sensitive error correction.
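Why short character n-grams tolerate OCR errors can be illustrated directly (hypothetical helper names): a single character error corrupts only the n-grams that overlap it, so most of a word's index terms still match.

```python
def char_ngrams(word, n=3):
    """Character n-gram index terms for a word; words shorter than n
    are kept whole."""
    if len(word) <= n:
        return [word]
    return [word[i:i + n] for i in range(len(word) - n + 1)]

def ngram_overlap(a, b, n=3):
    """Jaccard overlap between the n-gram sets of a clean word and its
    OCR variant; nonzero overlap lets the degraded form still match."""
    ga, gb = set(char_ngrams(a, n)), set(char_ngrams(b, n))
    return len(ga & gb) / len(ga | gb)
```

For example, an OCR confusion at the last character of a nine-letter word still leaves most of its trigrams intact, which is why character 3-grams are robust index terms for degraded Arabic text.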
Corresponding author: Kareem Darwish
15.
Fernando Diaz 《Information Retrieval》2007,10(6):531-562
We adapt the cluster hypothesis for score-based information retrieval by claiming that closely related documents should have
similar scores. Given a retrieval from an arbitrary system, we describe an algorithm which directly optimizes this objective
by adjusting retrieval scores so that topically related documents receive similar scores. We refer to this process as score
regularization. Because score regularization operates on retrieval scores, regardless of their origin, we can apply the technique
to arbitrary initial retrieval rankings. Document rankings derived from regularized scores, when compared to rankings derived
from un-regularized scores, consistently and significantly result in improved performance given a variety of baseline retrieval
algorithms. We also present several proofs demonstrating that regularization generalizes methods such as pseudo-relevance
feedback, document expansion, and cluster-based retrieval. Because of these strong empirical and theoretical results, we argue
for the adoption of score regularization as a general design principle or post-processing step for information retrieval systems.
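Score regularization can be sketched as iterative smoothing over a document-similarity graph (an illustrative fixed-point iteration with hypothetical names, not the paper's exact regularizer): each score is pulled toward the similarity-weighted average of its neighbors' scores while staying anchored to the initial retrieval score.

```python
def regularize_scores(scores, sim, alpha=0.5, iters=10):
    """Score-regularization sketch: repeatedly mix each document's score
    with the similarity-weighted average of its neighbors' scores,
    anchored to the initial scores, so topically related documents end
    up with similar scores. sim: {(doc, neighbor): weight} edges."""
    docs = list(scores)
    s0 = dict(scores)
    s = dict(scores)
    for _ in range(iters):
        new = {}
        for d in docs:
            nbrs = [(e, w) for (a, e), w in sim.items() if a == d and e in s]
            total = sum(w for _, w in nbrs)
            avg = (sum(w * s[e] for e, w in nbrs) / total) if total else s[d]
            new[d] = (1 - alpha) * s0[d] + alpha * avg
        s = new
    return s
```

Because the procedure only needs scores and a similarity graph, it can post-process the output of any initial retrieval system, which is the property the abstract emphasizes.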
Corresponding author: Fernando Diaz
16.
Norbert Fuhr 《Information Retrieval》2008,11(3):251-265
The classical Probability Ranking Principle (PRP) forms the theoretical basis for probabilistic Information Retrieval (IR)
models, which have dominated IR theory for about 20 years. However, the assumptions underlying the PRP often do not hold,
and its view is too narrow for interactive information retrieval (IIR). In this article, a new theoretical framework for interactive
retrieval is proposed: The basic idea is that during IIR, a user moves between situations. In each situation, the system presents
to the user a list of choices, about which s/he has to decide, and the first positive decision moves the user to a new situation.
Each choice is associated with a number of cost and probability parameters. Based on these parameters, an optimum ordering of the choices can be derived: the PRP for IIR. The relationship of this rule to the classical PRP is described, and issues
of further research are pointed out.
Corresponding author: Norbert Fuhr
17.
Multilingual information retrieval is generally understood to mean the retrieval of relevant information in multiple target
languages in response to a user query in a single source language. In a multilingual federated search environment, different
information sources contain documents in different languages. A general search strategy in multilingual federated search environments
is to translate the user query to each language of the information sources and run a monolingual search in each information
source. It is then necessary to obtain a single ranked document list by merging the individual ranked lists from the information
sources that are in different languages. This is known as the results merging problem for multilingual information retrieval.
Previous research has shown that the simple approach of normalizing source-specific document scores is not effective. On the other hand, a more effective merging method was proposed: download and translate all retrieved documents into the source language and generate the final ranked list by running a monolingual search in the search client. The latter method is more effective but incurs large online communication and computation costs. This paper proposes an effective
and efficient approach for the results merging task of multilingual ranked lists. Particularly, it downloads only a small
number of documents from the individual ranked lists of each user query to calculate comparable document scores by utilizing
both the query-based translation method and the document-based translation method. Then, query-specific and source-specific
transformation models can be trained for individual ranked lists by using the information of these downloaded documents. These
transformation models are used to estimate comparable document scores for all retrieved documents and thus the documents can
be sorted into a final ranked list. This merging approach is efficient as only a subset of the retrieved documents are downloaded
and translated online. Furthermore, an extensive set of experiments on the Cross-Language Evaluation Forum (CLEF) data has demonstrated the effectiveness of the query-specific and source-specific results merging algorithm against other
alternatives. The new research in this paper proposes different variants of the query-specific and source-specific results
merging algorithm with different transformation models. This paper also provides thorough experimental results as well as
detailed analysis. All of the work substantially extends the preliminary research in (Si and Callan, in: Peters (ed.) Results
of the cross-language evaluation forum-CLEF 2005, 2005).
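The transformation-model step can be sketched as a per-source least-squares fit on the few downloaded documents (illustrative names and a simple linear model; the paper's query- and source-specific transformation models may take other forms):

```python
def fit_transform(source_scores, comparable_scores):
    """Least-squares fit of a linear model s' = a*s + b mapping one
    ranked list's source-specific scores to the comparable scores
    computed for a few downloaded-and-translated documents."""
    n = len(source_scores)
    mx = sum(source_scores) / n
    my = sum(comparable_scores) / n
    var = sum((x - mx) ** 2 for x in source_scores)
    cov = sum((x - mx) * (y - my)
              for x, y in zip(source_scores, comparable_scores))
    a = cov / var if var else 0.0
    return a, my - a * mx

def merge_lists(ranked_lists, samples):
    """ranked_lists: {source: [(doc, score), ...]};
    samples: {source: (source scores, comparable scores)} for the
    downloaded documents. Returns one merged ranking: each list's
    fitted model maps all of its scores to the comparable scale."""
    merged = []
    for src, results in ranked_lists.items():
        a, b = fit_transform(*samples[src])
        merged += [(doc, a * s + b) for doc, s in results]
    return [doc for doc, _ in sorted(merged, key=lambda x: -x[1])]
```

Only the sampled documents are downloaded and translated; every other retrieved document is mapped through the fitted model, which is what keeps the approach efficient.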
Corresponding author: Hao Yuan
18.
We present software that generates phrase-based concordances in real-time based on Internet searching. When a user enters
a string of words for which he wants to find concordances, the system sends this string as a query to a search engine and
obtains search results for the string. The concordances are extracted by performing statistical analysis on search results
and then fed back to the user. Unlike existing tools, this concordance consultation tool is language-independent, so concordances
can be obtained even in a language for which there are no well-established analytical methods. Our evaluation has revealed
that concordances can be obtained more effectively than by using a search engine directly.
Corresponding author: Yuichiro Ishii
19.
20.
Content-oriented XML retrieval approaches aim at a more focused retrieval strategy: instead of retrieving whole documents, document components that are exhaustive to the information need, while at the same time being as specific as possible, should be retrieved. In this article, we show that the evaluation methods developed for standard retrieval must be modified in order to deal with the structure of XML documents. More precisely, the size and overlap of document components must be taken into account. For this purpose, we propose a new effectiveness metric based on the definition of a concept space defined upon the notions of exhaustiveness and specificity of a search result. We compare the results of this new metric with the results obtained with the official metric used in INEX, the evaluation initiative for content-oriented XML retrieval.
Corresponding author: Gabriella Kazai