20 similar documents found; search time: 687 ms
1.
Due to the heavy use of gene synonyms in biomedical text, people have tried many query expansion techniques using synonyms
in order to improve performance in biomedical information retrieval. However, mixed results have been reported. The main challenge
is that it is not trivial to assign appropriate weights to the added gene synonyms in the expanded query; under-weighting
of synonyms would not bring much benefit, while overweighting some unreliable synonyms can hurt performance significantly.
So far, there has been no systematic evaluation of various synonym query expansion strategies for biomedical text. In this
work, we propose two different strategies to extend a standard language modeling approach for gene synonym query expansion
and conduct a systematic evaluation of these methods on all the available TREC biomedical text collections for ad hoc document
retrieval. Our experimental results show that synonym expansion can significantly improve retrieval accuracy. However, different
query types require different synonym expansion methods, and appropriate weighting of gene names and synonym terms is critical
for improving performance.
Corresponding author: Chengxiang Zhai
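To make the weighting issue concrete, here is a minimal sketch (all term names, synonym lists, and weights are hypothetical, not taken from the paper) of distributing query mass between gene names and their synonyms under a language-modeling scorer:

```python
import math
from collections import Counter

def expand_query(query_terms, synonyms, syn_weight=0.3):
    """Weighted query model: a term with synonyms keeps (1 - syn_weight)
    of its share and its synonyms split syn_weight; a term without
    synonyms keeps its full share."""
    model = Counter()
    n = len(query_terms)
    for t in query_terms:
        syns = synonyms.get(t, [])
        if not syns:
            model[t] += 1.0 / n
        else:
            model[t] += (1.0 - syn_weight) / n
            for s in syns:
                model[s] += syn_weight / (n * len(syns))
    return dict(model)

def score(query_model, doc_tf, doc_len, coll_prob, lam=0.7):
    """Query likelihood with Jelinek-Mercer smoothing, weighted by the
    expanded query model."""
    return sum(qw * math.log(lam * doc_tf.get(t, 0) / doc_len
                             + (1 - lam) * coll_prob.get(t, 1e-6))
               for t, qw in query_model.items())
```

Under-weighting (small `syn_weight`) leaves synonym-only documents nearly unmatched; over-weighting lets an unreliable synonym dominate the model, which is the trade-off the abstract describes.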
2.
Precision prediction based on ranked list coherence (total citations: 1; self: 0; by others: 1)
We introduce a statistical measure of the coherence of a list of documents called the clarity score. Starting with a document list ranked by the query-likelihood retrieval model, we demonstrate the score's relationship to query ambiguity with respect to the collection. We also show that the clarity score is correlated with the average precision of a query and lay the groundwork for useful predictions by discussing a method of setting decision thresholds automatically. We then show that passage-based clarity scores correlate with average-precision measures of ranked lists of passages, where a passage is judged relevant if it contains correct answer text, which extends the basic method to passage-based systems. Next, we introduce variants of document-based clarity scores to improve the robustness, applicability, and predictive ability of clarity scores. In particular, we introduce the ranked list clarity score that can be computed with only a ranked list of documents, and the weighted clarity score where query terms contribute more than other terms. Finally, we show an approach to predicting queries that perform poorly on query expansion that uses techniques expanding on the ideas presented earlier.
Corresponding author: W. Bruce Croft
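A minimal sketch of the clarity computation the abstract describes, assuming the query model is estimated as a weighted mixture of the language models of the top-ranked documents (all inputs here are toy data):

```python
import math

def clarity_score(query_model, collection_model, eps=1e-9):
    """Clarity = KL divergence (in bits) of the query language model
    from the collection model; higher values indicate a more focused,
    less ambiguous query."""
    return sum(p * math.log2(p / collection_model.get(w, eps))
               for w, p in query_model.items() if p > 0)

def mix_document_models(doc_models, weights):
    """Estimate the query model as a mixture of top-ranked document
    models, weighted by (e.g.) their query likelihood."""
    total = sum(weights)
    mixed = {}
    for dm, w in zip(doc_models, weights):
        for term, p in dm.items():
            mixed[term] = mixed.get(term, 0.0) + (w / total) * p
    return mixed
```

A query model identical to the collection model scores 0 (maximally ambiguous); mass concentrated on collection-rare terms pushes the score up, which is what enables the precision prediction and thresholding discussed above.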
3.
Fotis Lazarinis, Jesús Vilares, John Tait, Efthimis N. Efthimiadis. Information Retrieval, 2009, 12(3): 230-250
With ever larger numbers of non-English-language web searchers, the efficient handling of non-English Web
documents and user queries is becoming a major issue for search engines. The main aim of this review paper is to make researchers
aware of the existing problems in monolingual non-English Web retrieval by providing an overview of open issues. A significant
number of papers are reviewed and the research issues investigated in these studies are categorized in order to identify the
research questions and solutions proposed in these papers. Further research is proposed at the end of each section.
Corresponding author: Efthimis N. Efthimiadis
4.
Query structuring and expansion with two-stage term dependence for Japanese web retrieval (total citations: 1; self: 1; by others: 0)
In this paper, we propose a new term dependence model for information retrieval, which is based on a theoretical framework
using Markov random fields. We assume two types of dependencies of terms given in a query: (i) long-range dependencies that
may appear for instance within a passage or a sentence in a target document, and (ii) short-range dependencies that may appear
for instance within a compound word in a target document. Based on this assumption, our two-stage term dependence model captures
both long-range and short-range term dependencies differently when more than one compound word appears in a query. We also
investigate how query structuring with term dependence can improve the performance of query expansion using a relevance model.
The relevance model is constructed using the retrieval results of the structured query with term dependence to expand the
query. We show that our term dependence model works well, particularly when using query structuring with compound words, through
experiments using a 100-gigabyte test collection of web documents mostly written in Japanese. We also show that the performance
of the relevance model can be significantly improved by using the structured query with our term dependence model.
Corresponding author: Koji Eguchi
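A toy sketch of the two kinds of dependence: short-range evidence from a compound's components appearing adjacently, long-range evidence from different compounds co-occurring within a window. The feature weights, window size, and the use of each compound's first term as a long-range proxy are illustrative choices, not the paper's model:

```python
import math

def count_adjacent(doc, pair):
    """Occurrences of a compound's two components appearing adjacently."""
    return sum(1 for i in range(len(doc) - 1) if (doc[i], doc[i + 1]) == pair)

def windows_containing(doc, terms, window):
    """Sliding windows of the given size that contain all the terms."""
    need = set(terms)
    return sum(1 for i in range(max(1, len(doc) - window + 1))
               if need <= set(doc[i:i + window]))

def two_stage_score(doc, compounds, w_t=0.8, w_s=0.1, w_l=0.1, window=8):
    """Toy two-stage dependence score: unigram evidence, short-range
    (adjacency inside each compound) and long-range (compounds
    co-occurring within a window)."""
    terms = [t for c in compounds for t in c]
    uni = sum(math.log(1 + doc.count(t)) for t in terms)
    short = sum(math.log(1 + count_adjacent(doc, (c[i], c[i + 1])))
                for c in compounds for i in range(len(c) - 1))
    heads = [c[0] for c in compounds]
    long_r = (math.log(1 + windows_containing(doc, heads, window))
              if len(compounds) > 1 else 0.0)
    return w_t * uni + w_s * short + w_l * long_r
```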
5.
We present software that generates phrase-based concordances in real-time based on Internet searching. When a user enters
a string of words for which they want to find concordances, the system sends this string as a query to a search engine and
obtains search results for the string. The concordances are extracted by performing statistical analysis on search results
and then fed back to the user. Unlike existing tools, this concordance consultation tool is language-independent, so concordances
can be obtained even in a language for which there are no well-established analytical methods. Our evaluation has revealed
that concordances can be obtained more effectively than by only using a search engine directly.
Corresponding author: Yuichiro Ishii
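A rough, language-independent stand-in for the statistical step: count the words that most often surround the query phrase across result snippets. Whitespace tokenization and the fixed context span are simplifying assumptions:

```python
from collections import Counter

def concordances(snippets, phrase, span=2, top=5):
    """Return the `top` words that most frequently appear within `span`
    positions of the query phrase across search-result snippets."""
    p = phrase.split()
    ctx = Counter()
    for s in snippets:
        toks = s.split()
        for i in range(len(toks) - len(p) + 1):
            if toks[i:i + len(p)] == p:
                lo = max(0, i - span)
                hi = min(len(toks), i + len(p) + span)
                for w in toks[lo:i] + toks[i + len(p):hi]:
                    ctx[w] += 1
    return ctx.most_common(top)
```

Because the analysis is purely positional and frequency-based, nothing here depends on stemmers, taggers, or other language-specific resources, which is the property the abstract emphasizes.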
6.
Arabic documents that are available only in print continue to be ubiquitous and they can be scanned and subsequently OCR’ed
to ease their retrieval. This paper explores the effect of context-based OCR correction on the effectiveness of retrieving
Arabic OCR documents using different index terms. Different OCR correction techniques based on language modeling with different
correction abilities were tested on real OCR and synthetic OCR degradation. Results show that the reduction of word error
rates needs to pass a certain limit to get a noticeable effect on retrieval. If only moderate error reduction is available,
then using short character n-grams for retrieval without error correction is a reasonable strategy. Word-based correction in conjunction
with language modeling had a statistically significant impact on retrieval even for character 3-grams, which are known to
be among the best index terms for OCR degraded Arabic text. Further, using a sufficiently large language model for correction
can minimize the need for morphologically sensitive error correction.
Corresponding author: Kareem Darwish
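A small sketch of why short character n-grams tolerate OCR noise: a single misrecognized character corrupts only the n-grams that touch it, so most index terms of the word still match (the transliterated sample words are illustrative):

```python
def char_ngrams(word, n=3):
    """Character n-grams used as index terms; words shorter than n are
    kept whole as a single term."""
    if len(word) <= n:
        return [word]
    return [word[i:i + n] for i in range(len(word) - n + 1)]

def ngram_overlap(a, b, n=3):
    """Jaccard overlap of the two words' n-gram sets; an OCR error in
    one character only removes the n-grams spanning that position."""
    ga, gb = set(char_ngrams(a, n)), set(char_ngrams(b, n))
    return len(ga & gb) / len(ga | gb)
```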
7.
Query Expansion is commonly used in Information Retrieval to overcome vocabulary mismatch issues, such as synonymy between
the original query terms and a relevant document. In general, query expansion experiments exhibit mixed results. Overall TREC
Genomics Track results are also mixed; however, results from the top performing systems provide strong evidence supporting
the need for expansion. In this paper, we examine the conditions necessary for optimal query expansion performance with respect
to two system design issues: IR framework and knowledge source used for expansion. We present a query expansion framework
that improves Okapi baseline passage MAP performance by 185%. Using this framework, we compare and contrast the effectiveness
of a variety of biomedical knowledge sources used by TREC 2006 Genomics Track participants for expansion. Based on the outcome
of these experiments, we discuss the success factors required for effective query expansion with respect to various sources
of term expansion, such as corpus-based co-occurrence statistics, pseudo-relevance feedback methods, and domain-specific and
domain-independent ontologies and databases. Our results show that choice of document ranking algorithm is the most important
factor affecting retrieval performance on this dataset. In addition, when an appropriate ranking algorithm is used, we find
that query expansion with domain-specific knowledge sources provides an equally substantive gain in performance over a baseline
system.
Corresponding author: Nicola Stokes
8.
In retrieving medical free text, users are often interested in answers pertinent to certain scenarios that correspond to common
tasks performed in medical practice, e.g., treatment or diagnosis of a disease. A major challenge in handling such queries is that scenario terms in the query (e.g., treatment) are often too general to match specialized terms in relevant documents (e.g., chemotherapy). In this paper, we propose a knowledge-based query expansion method that exploits the UMLS knowledge source to append the
original query with additional terms that are specifically relevant to the query's scenario(s). We compared the proposed method
with traditional statistical expansion that expands terms which are statistically correlated but not necessarily scenario
specific. Our study on two standard testbeds shows that the knowledge-based method, by providing scenario-specific expansion,
yields notable improvements over the statistical method in terms of average precision-recall. On the OHSUMED testbed, for
example, the improvement is more than 5% averaging over all scenario-specific queries studied and about 10% for queries that
mention certain scenarios, such as treatment of a disease and differential diagnosis of a symptom/disease.
Corresponding author: Wesley W. Chu
9.
Classifying Amharic webnews (total citations: 1; self: 1; by others: 0)
Lars Asker, Atelach Alemu Argaw, Björn Gambäck, Samuel Eyassu Asfeha, Lemma Nigussie Habte. Information Retrieval, 2009, 12(3): 416-435
We present work aimed at compiling an Amharic corpus from the Web and automatically categorizing the texts. Amharic is the
second most widely spoken Semitic language in the world (after Arabic) and is used for countrywide communication in Ethiopia. It is
highly inflectional and quite dialectally diversified. We discuss the issues of compiling and annotating a corpus of Amharic
news articles from the Web. This corpus was then used in three sets of text classification experiments. Working with a less-researched
language highlights a number of practical issues that might otherwise receive less attention or go unnoticed. The purpose
of the experiments has not primarily been to develop a cutting-edge text classification system for Amharic, but rather to
put the spotlight on some of these issues. The first two sets of experiments investigated the use of Self-Organizing Maps
(SOMs) for document classification. Testing on small datasets, we first looked at classifying unseen data into 10 predefined
categories of news items, and then at clustering it around query content, when taking 16 queries as class labels. The second
set of experiments investigated the effect of operations such as stemming and part-of-speech tagging on text classification
performance. We compared three representations while constructing classification models based on bagging of decision trees
for the 10 predefined news categories. The best accuracy was achieved using the full text as representation. A representation
using only the nouns performed almost equally well, confirming the assumption that most of the information required for distinguishing
between various categories actually is contained in the nouns, while stemming did not have much effect on the performance
of the classifier.
Corresponding author: Lemma Nigussie Habte
10.
We consider the following autocompletion search scenario: imagine a user of a search engine typing a query; then with every
keystroke display those completions of the last query word that would lead to the best hits, and also display the best such
hits. The following problem is at the core of this feature: for a fixed document collection, given a set D of documents, and an alphabetical range W of words, compute the set of all word-in-document pairs (w, d) from the collection such that w ∈ W and d ∈ D. We present a new data structure with the help of which such autocompletion queries can be processed, on the average, in
time linear in the input plus output size, independent of the size of the underlying document collection. At the same time,
our data structure uses no more space than an inverted index. Actual query processing times on a large test collection correlate
almost perfectly with our theoretical bound.
Corresponding author: Ingmar Weber
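A naive baseline fixes the semantics of the core operation; the paper's contribution is a data structure (not reproduced here) that answers it in time linear in input plus output size while using no more space than an inverted index:

```python
from bisect import bisect_left, bisect_right

def autocomplete(index, vocab_sorted, prefix, candidate_docs):
    """All (word, doc) pairs with the word in the alphabetical range of
    completions of `prefix` and the doc among the current hits
    `candidate_docs`. `index` maps word -> sorted doc-id list."""
    lo = bisect_left(vocab_sorted, prefix)
    hi = bisect_right(vocab_sorted, prefix + "\uffff")
    return [(w, d) for w in vocab_sorted[lo:hi]
            for d in index[w] if d in candidate_docs]
```

Each keystroke narrows `prefix` and replaces `candidate_docs` with the doc set from the previous step, yielding both the completions and the best hits the abstract describes.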
11.
Smoothing of document language models is critical in language modeling approaches to information retrieval. In this paper,
we present a novel way of smoothing document language models based on propagating term counts probabilistically in a graph
of documents. A key difference between our approach and previous approaches is that our smoothing algorithm can iteratively
propagate counts and achieve smoothing with remotely related documents. Evaluation results on several TREC data sets show that the proposed method significantly outperforms the
simple collection-based smoothing method. Compared with those other smoothing methods that also exploit local corpus structures,
our method is especially effective in improving precision in top-ranked documents through “filling in” missing query terms
in relevant documents, which is attractive since most users only pay attention to the top-ranked documents in search engine
applications.
Corresponding author: ChengXiang Zhai
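A minimal sketch of the propagation idea over a toy row-normalized document-similarity graph; the interpolation weight and iteration count are illustrative, and the graph construction itself is out of scope here:

```python
def propagate_counts(counts, sim, alpha=0.5, iters=3):
    """Iteratively smooth per-document term counts over a document
    graph: each pass mixes a document's original counts with the
    similarity-weighted counts of its neighbors, so after several
    passes mass also arrives from remotely related documents.
    `sim[d]` maps neighbor -> row-normalized similarity."""
    docs = list(counts)
    cur = {d: dict(counts[d]) for d in docs}
    for _ in range(iters):
        nxt = {}
        for d in docs:
            mixed = {w: (1 - alpha) * c for w, c in counts[d].items()}
            for d2, s in sim.get(d, {}).items():
                for w, c in cur[d2].items():
                    mixed[w] = mixed.get(w, 0.0) + alpha * s * c
            nxt[d] = mixed
        cur = nxt
    return cur
```

A relevant document that is missing a query term can acquire a nonzero count for it from similar documents, which is the "filling in" effect the abstract credits for the precision gains at top ranks.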
12.
Index maintenance strategies employed by dynamic text retrieval systems based on inverted files can be divided into two categories:
merge-based and in-place update strategies. Within each category, individual update policies can be distinguished based on
whether they store their on-disk posting lists in a contiguous or in a discontiguous fashion. Contiguous inverted lists, in
general, lead to higher query performance, by minimizing the disk seek overhead at query time, while discontiguous inverted
lists lead to higher update performance, requiring less effort during index maintenance operations. In this paper, we focus
on retrieval systems with high query load, where the on-disk posting lists have to be stored in a contiguous fashion at all
times. We discuss a combination of re-merge and in-place index update, called Hybrid Immediate Merge. The method performs strictly better than the re-merge baseline policy used in our experiments, as it leads to the same query
performance, but substantially better update performance. The actual time savings achievable depend on the size of the text
collection being indexed; a larger collection results in greater savings. In our experiments, variations of Hybrid Immediate Merge were able to reduce the total index update overhead by up to 73% compared to the re-merge baseline.
Corresponding author: Stefan Büttcher
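For contrast with the hybrid scheme, a toy re-merge baseline over dictionary-of-lists indexes; real systems merge sorted on-disk runs rather than Python dicts, and Hybrid Immediate Merge additionally avoids rewriting long lists on every merge:

```python
def remerge(on_disk, in_memory):
    """Re-merge baseline: fold the in-memory index for newly indexed
    documents into the on-disk index, rewriting every posting list as
    one contiguous, sorted run of doc ids. Query performance benefits
    from contiguity; the cost is rewriting the whole index each time."""
    merged = {}
    for term in sorted(set(on_disk) | set(in_memory)):
        merged[term] = sorted(set(on_disk.get(term, []))
                              | set(in_memory.get(term, [])))
    return merged
```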
13.
Modeling context through domain ontologies (total citations: 1; self: 0; by others: 1)
Nathalie Hernandez, Josiane Mothe, Claude Chrisment, Daniel Egret. Information Retrieval, 2007, 10(2): 143-172
Traditional information retrieval systems aim at satisfying most users for most of their searches, leaving aside the context
in which the search takes place. We propose to model two main aspects of context: The themes of the user's information need
and the specific data the user is looking for to achieve the task that has motivated his search. Both aspects are modeled
by means of ontologies. Documents are semantically indexed according to the context representation and the user accesses information
by browsing the ontologies. The model has been applied to a case study that has shown the added value of such a semantic representation
of context.
Corresponding author: Daniel Egret
14.
Andrew MacFarlane. Information Retrieval, 2009, 12(2): 162-178
Understanding of mathematics is needed to underpin the process of search, either explicitly with Exact Match (Boolean logic,
adjacency) or implicitly with Best match natural language search. In this paper we outline some pedagogical challenges in
teaching mathematics for information retrieval (IR) to postgraduate information science students. The aim is to take these
challenges either found by experience or in the literature, to identify both theoretical and practical ideas in order to improve
the delivery of the material and positively affect the learning of the target audience by using a tutorial style of teaching.
Results show that there is evidence to support the notion that a more pro-active style of teaching using tutorials yields benefits
both in terms of assessment results and student satisfaction.
Corresponding author: Andrew MacFarlane
15.
To put an end to the large copyright trade deficit, both Chinese government agencies and publishing houses have been striving
to enter the international publication market. The article analyzes the background of the going-global strategy and sums
up the performance of both Chinese administrations and publishers.
Corresponding author: Qing Fang
16.
Nieves R. Brisaboa, Antonio Fariña, Gonzalo Navarro, José R. Paramá. Information Retrieval, 2007, 10(1): 1-33
Variants of Huffman codes where words are taken as the source symbols are currently the most attractive choices to compress
natural language text databases. In particular, Tagged Huffman Code by Moura et al. offers fast direct searching on the compressed
text and random access capabilities, in exchange for producing around 11% larger compressed files. This work describes End-Tagged
Dense Code and (s, c)-Dense Code, two new semistatic statistical methods for compressing natural language texts. These techniques permit simpler
and faster encoding and obtain better compression ratios than Tagged Huffman Code, while maintaining its fast direct search
and random access capabilities. We show that Dense Codes improve Tagged Huffman Code compression ratio by about 10%, reaching
only 0.6% overhead over the optimal Huffman compression ratio. Being simpler, Dense Codes are generated 45% to 60% faster
than Huffman codes. This makes Dense Codes a very attractive alternative to Huffman code variants for various reasons: they
are simpler to program, faster to build, of almost optimal size, and as fast and easy to search as the best Huffman variants,
which are not so close to the optimal size.
Corresponding author: José R. Paramá
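The end-tagging idea can be sketched as follows. This follows the usual description of End-Tagged Dense Code, where codewords are sequences of 7-bit digits and the high bit is set only on the final byte, so codeword boundaries are visible in the raw byte stream; details of the published scheme may differ:

```python
def etdc_encode(rank):
    """ETDC codeword for a word of the given frequency rank
    (0 = most frequent): dense enumeration over 7-bit digits, with the
    high bit tagging the last byte of each codeword."""
    digits = []
    while True:
        digits.append(rank % 128)
        rank = rank // 128 - 1
        if rank < 0:
            break
    digits[0] |= 0x80            # tag the final (least significant) byte
    return bytes(reversed(digits))

def etdc_decode(data):
    """Decode a concatenation of ETDC codewords back to ranks."""
    ranks, r, started = [], 0, False
    for b in data:
        r = (r + 1) * 128 + (b & 0x7F) if started else (b & 0x7F)
        started = True
        if b & 0x80:             # tag bit ends the current codeword
            ranks.append(r)
            r, started = 0, False
    return ranks
```

Because the tag bit delimits codewords, a compressed pattern can be searched for directly in the byte stream without decompressing, the property the abstract highlights over plain Huffman variants.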
17.
Jacob Soll. Archival Science, 2007, 7(4): 331-342
This article examines the archival methods developed by Colbert to train his son in state administration. Based on Colbert’s
correspondence with his son, it reveals the practices Colbert thought necessary to collect and manage information in his state
encyclopedic archive during the last half of the 17th century.
Corresponding author: Jacob Soll
18.
Andy Weissberg. Publishing Research Quarterly, 2008, 24(4): 255-260
This article analyzes current industry practices toward the identification of digital book content. It highlights key technology
trends, workflow considerations and supply chain behaviors, and examines the implications of these trends and behaviors on
the production, discoverability, purchasing and consumption of digital book products.
Corresponding author: Andy Weissberg
19.
Sandeep Chaufla. Publishing Research Quarterly, 2008, 24(3): 187-201
A review and analysis of the rules and regulations including the tax aspects of making an investment in India is presented.
The full range from Foreign Direct Investment to different forms of doing business with specific examples from the publishing
industry is explored to help understand current policies and regulations.
Corresponding author: Sandeep Chaufla
20.
Oren Kurland. Information Retrieval, 2009, 12(4): 437-460
To obtain high precision at top ranks by a search performed in response to a query, researchers have proposed a cluster-based
re-ranking paradigm: clustering an initial list of documents that are the most highly ranked by some initial search, and using
information induced from these (often called) query-specific clusters for re-ranking the list. However, results concerning the effectiveness of various automatic cluster-based re-ranking methods have been inconclusive. We show that using query-specific clusters for automatic re-ranking
of top-retrieved documents is effective with several methods in which clusters play different roles, among which is the smoothing of document language models. We do so by adapting previously-proposed cluster-based retrieval approaches, which are based on (static) query-independent
clusters for ranking all documents in a corpus, to the re-ranking setting wherein clusters are query-specific. The best performing
method that we develop outperforms both the initial document-based ranking and some previously proposed cluster-based re-ranking
approaches; furthermore, this algorithm consistently outperforms a state-of-the-art pseudo-feedback-based approach. In further
exploration we study the performance of cluster-based smoothing methods for re-ranking with various (soft and hard) clustering
algorithms, and demonstrate the importance of clusters in providing context from the initial list through a comparison to
using single documents to this end.
Corresponding author: Oren Kurland
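A minimal sketch of one role the abstract assigns to query-specific clusters, smoothing document language models for re-ranking; the clustering of the initial list is taken as given, and the interpolation weight is illustrative:

```python
import math
from collections import Counter

def lm(tokens):
    """Maximum-likelihood unigram language model of a token sequence."""
    c = Counter(tokens)
    n = sum(c.values())
    return {w: f / n for w, f in c.items()}

def cluster_rerank(query, docs, clusters, lam=0.6, eps=1e-6):
    """Re-rank an initially retrieved list by query likelihood under
    each document's model smoothed with its cluster's model.
    `docs` maps doc id -> tokens; `clusters` maps cluster id -> doc ids."""
    cluster_lms = {cid: lm([t for d in members for t in docs[d]])
                   for cid, members in clusters.items()}
    doc_cluster = {d: cid for cid, ms in clusters.items() for d in ms}
    scores = {}
    for d, toks in docs.items():
        dm, cm = lm(toks), cluster_lms[doc_cluster[d]]
        scores[d] = sum(math.log(lam * dm.get(t, 0.0)
                                 + (1 - lam) * cm.get(t, eps))
                        for t in query)
    return sorted(scores, key=scores.get, reverse=True)
```

A document that lacks a query term can still be ranked well if its query-specific cluster supplies that term, which is how clusters provide context from the initial list beyond what single documents offer.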