Similar Articles
 20 similar articles found (search time: 15 ms)
1.
Word-based byte-oriented compression has succeeded on large natural language text databases, by providing competitive compression ratios, fast random access, and direct sequential searching. We show that by just rearranging the target symbols of the compressed text into a tree-shaped structure, and using negligible additional space, we obtain a new implicitly indexed representation of the compressed text, where search times are drastically improved. The occurrences of a word can be listed directly, without any text scanning, and in general any inverted-index-like capability, such as efficient phrase searches, can be emulated without storing any inverted list information. We show experimentally that our proposal performs much more efficiently not only than sequential searches over compressed text, but also than explicit inverted indexes and other types of indexes, when using little extra space. Our representation is especially successful when searching for single words and short phrases.

2.
Binary Interpolative Coding for Effective Index Compression
Information retrieval systems contain large volumes of text, and currently have typical sizes into the gigabyte range. Inverted indexes are one important method for providing search facilities into these collections, but unless compressed require a great deal of space. In this paper we introduce a new method for compressing inverted indexes that yields excellent compression, fast decoding, and exploits clustering—the tendency for words to appear relatively frequently in some parts of the collection and infrequently in others. We also describe two other quite separate applications for the same compression method: representing the MTF list positions generated by the Burrows-Wheeler Block Sorting transformation; and transmitting the codebook for semi-static block-based minimum-redundancy coding.
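Below is a minimal Python sketch of the recursive idea behind binary interpolative coding: the middle element of a sorted posting list is encoded with only as many bits as its constrained range requires, and the two halves are encoded recursively against narrower ranges. Bit-level output is simplified to a list of (value, bits) pairs with plain fixed-width codes rather than the centered minimal binary codes of the paper, and the posting list is an invented example.

```python
from math import ceil, log2

def interpolative_encode(docids, lo, hi, out):
    """Recursively encode sorted docids known to lie in [lo, hi]."""
    if not docids:
        return
    mid = len(docids) // 2
    x = docids[mid]
    # x is constrained to [lo + mid, hi - (len(docids) - mid - 1)]
    low_bound = lo + mid
    high_bound = hi - (len(docids) - mid - 1)
    width = high_bound - low_bound + 1
    bits = 0 if width == 1 else ceil(log2(width))
    out.append((x - low_bound, bits))              # narrow ranges cost few (or zero) bits
    interpolative_encode(docids[:mid], lo, x - 1, out)
    interpolative_encode(docids[mid + 1:], x + 1, hi, out)

postings = [3, 8, 9, 11, 12, 13, 17]               # docids in a collection of 20 documents
code = []
interpolative_encode(postings, 1, 20, code)
print(code)
print(sum(b for _, b in code), "bits in total")    # clustered runs such as 11,12,13 are nearly free
```

Clustering is exactly what this exploits: when postings fill their range densely, the constrained ranges collapse and some midpoints need zero bits.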

3.
Compressing Inverted Files
Research into inverted file compression has focused on compression ratio—how small the indexes can be. Compression ratio is important for fast interactive searching, and it is usually taken as read that the smaller the index, the faster the search. The premise that smaller is better may not be true. To truly build faster indexes it is often necessary to forfeit compression. For inverted lists consisting of only 128 occurrences, compression may only add overhead. Perhaps the inverted list could be stored in 128 bytes in place of 128 words, but it must still be stored on disk. If the minimum disk sector read size is 512 bytes and the word size is 4 bytes, then both the compressed and the raw postings would require one disk seek and one disk sector read. A less efficient compression technique may increase the file size, but decrease load/decompress time, thereby increasing throughput. Examined here are five compression techniques: Golomb, Elias gamma, Elias delta, Variable Byte Encoding, and Binary Interpolative Coding. The effects on file size, file seek time, and file read time are all measured, as is decompression time. A quantitative measure of throughput is developed and the performance of each method is determined.
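To make the trade-off concrete, here is a small Python illustration of one of the five techniques compared above, Variable Byte Encoding, applied to the d-gaps of a 128-entry posting list, followed by the sector arithmetic from the abstract. The 512-byte sector and 4-byte word sizes are the figures used there; the gap values and the byte layout (stop bit on the final byte) are illustrative assumptions.

```python
def vbyte_encode(gaps):
    out = bytearray()
    for g in gaps:
        while g >= 128:
            out.append(g & 0x7F)        # 7 payload bits, continuation byte
            g >>= 7
        out.append(g | 0x80)            # final byte carries the stop flag
    return bytes(out)

def vbyte_decode(data):
    gaps, g, shift = [], 0, 0
    for b in data:
        g |= (b & 0x7F) << shift
        if b & 0x80:
            gaps.append(g)
            g, shift = 0, 0
        else:
            shift += 7
    return gaps

gaps = [7] * 128                        # a short posting list of 128 small d-gaps
encoded = vbyte_encode(gaps)
assert vbyte_decode(encoded) == gaps
print(len(encoded), "bytes compressed vs", 128 * 4, "bytes raw")
# Minimum read is one 512-byte sector either way, so compression saves nothing here.
print("sectors read:", -(-len(encoded) // 512), "compressed vs", -(-128 * 4 // 512), "raw")
```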

4.
Variants of Huffman codes where words are taken as the source symbols are currently the most attractive choices to compress natural language text databases. In particular, Tagged Huffman Code by Moura et al. offers fast direct searching on the compressed text and random access capabilities, in exchange for producing around 11% larger compressed files. This work describes End-Tagged Dense Code and (s, c)-Dense Code, two new semistatic statistical methods for compressing natural language texts. These techniques permit simpler and faster encoding and obtain better compression ratios than Tagged Huffman Code, while maintaining its fast direct search and random access capabilities. We show that Dense Codes improve Tagged Huffman Code compression ratio by about 10%, reaching only 0.6% overhead over the optimal Huffman compression ratio. Being simpler, Dense Codes are generated 45% to 60% faster than Huffman codes. This makes Dense Codes a very attractive alternative to Huffman code variants for various reasons: they are simpler to program, faster to build, of almost optimal size, and as fast and easy to search as the best Huffman variants, which are not so close to the optimal size.
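The following Python fragment is a rough sketch of the End-Tagged Dense Code word encoding described above: words are ranked by frequency, and rank i is mapped to a byte sequence in which only the last byte has its high bit set (the end tag), so codewords are self-delimiting, dense, and searchable byte-wise. The example vocabulary is invented and the (s, c)-Dense generalization is not shown.

```python
def etdc_encode(rank):
    """Map a 0-based frequency rank to its End-Tagged Dense codeword."""
    out = bytearray()
    out.append(0x80 | (rank & 0x7F))    # last byte: end tag plus 7 value bits
    rank >>= 7
    while rank > 0:
        rank -= 1                       # dense: shorter codewords use their full range first
        out.insert(0, rank & 0x7F)      # leading bytes have the end tag clear
        rank >>= 7
    return bytes(out)

vocab = ["the", "of", "compression", "inverted"]   # hypothetical ranking by frequency
for i, w in enumerate(vocab):
    print(w, etdc_encode(i).hex())
print(len(etdc_encode(127)), "byte vs", len(etdc_encode(128)), "bytes")   # ranks 0-127 fit in one byte
```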

5.
Inverted Index Compression Using Word-Aligned Binary Codes
We examine index representation techniques for document-based inverted files, and present a mechanism for compressing them using word-aligned binary codes. The new approach allows extremely fast decoding of inverted lists during query processing, while providing compression rates better than other high-throughput representations. Results are given for several large text collections in support of these claims, both for compression effectiveness and query efficiency.
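As an illustration, here is a condensed Python sketch of word-aligned packing in the style of the Simple-9 family; the selector table and layout of the code evaluated in the paper may differ. Each 32-bit word stores a 4-bit selector plus 28 payload bits shared equally by n gaps of b bits each, so whole words decode with a handful of shifts and masks.

```python
SELECTORS = [(28, 1), (14, 2), (9, 3), (7, 4), (5, 5), (4, 7), (3, 9), (2, 14), (1, 28)]

def s9_encode(gaps):
    words, i = [], 0
    while i < len(gaps):
        for sel, (n, b) in enumerate(SELECTORS):
            chunk = gaps[i:i + n]
            if len(chunk) == n and all(g < (1 << b) for g in chunk):
                word = sel
                for j, g in enumerate(chunk):
                    word |= g << (4 + j * b)   # pack gaps after the 4 selector bits
                words.append(word)
                i += n
                break
        else:
            raise ValueError("gap does not fit in 28 bits")
    return words

def s9_decode(words):
    gaps = []
    for word in words:
        n, b = SELECTORS[word & 0xF]
        gaps.extend((word >> (4 + j * b)) & ((1 << b) - 1) for j in range(n))
    return gaps

gaps = [1] * 28                                # 28 unit gaps fit into a single 32-bit word
packed = s9_encode(gaps)
assert s9_decode(packed) == gaps
print(len(packed), "word(s) for", len(gaps), "gaps")
```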

6.
Collaborative filtering is a popular recommendation technique. Although researchers have focused on the accuracy of the recommendations, real applications also need efficient algorithms. An index structure can be used to store the rating matrix and compute recommendations very fast. In this paper we study how compression techniques can reduce the size of this index structure and, at the same time, speed up recommendations. We show how coding techniques commonly used in Information Retrieval can be effectively applied to collaborative filtering, reducing the matrix size by up to 75% and almost doubling the recommendation speed. Additionally, we propose a novel identifier reassignment technique that achieves high compression rates, reducing by 40% the size of an already compressed matrix. It is a very simple approach based on assigning the smallest identifiers to the items and users with the highest number of ratings, and it can be computed efficiently using two-pass indexing. The proposed compression techniques can significantly reduce the storage and time costs of recommender systems, which are two important factors in many real applications.
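A toy Python rendering of the identifier reassignment idea described above: the items (or users) with the most ratings receive the smallest identifiers, computed in two passes, one to count and one to remap. The rating tuples and names are made up for illustration.

```python
from collections import Counter

ratings = [("u1", "beer"), ("u2", "beer"), ("u3", "beer"),
           ("u1", "wine"), ("u2", "wine"), ("u3", "cider")]

# Pass 1: count ratings per item.  Pass 2: assign ids by decreasing count and remap.
counts = Counter(item for _, item in ratings)
new_id = {item: i for i, (item, _) in enumerate(counts.most_common())}
remapped = [(user, new_id[item]) for user, item in ratings]

print(new_id)      # {'beer': 0, 'wine': 1, 'cider': 2}
print(remapped)    # frequent items now cluster at small ids, so their gaps compress well
```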

7.
Recent studies have demonstrated that it is possible to reduce Inverted File (IF) sizes by reassigning the document identifiers of the original collection, as this lowers the distance between the positions of documents related to a single term. Variable-bit encoding schemes can exploit the average gap reduction and decrease the total number of bits per document pointer. This paper presents an efficient solution to the reassignment problem, which consists of reducing the dimensionality of the input data with an SVD transformation and then treating the problem as a Travelling Salesman Problem (TSP). We also present some efficient solutions based on clustering. Finally, we combine both the TSP and the clustering strategies for reordering the document identifiers. We present experimental tests and performance results on two TREC text collections, obtaining good compression ratios with low running times, and suggest that the techniques presented here can lead to scalable solutions for web collections.

8.
Inverted files are the most widely used indexing mechanism in information retrieval systems, and compressing the index files can greatly improve retrieval speed and save disk space. The traditional approach to inverted file compression is to encode the gaps between document identifiers (d-gaps). However, wildly varying gap values cannot be compressed effectively by the well-known prefix-free codes. To make the gap values compress well, this paper designs a document identifier reassignment method. Simulation experiments show that it compresses d-gap inverted files more effectively.
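A small Python illustration of the d-gap representation and of how reassigning document identifiers can smooth out the gap values, which is what makes them friendlier to prefix-free codes. The posting list and the reassignment mapping are invented examples.

```python
def to_dgaps(docids):
    """First docid followed by the differences between consecutive docids."""
    return [docids[0]] + [b - a for a, b in zip(docids, docids[1:])]

postings = [2, 100, 103, 880, 881, 884]
print(to_dgaps(postings))            # [2, 98, 3, 777, 1, 3]: wildly varying gaps

# A hypothetical reassignment gives these related documents nearby identifiers,
# so the same posting list now produces small, uniform gaps.
remap = {2: 10, 100: 11, 103: 12, 880: 13, 881: 14, 884: 15}
reassigned = sorted(remap[d] for d in postings)
print(to_dgaps(reassigned))          # [10, 1, 1, 1, 1, 1]
```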

9.
吴育芳  陆春华 《晋图学刊》2010,(3):34-36,49
Building on an introduction to Web mining, this paper focuses on the concept, process, and key techniques of Web text mining, including text feature representation and extraction, and text classification and clustering.

10.
A Full-Text Retrieval Engine Based on Native XML
王弘蔚  肖诗斌 《情报学报》2003,22(5):550-556
With the growing popularity of XML, demand for XML-based full-text retrieval applications is expanding rapidly. In these applications, native XML databases are the direction of development. Although commercial native XML databases have appeared, their full-text retrieval performance is still unsatisfactory. This paper proposes a method: within the framework of a traditional inverted index, XML tags are indexed as well, so that a full-text database can store, index, retrieve, and output XML documents in a native way, becoming a native XML full-text database in the true sense, one that retains the superior performance of traditional full-text databases while meeting the needs of native-XML-based applications.

11.
This paper proposes a new data structure and new algorithms for index construction and retrieval in a Chinese-character full-text retrieval system. The design was implemented and applied to searching the titles and abstracts of a Chinese chemical literature database. Index construction time, space consumption, and retrieval response time were measured; the number of high-frequency characters and the index space consumption were computed for documents in different length ranges; and the relationship between index expansion ratio and document length is discussed.

12.
Index maintenance strategies employed by dynamic text retrieval systems based on inverted files can be divided into two categories: merge-based and in-place update strategies. Within each category, individual update policies can be distinguished based on whether they store their on-disk posting lists in a contiguous or in a discontiguous fashion. Contiguous inverted lists, in general, lead to higher query performance, by minimizing the disk seek overhead at query time, while discontiguous inverted lists lead to higher update performance, requiring less effort during index maintenance operations. In this paper, we focus on retrieval systems with high query load, where the on-disk posting lists have to be stored in a contiguous fashion at all times. We discuss a combination of re-merge and in-place index update, called Hybrid Immediate Merge. The method performs strictly better than the re-merge baseline policy used in our experiments, as it leads to the same query performance, but substantially better update performance. The actual time savings achievable depend on the size of the text collection being indexed; a larger collection results in greater savings. In our experiments, variations of Hybrid Immediate Merge were able to reduce the total index update overhead by up to 73% compared to the re-merge baseline.
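For orientation, here is a bare-bones Python sketch of the re-merge step that both the baseline and Hybrid Immediate Merge build on: postings accumulated in memory since the last merge are combined with the on-disk index into a new index whose lists remain contiguous. The dict-of-lists layout stands in for the real on-disk posting-list format, and the example terms are invented.

```python
def remerge(on_disk, in_memory):
    """Merge two term -> sorted-docid-list indexes into a fresh contiguous index."""
    merged = {}
    for term in sorted(set(on_disk) | set(in_memory)):
        # Docids indexed earlier are smaller, so on-disk postings always precede
        # in-memory ones and plain concatenation keeps each list sorted.
        merged[term] = on_disk.get(term, []) + in_memory.get(term, [])
    return merged

old_index = {"index": [1, 4], "merge": [2]}
new_docs = {"index": [7], "update": [6, 7]}
print(remerge(old_index, new_docs))
# {'index': [1, 4, 7], 'merge': [2], 'update': [6, 7]}
```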

13.
Research on a Model for Text Information Visualization
周宁  张会平  金大卫 《情报学报》2007,26(1):155-160
Targeting the characteristics of text information resources, this paper proposes a general XML-based model for text information visualization. It describes the model's three object spaces (the XML document repository, the XML feature repository, and the visualization objects) and its three key techniques (Chinese word segmentation, text segmentation, and visualization mapping), and uses examples to verify the model's practicality, extensibility, and portability.

14.
Sentence-Based Text Representation and Chinese Text Classification
Text mining is a key technology for information resource management. The vector space model is a mature text representation model in text mining; it usually takes words or phrases as features, but such features carry little semantic information. To enable content-based text mining, this paper raises the granularity of text segmentation from words or phrases to sentences: a text is represented as a bag of sentences, text similarity is defined via sentence similarity, and the KNN algorithm is used for Chinese text classification to verify the model's feasibility. Experiments show that the sentence-bag KNN algorithm achieves satisfactory average precision (92.12%) and recall (92.01%).
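A toy Python rendering of the sentence-bag idea: texts are bags of sentences, sentence similarity is approximated here by word overlap (Jaccard), text similarity averages each sentence's best match in the other text, and a small KNN vote assigns the label. Segmentation, weighting, and the exact similarity measure used in the paper are all simplified assumptions.

```python
def sent_sim(a, b):
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def text_sim(doc_a, doc_b):
    # Average, over sentences of doc_a, of the best-matching sentence in doc_b.
    return sum(max(sent_sim(s, t) for t in doc_b) for s in doc_a) / len(doc_a)

def knn_classify(doc, train, k=3):
    nearest = sorted(train, key=lambda ex: text_sim(doc, ex[0]), reverse=True)[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

train = [(["stocks fell sharply", "markets react to rates"], "finance"),
         (["the team won the final", "a late goal decided it"], "sports"),
         (["rates rise again", "banks report profits"], "finance")]
query = ["markets fell after the rates decision"]
print(knn_classify(query, train, k=3))   # -> finance
```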

15.
Locating and Recognizing Text in WWW Images
The explosive growth of the World Wide Web has resulted in a distributed database consisting of hundreds of millions of documents. While existing search engines index a page based on the text that is readily extracted from its HTML encoding, an increasing amount of the information on the Web is embedded in images. This situation presents a new and exciting challenge for the fields of document analysis and information retrieval, as WWW image text is typically rendered in color and at very low spatial resolutions. In this paper, we survey the results of several years of our work in the area. For the problem of locating text in Web images, we describe a procedure based on clustering in color space followed by a connected-components analysis that seems promising. For character recognition, we discuss techniques using polynomial surface fitting and fuzzy n-tuple classifiers. Also presented are the results of several experiments that demonstrate where our methods perform well and where more work needs to be done. We conclude with a discussion of topics for further research.

16.
This paper combines latent semantic indexing (LSI) theory with the support vector machine method, extracting features that capture the semantic relationship between each dimension of the text vector and the text, and builds a complete LSI-based SVM text classification model. The relationship between this method, the dimensionality of word segmentation, and the choice of the SVM penalty factor is analyzed. Building on the NN-SVM classification algorithm, an improved algorithm, KCNN-SVM, is proposed: it prunes confusing samples by checking whether a sample and its nearest neighbor belong to the same class and by computing the sample's average distance in kernel space to its k same-class nearest neighbors, and it is used to prune the dimensionality-reduced training set. Experiments show that text classification with the new model is less sensitive to the segmentation dimensionality and the SVM penalty factor than a plain support vector machine, and achieves higher classification accuracy.

17.
This paper uses Borland IDAPI relational database integration technology to integrate multiple relational database systems, managed with the information storage and retrieval software QUICKIMS, to realize full-text retrieval over relational databases. It integrates the data structures, data access methods, and data types of PC-based and SQL-based relational databases; transfers base tables and the results of single-database or multi-database queries, generating the files and indexes QUICKIMS requires; and provides Boolean search, prefix-match search, field-restricted search, adjacency search, and positional search over the relational data. Converting relational database data dynamically reduces wasted space.

18.
We consider a multi-stage retrieval architecture consisting of a fast, “cheap” candidate generation stage, a feature extraction stage, and a more “expensive” reranking stage using machine-learned models. In this context, feature extraction can be accomplished using a document vector index, a mapping from document ids to document representations. We consider alternative organizations of such a data structure for efficient feature extraction: design choices include how document terms are organized, how complex term proximity features are computed, and how these structures are compressed. In particular, we propose a novel document-adaptive hashing scheme for compactly encoding term ids. The impact of alternative designs on both feature extraction speed and memory footprint is experimentally evaluated. Overall, results show that our architecture is comparable in speed to a traditional positional inverted index but requires less memory, and offers additional advantages in terms of flexibility.

19.
Automatic Analysis of Chinese Text Structure
薛翠芳  郭炳炎 《情报学报》2000,19(4):319-325
This paper applies the vector space model to determine the relatedness of content between text paragraphs, thereby realizing automatic analysis of text topics: it identifies the subtopics that make up a text's main topic, and starting from these subtopics it performs automatic summarization, exploring a new approach to automatic abstracting. In addition, automatic analysis of text structure can determine the type of a text's structure and provide useful information for full-text retrieval and other information processing techniques.
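A compact Python example of the core step the abstract relies on: each paragraph is represented as a term-frequency vector and the relatedness of adjacent paragraphs is measured by cosine similarity, so a drop in similarity suggests a subtopic boundary. Whitespace tokenization is a placeholder for proper Chinese word segmentation, and the paragraphs are invented.

```python
from collections import Counter
from math import sqrt

def cosine(p, q):
    a, b = Counter(p.split()), Counter(q.split())
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

paragraphs = ["index compression saves disk space",
              "compression also speeds up index reads",
              "the museum opened a new exhibition"]
for i in range(len(paragraphs) - 1):
    # High similarity suggests the topic continues; a sharp drop marks a boundary.
    print(i, i + 1, round(cosine(paragraphs[i], paragraphs[i + 1]), 3))
```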

20.
Multi-Level Web Text Classification
凌云  刘军  王勋 《情报学报》2005,24(6):684-689
Traditional text classification is mostly based on the vector space model and uses a flat category scheme, ignoring the hierarchical relationships among categories. Based on LSA theory, this paper proposes a multi-level Web text classification method. When building class models, a model is built for each group of categories that share the same parent, layer by layer from the bottom of the category tree upward; when classifying, documents are classified top-down in the latent semantic (LS) space using the corresponding class models. This approach avoids the difficulty of performing singular value decomposition on the high-dimensional matrix of a single LSA model, reflects the semantic relationships among terms in Web text, and takes into account how terms are presented on the page. Experiments show that the multi-level method outperforms classification over a flat category scheme in both recall and precision.
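A minimal Python sketch of the per-node latent-semantic step the abstract describes: build a small term-by-document matrix from the training pages under one node of the category tree, take a truncated SVD, and label a new page by its nearest training page in the reduced space. The paper builds per-category class models; plain 1-NN stands in here for simplicity, and the matrix, labels, and query are invented.

```python
import numpy as np

def classify(query_vec, term_doc, labels, k=2):
    """Project training docs and the query into a k-dimensional LS space,
    return the label of the most similar training document."""
    u, _, _ = np.linalg.svd(term_doc, full_matrices=False)
    uk = u[:, :k]                       # term-to-latent-topic mapping
    docs_ls = uk.T @ term_doc           # (k, n) coordinates of training docs
    query_ls = uk.T @ query_vec         # fold the new document in
    sims = (docs_ls.T @ query_ls) / (
        np.linalg.norm(docs_ls, axis=0) * np.linalg.norm(query_ls) + 1e-12)
    return labels[int(np.argmax(sims))]

# Rows are terms, columns are training pages under one node of the category tree.
X = np.array([[3, 2, 0, 0],    # "index"
              [2, 3, 0, 1],    # "compression"
              [0, 0, 3, 2],    # "recipe"
              [0, 1, 2, 3]])   # "cooking"
labels = ["IR", "IR", "food", "food"]
query = np.array([1, 2, 0, 0])           # a page using only the IR-related terms
print(classify(query, X, labels))        # -> IR
```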
