Similar Documents
20 similar documents found.
1.
Web search algorithms that rank Web pages by examining the link structure of the Web are attractive from both theoretical and practical standpoints. Today's prevailing link-based ranking algorithms rank Web pages by using the dominant eigenvector of certain matrices, such as the co-citation matrix or variations thereof. Recent analyses of ranking algorithms have focused attention on the case where the corresponding matrices are irreducible, thus avoiding the singularities of reducible matrices. Consequently, rank analysis has concentrated on authority connected graphs, which are graphs whose co-citation matrix is irreducible (after deleting zero rows and columns). Such graphs conceptually correspond to thematically related collections, in which most pages pertain to a single, dominant topic of interest.

A link-based search algorithm A is rank-stable if minor changes in the link structure of the input graph, which is usually a subgraph of the Web, do not affect the ranking it produces; algorithms A and B are rank-similar if they produce similar rankings. These concepts were introduced and studied recently for various existing search algorithms.

This paper studies the rank-stability and rank-similarity of three link-based ranking algorithms (PageRank, HITS, and SALSA) in authority connected graphs. For this class of graphs, we show that neither HITS nor PageRank is rank-stable. We then show that HITS and PageRank are not rank-similar on this class, nor is either of them rank-similar to SALSA.

This research was supported by the Fund for the Promotion of Research at the Technion, and by the Barnard Elkin Chair in Computer Science.

2.
This paper is concerned with Markov processes for computing page importance, a key factor in Web search. Many algorithms such as PageRank and its variations have been proposed for computing this quantity in different scenarios, using different data sources, and under different assumptions. A question then arises as to whether these algorithms can be explained in a unified way, and whether there is a general guideline for designing new algorithms for new scenarios. To answer these questions, we introduce a General Markov Framework in this paper. Under the framework, a Web Markov Skeleton Process is used to model the random walk conducted by the web surfer on a given graph. Page importance is then defined as the product of two factors: page reachability, the average probability that the surfer arrives at the page, and page utility, the average value that the page gives to the surfer in a single visit. These two factors can be computed as the stationary probability distribution of the corresponding embedded Markov chain and the mean staying time on each page of the Web Markov Skeleton Process, respectively. We show that this general framework covers many existing algorithms, including PageRank, TrustRank, and BrowseRank, as special cases. We also show that the framework can help us design new algorithms for more complex problems by constructing graphs from new data sources, employing new family members of the Web Markov Skeleton Process, and using new methods to estimate these two factors. In particular, we demonstrate the use of the framework with a new process, named the Mirror Semi-Markov Process, in which the staying time on a page, as a random variable, is assumed to depend on both the current page and its inlink pages. Our experimental results on both a user browsing graph and a mobile web graph validate that the Mirror Semi-Markov Process is more effective than previous models in several tasks, even in the presence of web spam and when the preferential attachment assumption does not hold.
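For a concrete feel for the framework's central definition, the sketch below (not code from the paper; the transition matrix and staying times are invented) computes page importance as the stationary distribution of a toy embedded Markov chain multiplied by the mean staying time on each page:

```python
import numpy as np

# Toy embedded Markov chain over 3 pages (rows sum to 1); values are invented.
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.6, 0.4, 0.0]])
mean_stay = np.array([12.0, 3.0, 7.0])  # hypothetical mean staying times (seconds)

# Page reachability: stationary distribution of the embedded chain,
# i.e. the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Page importance = reachability * utility (mean staying time), renormalised.
importance = pi * mean_stay
importance /= importance.sum()
print(importance)
```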

3.
A Web Information Retrieval Method Based on Web Page Segmentation
This paper proposes a Web information retrieval algorithm based on web page content segmentation. Exploiting the semi-structured nature of web pages, the algorithm partitions a page into regions according to HTML tags and the page's content. On the basis of an HTML tag tree, nodes are merged using content similarity and visual similarity. During retrieval and ranking, region information is fully exploited to rank the relevant results according to the user's query.

4.
We evaluate article-level metrics along two dimensions. Firstly, we analyse metrics' ranking bias in terms of fields and time. Secondly, we evaluate their performance based on test data that consists of (1) papers that have won high-impact awards and (2) papers that have won prizes for outstanding quality. We consider different citation impact indicators and indirect ranking algorithms in combination with various normalisation approaches (mean-based, percentile-based, co-citation-based, and post hoc rescaling). We execute all experiments on two publication databases which use different field categorisation schemes (author-chosen concept categories and categories based on papers' semantic information).

In terms of bias, we find that citation counts are always less time biased but always more field biased compared to PageRank. Furthermore, rescaling paper scores by a constant number of similarly aged papers reduces time bias more effectively compared to normalising by calendar years. We also find that percentile citation scores are less field and time biased than mean-normalised citation counts.

In terms of performance, we find that time-normalised metrics identify high-impact papers better shortly after their publication compared to their non-normalised variants. However, after 7 to 10 years, the non-normalised metrics perform better. A similar trend exists for the set of high-quality papers, where these performance cross-over points occur after 5 to 10 years.

Lastly, we also find that personalising PageRank with papers' citation counts reduces time bias but increases field bias. Similarly, using papers' associated journal impact factors to personalise PageRank increases its field bias. In terms of performance, PageRank should always be personalised with papers' citation counts and time-rescaled for citation windows smaller than 7 to 10 years.
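The final recommendation, personalising PageRank with papers' citation counts, can be sketched as follows; the citation graph and counts are invented, and networkx's personalization parameter is one plausible way to realise it:

```python
import networkx as nx

# Hypothetical paper citation graph: an edge u -> v means paper u cites paper v.
G = nx.DiGraph([("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p4", "p3")])
citation_counts = {"p1": 0, "p2": 1, "p3": 3, "p4": 0}

# Personalise the teleportation vector with citation counts (smoothed by +1
# so papers without citations keep a nonzero teleport probability).
personalization = {p: c + 1 for p, c in citation_counts.items()}
scores = nx.pagerank(G, alpha=0.85, personalization=personalization)
print(scores)
```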

5.
Considering how multimedia links are distributed within web pages, the relevant parameters of two typical focused-search algorithms, PageRank and Shark-Search, are improved, and the improved algorithms are used to compute the similarity between multimedia pages and the topic from the perspectives of page content and page links. Experimental results show that the improved Shark-Search multimedia focused-search algorithm improves the efficiency of multimedia topic search more effectively than the improved PageRank algorithm, and is also better suited to topic-focused search of networked multimedia resources.

6.
We evaluate author impact indicators and ranking algorithms on two publication databases using large test data sets of well-established researchers. The test data consists of (1) ACM fellowships and (2) various lifetime achievement awards. We also evaluate different approaches to dividing the credit of papers among co-authors and analyse the impact of self-citations. Furthermore, we evaluate different graph normalisation approaches for when PageRank is computed on author citation graphs.

We find that PageRank outperforms citation counts in identifying well-established researchers. This holds true when PageRank is computed on author citation graphs but also when PageRank is computed on paper graphs and paper scores are divided among co-authors. In general, the best results are obtained when co-authors receive an equal share of a paper's score, independent of which impact indicator is used to compute paper scores. The results also show that removing author self-citations improves the results of most ranking metrics. Lastly, we find that it is more important to personalise the PageRank algorithm appropriately on the paper level than to decide whether to include or exclude self-citations. On the author level, however, we find that author graph normalisation is more important than personalisation.
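A minimal sketch of the credit-division scheme the abstract favours (equal shares for all co-authors); the paper scores and author lists are invented:

```python
from collections import defaultdict

# Hypothetical paper scores (e.g., citation counts or PageRank values).
paper_scores = {"paper_a": 9.0, "paper_b": 4.0}
authors = {"paper_a": ["alice", "bob", "carol"], "paper_b": ["bob"]}

# Each co-author receives an equal share of the paper's score.
author_scores = defaultdict(float)
for paper, score in paper_scores.items():
    share = score / len(authors[paper])
    for a in authors[paper]:
        author_scores[a] += share

print(dict(author_scores))  # alice: 3.0, bob: 7.0, carol: 3.0
```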

7.
The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of university rankings have been proposed to quantify it, yet the debate on what rankings are exactly measuring endures.

To address the issue, we measure a quantitative and reliable proxy of the academic reputation of a given institution and compare our findings with well-established impact indicators and academic rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and apply the PageRank algorithm to the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so the PageRank algorithm is expected to yield a rank which reflects the reputation of an academic institution in a specific field. Given the volume of the data analysed, our findings are statistically sound and less prone to bias than, for instance, the ad hoc surveys often employed by ranking bodies to attain similar outcomes. The approach proposed in our paper may contribute to enhancing ranking methodologies by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.

8.
Using co-word analysis, papers in the link analysis field were retrieved from the CNKI database and high-frequency keywords were identified; a keyword co-word matrix was built with Bicomb, and factor analysis and cluster analysis were carried out in SPSS to explore the current state and hot topics of link analysis research in China. The study finds that the methods applied in link analysis mainly include citation analysis, co-link analysis, visualisation, and social network analysis; that link analysis algorithms mainly include the PageRank algorithm, the HITS algorithm, and web page ranking; and that applied research concentrates on evaluating networked information resources, the web influence of websites, and universities.

9.
Web spam pages exploit the biases of search engine algorithms to obtain rankings higher than they deserve in search results, using several types of spamming techniques. Many web spam demotion algorithms have been developed to combat spam via the web link structure, from which a goodness or badness score for each web page is evaluated. Those scores are then used to identify spam pages or to demote their rankings in search engine results. However, most published spam demotion algorithms differ from their base models by only very limited improvements and still suffer from some common score manipulation methods. The lack of a general framework for this field makes the task of designing high-performance spam demotion algorithms very inefficient. In this paper, we propose a unified score propagation model for web spam demotion algorithms by abstracting the score propagation process of the relevant models with a forward score propagation function and a backward score propagation function, each of which can further be expressed as three sub-functions: a splitting function, an accepting function, and a combination function. On the basis of the proposed model, we develop two new web spam demotion algorithms, named Supervised Forward and Backward score Ranking (SFBR) and Unsupervised Forward and Backward score Ranking (UFBR). Our experiments, conducted on three large-scale public datasets, show that (1) SFBR is very robust and clearly outperforms other algorithms and (2) UFBR obtains results comparable to some well-known supervised algorithms in the spam demotion task even though it is unsupervised.
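The sub-function decomposition is abstract, so here is one possible reading of a single forward propagation pass, in the style of a TrustRank-like computation from a trusted seed set; the splitting, accepting, and combination functions shown are illustrative choices, not the authors' definitions:

```python
import numpy as np

# Toy link structure: adjacency[u][v] = 1 if page u links to page v (invented).
adjacency = np.array([[0, 1, 1],
                      [0, 0, 1],
                      [1, 0, 0]], dtype=float)
seed = np.array([1.0, 0.0, 0.0])  # hypothetical trusted-seed scores

def split(scores, adj):
    """Splitting: each page divides its score equally among its out-links."""
    out_deg = adj.sum(axis=1, keepdims=True)
    return np.divide(adj * scores[:, None], out_deg,
                     out=np.zeros_like(adj), where=out_deg > 0)

def accept(contrib, damping=0.85):
    """Accepting: each page accepts a damped fraction of incoming score."""
    return damping * contrib.sum(axis=0)

def combine(accepted, seed, damping=0.85):
    """Combination: merge propagated score with the static seed scores."""
    return accepted + (1 - damping) * seed

scores = seed.copy()
for _ in range(50):  # iterate forward propagation to a fixed point
    scores = combine(accept(split(scores, adjacency)), seed)
print(scores)
```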

10.
Ranking is a crucial task in web information retrieval systems. The dynamic nature of information resources, as well as continuous changes in users' information demands, has made it very difficult to provide effective methods for data mining and document ranking. Addressing these challenges, this paper proposes an adaptive ranking algorithm named GPRank. The algorithm, a function discovery framework, utilises relatively simple features of web documents to produce suitable rankings using a multi-layer/multi-population genetic programming architecture. Experiments illustrate that GPRank performs better than well-known ranking techniques, and also better than its full-mode edition.

11.
This study uses citation data and survey data for 55 library and information science journals to identify three factors underlying a set of 11 journal ranking metrics (six citation metrics and five stated preference metrics). The three factors, which are in effect three composite rankings, represent (1) the citation impact of a typical article, (2) subjective reputation, and (3) the citation impact of the journal as a whole (all articles combined). Together, they account for 77% of the common variance within the set of 11 metrics. Older journals (those founded before 1953) and nonprofit journals tend to have high reputation scores relative to their citation impact. Unlike previous research, this investigation shows no clear evidence of a distinction between the journals of greatest importance to scholars and those of greatest importance to practitioners. Neither group's subjective journal rankings are closely related to citation impact.

12.
Exploiting the semi-structured information of web documents, this paper proposes a DOM-based web text segmentation algorithm. The algorithm mines the HTML tag information that controls the structure and presentation of page content to build an HTML DOM tree. It first adapts a traditional flat-text segmentation method so that it applies to web text, and then uses nodes of the DOM tree to smooth the flat segmentation results. Preliminary experiments show that the algorithm can effectively improve the accuracy of web text segmentation.

13.
Most ranking algorithms are based on the optimization of some loss function, such as the pairwise loss. However, these loss functions are often different from the criteria adopted to measure the quality of web page ranking results. To overcome this problem, we propose an algorithm which aims at directly optimizing popular measures such as Normalized Discounted Cumulative Gain and Average Precision. The basic idea is to minimize a smooth approximation of these measures with gradient descent. Crucial to this kind of approach is the choice of the smoothing factor. We provide various theoretical analyses of that choice and propose an annealing algorithm to iteratively minimize a less and less smoothed approximation of the measure of interest. Results on the LETOR benchmark datasets show that the proposed algorithm achieves state-of-the-art performance.
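As a toy illustration of the general idea rather than the authors' algorithm, the snippet below smooths a DCG-like gain by replacing hard ranks with sigmoid-smoothed ranks, then anneals the smoothing factor while running gradient ascent on the smoothed gain (equivalently, descent on its negative); all data values are invented:

```python
import numpy as np

relevance = np.array([3.0, 1.0, 0.0, 2.0])   # hypothetical relevance labels
scores = np.array([0.10, 0.40, 0.20, 0.30])  # initial model scores

def soft_dcg(s, rel, sigma):
    """DCG with sigmoid-smoothed ranks; smaller sigma = closer to true DCG."""
    diff = (s[None, :] - s[:, None]) / sigma
    soft_rank = 0.5 + (1.0 / (1.0 + np.exp(-diff))).sum(axis=1)
    return ((2.0 ** rel - 1.0) / np.log2(1.0 + soft_rank)).sum()

def numeric_grad(f, s, eps=1e-6):
    """Finite-difference gradient, adequate for a toy example."""
    g = np.zeros_like(s)
    for i in range(len(s)):
        e = np.zeros_like(s)
        e[i] = eps
        g[i] = (f(s + e) - f(s - e)) / (2 * eps)
    return g

# Annealing: gradient-ascend a less and less smoothed objective.
for sigma in [1.0, 0.3, 0.1]:
    for _ in range(100):
        scores += 0.01 * numeric_grad(lambda s: soft_dcg(s, relevance, sigma), scores)
print(scores)  # higher scores should now align with higher relevance
```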

14.
Several Common Ranking Algorithms Used by Search Engines
常璐, 夏祖奇. 《图书情报工作》, 2003, 47(6): 70-73, 88.
This paper introduces several well-known search engine ranking algorithms: term frequency and position weighting, Direct Hit, PageRank, and paid-placement (bid-based) ranking services. It focuses on the factors that influence them and their respective strengths and weaknesses, and closes with a brief analysis and comparison.

15.
A Brief Introduction to the Principles of the PageRank Algorithm
Building on an introduction to the basic idea and basic formula of the PageRank algorithm and a worked computation example, this paper describes how the algorithm can be used to raise a page's PR value, then points out the algorithm's shortcomings and analyses its development trends.
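The basic formula the abstract refers to is commonly written as PR(p) = (1 - d)/N + d * sum over pages q linking to p of PR(q)/C(q), where C(q) is the out-degree of q. A short power-iteration sketch on an invented four-page graph:

```python
# Power-iteration sketch of the classic PageRank formula:
# PR(p) = (1 - d) / N + d * sum(PR(q) / outdegree(q) for q linking to p)
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}  # toy graph
d, N = 0.85, len(links)
pr = {p: 1.0 / N for p in links}

for _ in range(50):
    new = {}
    for p in links:
        incoming = sum(pr[q] / len(links[q]) for q in links if p in links[q])
        new[p] = (1 - d) / N + d * incoming
    pr = new

print(pr)  # C should rank highest: three pages link to it
```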

16.
Citation analysis is a distinctive research method of traditional bibliometrics and scientometrics. This paper analyses and discusses the development and application of citation analysis in the web environment, mainly from the aspects of web link analysis research, the development of search engine ranking algorithms based on page link analysis, and the compilation of new web citation indexing tools, with the aim of forming a reasonable understanding and appraisal of the citation analysis method and its value.

17.
Search engine optimization, or the practice of designing a web site so that it rises to the top of the results page when users search for particular keywords or phrases, has become so prevalent on the modern web that it has a significant influence on Google search results. This article examines the techniques used by search engine optimization practitioners, the difference between "white hat" and "black hat" optimization tactics, and why it is important for library staff to understand these techniques and their impact on search engine results pages. It also looks at ways that library staff can help their users develop awareness of the factors that influence search results and how to better assess the quality and relevance of results listings.

18.
This paper is concerned with a framework to compute the importance of webpages by using the real browsing behaviors of Web users. In contrast, many previous approaches like PageRank compute page importance through the use of the hyperlink graph of the Web. Recently, people have realized that the hyperlink graph is incomplete and inaccurate as a data source for determining page importance, and have proposed using the real behaviors of Web users instead. In this paper, we propose a formal framework to compute page importance from user behavior data (which covers some previous works as special cases). First, we use a stochastic process to model the browsing behaviors of Web users. According to an analysis of hundreds of millions of real records of user behavior, we justify that the process is actually a continuous-time time-homogeneous Markov process, and that its stationary probability distribution can be used as the measure of page importance. Second, we propose a number of ways to estimate the parameters of the stochastic process from real data, which result in a group of algorithms for page importance computation (all referred to as BrowseRank). Our experimental results show that the proposed algorithms can outperform baseline methods such as PageRank and TrustRank in several tasks, demonstrating the advantage of the proposed framework.
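One plausible (invented) illustration of the parameter-estimation step: deriving empirical transition probabilities and mean staying times from toy browsing sessions, the two ingredients such a continuous-time model needs:

```python
from collections import defaultdict

# Hypothetical browsing sessions: (page, dwell_time_seconds) in visit order.
sessions = [
    [("A", 30), ("B", 5), ("C", 60)],
    [("B", 8), ("C", 45), ("A", 20)],
]

transitions = defaultdict(lambda: defaultdict(int))
dwell = defaultdict(list)
for session in sessions:
    for (page, t), (nxt, _) in zip(session, session[1:]):
        transitions[page][nxt] += 1
        dwell[page].append(t)
    last_page, last_t = session[-1]
    dwell[last_page].append(last_t)

# Empirical transition probabilities of the embedded chain.
P = {p: {q: n / sum(outs.values()) for q, n in outs.items()}
     for p, outs in transitions.items()}
# Mean staying time per page.
mean_stay = {p: sum(ts) / len(ts) for p, ts in dwell.items()}
print(P, mean_stay)
```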

19.
The rise of Web 2.0 applications has pushed information science from "document organisation" toward "knowledge organisation". Web tagging, one of the important Web 2.0 applications, has become a common way for the public to organise knowledge, yet existing tag ranking methods struggle to meet the needs of knowledge organisation. Building on a three-core collaborative tagging model, and fully considering the relationships between tags and users, between tags themselves, and between tags and documents, this paper proposes a tag ranking method that combines HITS with random jumps. The method exploits the mutual reinforcement between high-quality tags and high-quality users, and uses similarity between tags to find high-quality related tags, effectively improving the quality of tag ranking. Experimental results on a Delicious dataset show that the method considerably improves the accuracy of tag ranking.
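A rough sketch of the mutual-reinforcement idea described above: tag quality and user quality reinforce each other HITS-style over a user-tag usage matrix, with a random-jump (damping) component for stability. The matrix and damping value are invented, and this is not the paper's exact formulation:

```python
import numpy as np

# Hypothetical user-tag matrix: M[u, t] = times user u applied tag t.
M = np.array([[3, 1, 0],
              [2, 0, 1],
              [0, 4, 2]], dtype=float)
d = 0.85  # random-jump damping, as in PageRank
users, tags = M.shape
u = np.ones(users) / users
t = np.ones(tags) / tags

for _ in range(50):
    # High-quality tags are used by high-quality users, and vice versa,
    # blended with a uniform random-jump component.
    t = d * (M.T @ u) + (1 - d) / tags
    u = d * (M @ t) + (1 - d) / users
    t /= t.sum()
    u /= u.sum()

print(t)  # tag quality scores
```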

20.
An Anchor Text Similarity Computation Method Based on Source Page Quality: LAAT
陆一鸣, 胡健, 马范援. 《情报学报》, 2005, 24(5): 548-554.
Anchor texts, as descriptions of a target page, are scattered across different source pages and vary in quality. Drawing on results from hyperlink analysis algorithms, this paper proposes an anchor text similarity computation method based on source page quality, called LAAT (Link Aid Anchor Text). Experiments show that using source page quality effectively combines the anchor texts from the various source pages, thereby improving retrieval performance.
