Similar Documents
 20 similar documents found (query time: 31 ms)
1.
Search sessions consist of a person presenting a query to a search engine, followed by that person examining the search results, selecting some of those search results for further review, possibly following some series of hyperlinks, and perhaps backtracking to previously viewed pages in the session. The series of pages selected for viewing in a search session, sometimes called the click data, is intuitively a source of relevance feedback information to the search engine. We are interested in how that relevance feedback can be used to improve search result quality for all users, not just the current user. For example, the search engine could learn which documents are frequently visited when certain search queries are given.
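The "learn which documents are frequently visited per query" idea can be sketched as a count-and-rerank loop. This is an illustrative toy, not the paper's method; the session format, function names, and data are all assumptions:

```python
from collections import Counter

def learn_click_counts(sessions):
    """Aggregate click data: how often each document was clicked
    for each query, across all users' search sessions."""
    counts = Counter()
    for query, clicked_docs in sessions:
        for doc in clicked_docs:
            counts[(query, doc)] += 1
    return counts

def rerank(query, results, counts):
    """Reorder results so documents frequently clicked for this
    query rise; ties keep the engine's original order (stable sort)."""
    return sorted(results, key=lambda doc: -counts[(query, doc)])

sessions = [
    ("jaguar", ["cars.html", "zoo.html"]),
    ("jaguar", ["cars.html"]),
    ("python", ["lang.html"]),
]
counts = learn_click_counts(sessions)
print(rerank("jaguar", ["zoo.html", "cars.html"], counts))
# cars.html was clicked twice for "jaguar", so it moves to the front
```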

2.
Topic distillation is one of the main information needs when users search the Web. Previous approaches to topic distillation treat a single page as the basic search unit, which does not fully utilize the structure information of the Web. In this paper, we propose a novel concept for topic distillation, named sub-site retrieval, in which the basic search unit is a sub-site instead of a single page. A sub-site is a subset of a website, consisting of a structured collection of pages. The key issues in sub-site retrieval are (1) extracting effective features to represent a sub-site using both content and structure information, and (2) delivering the sub-site-based retrieval results with a friendly and informative user interface. For the first point, we propose the Punished Integration algorithm, which is based on modeling the growth of websites. For the second, we design a user interface that better illustrates the search results of sub-site retrieval. Tested on the topic distillation tasks of TREC 2003 and 2004, sub-site retrieval leads to significant improvements in retrieval performance over previous single-page methods. Furthermore, time-complexity analysis shows that sub-site retrieval can be integrated into the index component of search engines.

3.
The goal of the study presented in this article is to investigate to what extent the classification of a web page by a single genre matches the users’ perspective. The extent of agreement on a single genre label for a web page can help understand whether there is a need for a different classification scheme that overrides the single-genre labelling. My hypothesis is that a single genre label does not account for the users’ perspective. In order to test this hypothesis, I submitted a restricted number of web pages (25 web pages) to a large number of web users (135 subjects) asking them to assign only a single genre label to each of the web pages. Users could choose from a list of 21 genre labels, or select one of the two ‘escape’ options, i.e. ‘Add a label’ and ‘I don’t know’. The rationale was to observe the level of agreement on a single genre label per web page, and draw some conclusions about the appropriateness of limiting the assignment to only a single label when doing genre classification of web pages. Results show that users largely disagree on the label to be assigned to a web page.

4.
Data mining is the discovery of implicit regularities in large volumes of data. Starting from Web data mining, this paper presents a fairly systematic study of personalized recommendation methods for website optimization, analyzes the associations among the pages a user browses by applying appropriate association rules, and finally validates the performance of the personalized recommendation service.

5.
Browser user experience currently receives unprecedented attention, and front-end engineering issues draw ever more interest. The way CSS selectors are written determines how many matching operations a browser must perform; without a thorough understanding of this, one can easily write highly inefficient style rules that seriously degrade page performance. This paper focuses on how CSS selectors are matched, in the hope of providing a useful reference for front-end developers.

6.
This paper discusses data preprocessing in mining Web users' access patterns, and proposes an improved algorithm for session identification during preprocessing. The method constructs sessions using three factors: (1) a session-duration threshold determined from prior knowledge; (2) a threshold on the time interval between consecutive page views, determined from the statistical distribution of page access times; (3) page importance, determined from page content and site structure. Experimental results show that, compared with traditional session identification based on a single criterion, this method identifies sessions accurately and is more reasonable and effective.
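The timeout factor in session identification amounts to cutting a user's click stream wherever the gap between consecutive page views exceeds a threshold. A minimal sketch, assuming a list of `(timestamp, page)` events and ignoring the page-importance factor the paper also uses:

```python
def split_sessions(events, gap_threshold=1800):
    """Split one user's page-view log into sessions: a new session
    starts whenever the gap between consecutive views exceeds the
    threshold (seconds; 1800 s = 30 min is a common default)."""
    sessions = []
    current = []
    last_t = None
    for t, page in sorted(events):
        if last_t is not None and t - last_t > gap_threshold:
            sessions.append(current)
            current = []
        current.append(page)
        last_t = t
    if current:
        sessions.append(current)
    return sessions

events = [(0, "/a"), (60, "/b"), (4000, "/c"), (4100, "/d")]
print(split_sessions(events))  # the 3940 s gap splits the log in two
```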

7.
Session-based recommendation aims to predict the items a user will interact with based on historical behaviors in anonymous sessions. It has long faced two challenges: (1) the dynamic change of user intents, which makes user preferences towards items change over time; (2) the uncertainty of user behaviors, which adds noise that hinders precise preference learning. Together these prevent a recommender system from capturing users' real intents. Existing methods have not properly solved these problems, since they either ignore useful factors such as temporal information when building item embeddings, or do not explicitly filter out noisy clicks in sessions. To tackle these issues, we propose a novel Dynamic Intent-aware Iterative Denoising Network (DIDN) for session-based recommendation. Specifically, to model the dynamic intents of users, we present a dynamic intent-aware module that incorporates item-aware, user-aware, and temporal-aware information to learn dynamic item embeddings. A novel iterative denoising module is then devised to explicitly filter out noisy clicks within a session. In addition, we mine collaborative information to further enrich the session semantics. Extensive experimental results on three real-world datasets demonstrate the effectiveness of the proposed DIDN. Specifically, DIDN improves over the best baselines by 1.66%, 1.75%, and 7.76% in terms of P@20, and by 1.70%, 2.20%, and 10.48% in terms of MRR@20 on the three datasets.

8.
Broken hypertext links are a frequent problem on the Web. Sometimes the page a link points to has disappeared forever, but in many other cases it has simply been moved to another location on the same web site or to another one. In some cases the page, besides being moved, is updated, so that it differs somewhat from the original while remaining quite similar. In all these cases a tool that provides pages highly related to the broken link can be very useful, since we can then select the most appropriate one. The relationship between a broken link and its possible replacement pages can be defined as a function of many factors. In this work we employ several resources, both in the context of the link and on the Web, to look for pages related to a broken link. From the context of a link, we analyze several sources of information, such as the anchor text, the text surrounding the anchor, the URL, and the page containing the link. We also extract information about a link from the Web infrastructure, such as search engines, Internet archives, and social tagging systems. We combine all of these resources to design a system that recommends pages that can be used to recover the broken link. A novel methodology is presented to evaluate the system without resorting to user judgments, which increases the objectivity of the results and helps to tune the parameters of the algorithm. We have also compiled a collection of web pages with true broken links, which humans have used to test the full system.
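One of the context signals mentioned (the anchor text of the broken link versus a candidate page's text) can be sketched as a simple term-overlap score. This is a toy illustration of a single signal, not the paper's combination of resources; URLs and texts are invented:

```python
def term_overlap(anchor_text, candidate_text):
    """Jaccard overlap between the broken link's anchor text and a
    candidate page's text -- one of several context signals that can
    be combined (anchor, surrounding text, URL, containing page)."""
    a = set(anchor_text.lower().split())
    b = set(candidate_text.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(anchor_text, candidates):
    """Rank candidate replacement pages (url, text) by overlap score."""
    return sorted(candidates, key=lambda c: -term_overlap(anchor_text, c[1]))

candidates = [
    ("http://ex.org/new", "python tutorial for beginners"),
    ("http://ex.org/other", "cooking recipes"),
]
best = recommend("python tutorial", candidates)[0][0]
```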

9.
With the rapid growth of the Web, the number of pages has expanded dramatically, and exponentially in recent years; search engines face increasingly severe challenges in accurately and quickly finding pages that meet users' needs among this enormous volume. Web page classification is one effective way to address the problem. Topic-based and genre-based classification are its two mainstream approaches, and both effectively improve the retrieval efficiency of search engines. Web genre classification categorizes pages by their form of presentation and intended use. This paper introduces the definition of web page genre and the features commonly used in genre classification research, and surveys several common feature selection methods, classification models, and classifier evaluation methods, giving researchers an overview of web genre classification.

10.
A fast and efficient page ranking mechanism for web crawling and retrieval remains a challenging issue. Recently, several link-based ranking algorithms such as PageRank, HITS, and OPIC have been proposed. In this paper, we propose a novel recursive method based on reinforcement learning, called "DistanceRank", which treats the distance between pages as a punishment when computing the ranks of web pages. The distance is defined as the number of "average clicks" between two pages. The objective is to minimize the punishment, or distance, so that a page with a smaller distance receives a higher rank. Experimental results indicate that DistanceRank outperforms other ranking algorithms in page ranking and crawl scheduling, and its complexity is low. We used the University of California at Berkeley's web for our experiments.
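The "average clicks" notion builds on graph distance between pages. A minimal sketch of plain click distance via breadth-first search follows; DistanceRank itself learns a logarithmic average-clicks distance with reinforcement learning, which is not reproduced here, and the graph is invented:

```python
from collections import deque

def click_distance(graph, source):
    """Fewest clicks from `source` to every reachable page, by BFS
    over the directed link graph {page: [linked pages]}."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

graph = {"A": ["B", "C"], "B": ["C"], "C": ["D"]}
print(click_distance(graph, "A"))  # {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```

A page with small distance from well-known sources would, under the paper's objective, receive a higher rank.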

11.
Pre-adoption expectations often serve as an implicit reference point in users’ evaluation of information systems and are closely associated with their goals of interactions, behaviors, and overall satisfaction. Despite the empirically confirmed impacts, users’ search expectations and their connections to tasks, users, search experiences, and behaviors have been scarcely studied in the context of online information search. To address the gap, we collected 116 sessions from 60 participants in a controlled-lab Web search study and gathered direct feedback on their in-situ expected information gains (e.g., number of useful pages) and expected search efforts (e.g., clicks and dwell time) under each query during search sessions. Our study aims to examine (1) how users’ pre-search experience, task characteristics, and in-session experience affect their current expectations and (2) how user expectations are correlated with search behaviors and satisfaction. Our results with both quantitative and qualitative evidence demonstrate that: (1) user expectation is significantly affected by task characteristics, previous and in-situ search experience; (2) user expectation is closely associated with users’ browsing behaviors and search satisfaction. The knowledge learned about user expectation advances our understanding of users’ search behavioral patterns and their evaluations of interaction experience and will also facilitate the design, implementation, and evaluation of expectation-aware user models, metrics, and information retrieval (IR) systems.

12.
13.
This research is part of an ongoing study to better understand citation analysis on the Web. It builds on Kleinberg's research (J. Kleinberg, R. Kumar, P. Raghavan, P. Rajagopalan, A. Tomkins, Invited survey at the International Conference on Combinatorics and Computing, 1999) that hyperlinks between web pages constitute a web graph structure, and tries to classify different web graphs in a new coordinate space: (out-degree, in-degree). The out-degree coordinate is defined as the number of outgoing links from a given web page; the in-degree coordinate is the number of web pages that point to a given web page. In this coordinate space a metric is built to classify how close or far apart different web graphs are. Kleinberg's algorithm (J. Kleinberg, Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, 1998, pp. 668–677) for discovering “hub” and “authority” web pages is applied in this coordinate space. Some very uncommon phenomena have been discovered and interesting new results interpreted. This study does not look at enhancing web retrieval by adding context information; it only considers web hyperlinks as a source for analyzing citations on the web. The author believes that understanding the underlying web pages as a graph will help design better web algorithms, enhance retrieval and web performance, and recommends using graphs as part of the visual aids for search engine designers.
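Kleinberg's hub/authority computation referenced here is a power iteration over the link graph: a page's authority score sums the hub scores of pages pointing to it, and its hub score sums the authority scores of pages it points to, with normalization each round. A minimal self-contained sketch (graph and node names are illustrative):

```python
def hits(graph, iters=20):
    """HITS power iteration on a directed graph {page: [linked pages]}:
    authority(p) = sum of hub scores of pages linking to p,
    hub(p) = sum of authority scores of pages p links to,
    each vector L2-normalised every round."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        auth = {n: sum(hub[u] for u in nodes if n in graph.get(u, ())) for n in nodes}
        norm = sum(x * x for x in auth.values()) ** 0.5 or 1.0
        auth = {n: x / norm for n, x in auth.items()}
        hub = {n: sum(auth[v] for v in graph.get(n, ())) for n in nodes}
        norm = sum(x * x for x in hub.values()) ** 0.5 or 1.0
        hub = {n: x / norm for n, x in hub.items()}
    return hub, auth

# Two pages both link to "a": "a" becomes the authority, they become hubs.
graph = {"h1": ["a"], "h2": ["a"], "a": []}
hub, auth = hits(graph)
```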

14.
The authors investigate the frequency distribution of the use of image tags in Web pages. Using data sampled from top-level Web pages across five top-level domains and from sample pages within individual websites, the authors model observed patterns in the frequency of image tag usage by fitting the collected data distributions to different theoretical models used in informetrics. Models tested include the modified power law (MPL), Mandelbrot (MDB), generalized Waring (GW), generalized inverse Gaussian–Poisson (GIGP), and generalized negative binomial (GNB) distributions. The GIGP provided the best fit for the data sets of top-level pages across the top-level domains tested. The poor fits of the models to the observed data distributions from specific websites were due to the multimodal nature of those data sets; mixtures of the tested models provided better fits. The ability to effectively model Web page attributes, such as the distribution of the number of image tags used per page, is needed for accurate simulation models of Web page content, and makes it possible to estimate the number of requests needed to display the complete content of Web pages.

15.
Aiming at cleaning web pages and extracting their topical content, this paper proposes a topic-content extraction algorithm based on page layout. Following the original page's layout, the algorithm builds a tag tree to segment the page into blocks and classify them, then computes each content block's topical relevance in order to identify the page topic, discard irrelevant information, and extract the topical content. Experiments show that the algorithm is well suited to "de-noising" and content extraction for topic-oriented pages and performs well in practical applications.

16.
To improve on Authorities and Hubs, the traditional Web-graph-based vertical search strategy, this paper proposes a heuristic vertical search strategy that fuses page-content evaluation with the Web graph. In addition, a vector space model is introduced to judge the topical relevance of page content, further improving the precision of on-topic page downloads. Experiments show that the algorithm effectively increases the concentration of on-topic pages: as the number of downloaded pages grows, the precision of the vertical search engine gradually increases and then stabilizes once enough pages have been downloaded. The algorithm is robust and can be applied in vertical search engine systems.
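The vector-space relevance judgment can be sketched as cosine similarity between term-frequency vectors of a fetched page and the topic description. A toy sketch under those assumptions, not the paper's exact scoring or its fusion with link evidence:

```python
import math
from collections import Counter

def cosine(text, topic):
    """Vector-space relevance of a page's text to the crawl topic:
    raw term-frequency vectors, cosine similarity in [0, 1]."""
    a, b = Counter(text.lower().split()), Counter(topic.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A focused crawler would only follow links from pages scoring above
# some relevance threshold.
score = cosine("machine learning methods for search", "machine learning")
```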

17.
18.
A user-interest ontology remedies the inability of keyword-based interest models to express user interests semantically. However, most such ontologies are built from domain ontologies, which makes it hard to reflect users' diverse and latent interests, and building a domain ontology is itself difficult. This paper therefore proposes a method for building a user-interest ontology based on term co-occurrence. Pages of interest are identified from the user's browsing records and, after data processing, converted into a user-interest text collection. Concepts are extracted using TF-IDF as the indicator, relations between concepts are derived from co-occurrence statistics and adjusted with a scale-free K-medoids clustering algorithm, and the ontologies of related users are merged into a multi-user ontology. The resulting ontology reflects user interests more comprehensively at the semantic level and uncovers latent interests.
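The TF-IDF indicator used for concept extraction can be sketched as follows; the co-occurrence and clustering steps are omitted, and the documents and terms are invented:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights over tokenised documents:
    weight(t, d) = tf(t, d) * log(N / df(t)), where df counts the
    documents containing t. Terms appearing in every document get 0."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # one count per document, not per occurrence
    n = len(docs)
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [["search", "engine", "ranking"], ["search", "recipes"]]
w = tfidf(docs)
# "search" occurs in both documents, so its weight is 0; "engine" is
# distinctive for the first document and gets a positive weight.
```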

19.
Although brand pages on social media platforms are burgeoning, companies frequently have difficulty in sustaining customer relationships on their brand pages. Consequently, this study focuses on how a social media brand page develops customer commitment and encourages them to perceive that future conflicts with the company can be resolved for their mutual benefit. On the basis of a review of the literature on customer value theory and commitment, this study develops an integrative model that explores the antecedents of functional conflict and the boundary condition under which brand page commitment enhances functional conflict. The model is tested using data collected from 293 followers of brand pages on a social networking site. The results demonstrate the salient roles of customer values and commitment in determining customer perceptions of future conflicts. By shifting scholarly attention from economic outcomes characterized by purchase intention to relationship outcomes characterized by functional conflict, the findings contribute to the research of the business implications of social networking sites.

20.
All over the world, the internet is used by millions of people every day for information retrieval. Even for small tasks, such as fixing a fan, cooking food, or ironing clothes, people turn to the web. To fulfill these information needs there are billions of web pages, each with a different degree of relevance to the topic of interest (TOI), scattered throughout the web, but this huge size makes manual information retrieval impossible. The page ranking algorithm is an integral part of search engines, as it arranges the web pages associated with a queried TOI in order of their relevance. It therefore plays an important role in the search quality and user experience of information retrieval. PageRank, HITS, and SALSA are well-known page ranking algorithms based on link-structure analysis of a seed set, but the rankings they produce are not yet efficient. In this paper, we propose a variant of SALSA, sNorm(p), for the efficient ranking of web pages. Our approach relies on a p-norm from the vector norm family in a novel way, as vector norms can efficiently reduce the impact of low authority weights in the hub weight calculation. Our study then compares the rankings given by PageRank, HITS, SALSA, and sNorm(p) for the same pages under the same queries. The effectiveness of the proposed approach over state-of-the-art methods is shown using the performance measures Mean Reciprocal Rank (MRR), Precision, Mean Average Precision (MAP), Discounted Cumulative Gain (DCG), and Normalized DCG (NDCG). The experiments are performed on a dataset obtained by pre-processing the results collected from the first few pages retrieved for each query by the Google search engine. Thirty queries are designed based on the type and amount of in-hand domain expertise.
The extensive evaluation and result analysis are performed using MRR, Precision, MAP, DCG, and NDCG as the performance metrics, and the results are statistically verified using a significance test. Findings show that our approach outperforms state-of-the-art methods, attaining an MRR of 0.8666 and a MAP of 0.7957, thus ranking web pages more efficiently than its counterparts.
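MRR, one of the measures used above, is straightforward to compute: the reciprocal rank of the first relevant result for each query, averaged over all queries. A minimal sketch with invented data:

```python
def mean_reciprocal_rank(ranked_lists, relevant):
    """MRR over queries: 1/rank of the first relevant document in each
    ranking, averaged; a query with no relevant result contributes 0."""
    total = 0.0
    for qid, ranking in ranked_lists.items():
        for i, doc in enumerate(ranking, start=1):
            if doc in relevant.get(qid, set()):
                total += 1.0 / i
                break
    return total / len(ranked_lists)

ranked = {"q1": ["d1", "d2"], "q2": ["d3", "d4"]}
rel = {"q1": {"d2"}, "q2": {"d3"}}
print(mean_reciprocal_rank(ranked, rel))  # (1/2 + 1/1) / 2 = 0.75
```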


Copyright©北京勤云科技发展有限公司  京ICP备09084417号