Similar Documents
 19 similar documents found (search time: 312 ms)
1.
This paper analyzes the necessity of building a content-oriented feature library for Web pages, proposes principles for its construction, and describes the structure of the web page feature library and the methods used to collect its data. The library is applied in an intelligent search engine based on semantic understanding, and the feature terms in the library are tested.

2.
Most common web page classification techniques are based on ordinary text classification methods and do not adequately account for what makes web page classification special: the semi-structured nature of web pages and the large amount of noise they contain that interferes with classification. In addition, most web page classification work draws its test set and training set from the same sample collection, ignoring the possibility that the test set may contain samples belonging to no known class. Based on the vector space model, this work treats the sample collection as consisting of two parts, labeled samples and unlabeled samples, drawn from the same web site. After removing web page noise, and by combining a text similarity algorithm with an optimal truncation method, a web page classification technique for incomplete data sets, LUD (Learning by Unlabeled Data), is proposed to improve classification performance and precision. Experiments show that, compared with traditional classification methods, LUD not only improves classification precision on samples of known classes but, more importantly, provides a way to discover samples of new classes.
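The abstract does not give the exact formulation of LUD; the following is only a minimal sketch of the underlying idea it describes: represent pages in a vector space model, compare unlabeled pages to class centroids by cosine similarity, and treat pages below a similarity cutoff as candidates for a new class. The cutoff value and the centroid strategy are assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's LUD algorithm): vector space model +
# cosine similarity against class centroids, with a cutoff that flags
# possible new-class pages. Threshold and centroid strategy are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def classify_with_rejection(labeled_texts, labels, unlabeled_texts, cutoff=0.15):
    vec = TfidfVectorizer()
    X = vec.fit_transform(labeled_texts + unlabeled_texts).toarray()
    X_lab, X_unlab = X[:len(labeled_texts)], X[len(labeled_texts):]

    # One centroid per known class.
    classes = sorted(set(labels))
    centroids = np.array([X_lab[[i for i, y in enumerate(labels) if y == c]].mean(axis=0)
                          for c in classes])

    results = []
    for x in X_unlab:
        sims = centroids @ x / (np.linalg.norm(centroids, axis=1) * np.linalg.norm(x) + 1e-12)
        best = int(np.argmax(sims))
        # Below the cutoff, the page may belong to a class not seen in training.
        results.append(classes[best] if sims[best] >= cutoff else "possible-new-class")
    return results
```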

3.
This paper studies feature selection methods based on mutual information and correlation, brings in the hyperlink factor present in web pages, and improves the mutual information formula used in feature extraction by introducing a hyperlink factor. Experiments show that, after this improvement, web page classification precision is somewhat higher than with the previous, simple mutual-information-based feature selection.
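The paper's modified formula is not reproduced in the abstract; the sketch below only illustrates the general idea of weighting a term's mutual information score by a hyperlink factor, here assumed to be a simple boost for terms that also occur in anchor text. The documents are assumed to be token lists, and the boost value is invented for illustration.

```python
# Sketch of mutual-information feature scoring with an assumed hyperlink factor:
# MI(t, c) = log( P(t, c) / (P(t) * P(c)) ), boosted when the term appears in anchor text.
import math
from collections import Counter

def mi_scores(docs, labels, anchor_terms, link_boost=1.5):
    """docs: list of token lists; labels: class of each doc; anchor_terms: terms seen in link anchors."""
    n = len(docs)
    class_count = Counter(labels)
    term_count = Counter()
    term_class_count = Counter()
    for doc, y in zip(docs, labels):
        for t in set(doc):                      # presence-based counts
            term_count[t] += 1
            term_class_count[(t, y)] += 1

    scores = {}
    for (t, c), n_tc in term_class_count.items():
        p_tc = n_tc / n
        p_t = term_count[t] / n
        p_c = class_count[c] / n
        mi = math.log(p_tc / (p_t * p_c))
        factor = link_boost if t in anchor_terms else 1.0   # assumed hyperlink factor
        scores[t] = max(scores.get(t, float("-inf")), factor * mi)
    return scores
```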

4.
孙静  赵恒永 《中国科技信息》2007,(11):138-139,141
This article describes the implementation of a web page snapshot system for a search engine and research on its security. The web page snapshots provided by most current search engine sites let users access earlier versions of pages more quickly and conveniently, but they do not assess the security of those pages. The snapshot system described here, while producing page snapshots, builds an interpreter for web scripting languages and applies machine learning and lexical analysis techniques to detect and remove potentially unsafe code from pages, so that the snapshots delivered to users are as safe as possible.

5.
李建军  宋志章 《科技通报》2012,28(6):152-154
Web page texts often have tens of thousands of features, many of them useless or redundant. To improve web page text classification precision, a classification method based on a hybrid intelligent algorithm is proposed. First, a genetic algorithm performs a coarse selection of the web page text features; then an ant colony algorithm refines that selection; finally, a K-nearest-neighbor algorithm is used to build the text classifier. Results show that the hybrid intelligent algorithm removes useless and redundant features well, improving the precision of web page text classification and speeding up classification.
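The abstract gives no parameters for the genetic or ant colony stages; the sketch below shows only the coarse stage of the idea: a small genetic search over feature masks scored by a K-nearest-neighbor classifier, with the ant colony refinement stage omitted. Population size, mutation rate, and the fitness measure are all assumptions.

```python
# Sketch: GA-style coarse feature selection scored by a KNN classifier.
# The ant colony refinement stage described in the abstract is omitted here.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop_size=20, generations=30, mutation_rate=0.02):
    n_features = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        # Keep the better half, refill by crossover + mutation.
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < mutation_rate
            child = np.where(flip, 1 - child, child)
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
    return best.astype(bool)    # boolean mask of selected features
```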

6.
Web search engines are built on keyword-based indexing, and applying data mining to web page classification is a powerful complement to that keyword index. Data mining can help web search engines discover higher-quality pages and improve the quality of web clickstream analysis.

7.
朱学芳  冯曦曦 《情报科学》2012,(7):1012-1015
Based on a study of the HTML structure and characteristics of agricultural web pages, this paper describes an experimental study of content-based information extraction and classification for agricultural web pages. In the experiments, the DOM structure is used to extract and preprocess the information on agricultural pages; the class attribute of each text is computed automatically from its content to obtain feature terms; and, by summarizing the features of the sample documents, new documents are classified automatically. The results show that the information extraction has low time complexity and high accuracy, and that it improves classification accuracy.
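The abstract does not list which DOM nodes the authors keep; the fragment below is only an illustration of DOM-based preprocessing, stripping script/style nodes and pulling the title and body text, using BeautifulSoup, which is not necessarily the toolkit used in the paper.

```python
# Illustration of DOM-based web page preprocessing (not the paper's exact pipeline):
# drop script/style noise nodes, then keep the title and visible body text for classification.
from bs4 import BeautifulSoup

def extract_page_text(html):
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()                       # remove noise nodes from the DOM
    title = soup.title.get_text(strip=True) if soup.title else ""
    body = " ".join(soup.get_text(separator=" ").split())
    return {"title": title, "text": body}
```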

8.
This paper presents a text preprocessing method for HTML, a common file type, that extracts web page text quickly. Experiments show that this preprocessing method yields good classification results.

9.
This paper introduces the concept of a network monitoring system and, based on practical needs, proposes a web page classification technique suitable for such systems. The technique exploits the inherent structure of a web site, which is well expressed through its URLs, and is fundamentally different from traditional web page classification techniques based on data mining. It emphasizes practicality: the algorithm needs only modest computing resources, making it a web page classification technique well suited to network monitoring systems.
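The abstract does not specify the URL rules used; a minimal sketch of the idea, mapping URL path prefixes to categories with an explicit rule table, is shown below. The example rules and category names are invented purely for illustration.

```python
# Minimal sketch of URL-structure-based page classification for a monitoring system.
# The rule table below is purely illustrative; real rules would come from the target sites.
from urllib.parse import urlparse

URL_RULES = [
    ("/news/",   "news"),
    ("/sports/", "sports"),
    ("/forum/",  "forum"),
]

def classify_by_url(url):
    path = urlparse(url).path.lower()
    for prefix, category in URL_RULES:
        if path.startswith(prefix):
            return category
    return "unknown"

print(classify_by_url("http://example.com/news/2024/item.html"))  # -> news
```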

10.
Research on an intelligent search engine based on semantic understanding   (total citations: 7; self-citations: 0; citations by others: 7)
曹二堂  刘玉林 《情报杂志》2005,24(6):58-59,63
Through a structural analysis of query phrases, the authors observe that a query phrase usually consists of keywords and feature terms. A feature term summarizes a web page's content and indicates that the page contains a particular group of specific feature terms. Based on this idea, a content-oriented feature library for Web pages is built, a method for semantic understanding of query phrases based on this feature library is studied, and an algorithm for relevance levels is proposed. Query tests against the feature terms already in the library yield a precision of 86.7%. Experiments show that the method essentially achieves understanding of query phrases and has a noticeable effect on improving the precision of the search engine.

11.
The goal of the study presented in this article is to investigate to what extent the classification of a web page by a single genre matches the users’ perspective. The extent of agreement on a single genre label for a web page can help understand whether there is a need for a different classification scheme that overrides the single-genre labelling. My hypothesis is that a single genre label does not account for the users’ perspective. In order to test this hypothesis, I submitted a restricted number of web pages (25 web pages) to a large number of web users (135 subjects) asking them to assign only a single genre label to each of the web pages. Users could choose from a list of 21 genre labels, or select one of the two ‘escape’ options, i.e. ‘Add a label’ and ‘I don’t know’. The rationale was to observe the level of agreement on a single genre label per web page, and draw some conclusions about the appropriateness of limiting the assignment to only a single label when doing genre classification of web pages. Results show that users largely disagree on the label to be assigned to a web page.

12.
Research on web spider search strategies has been one of the focal points of vertical search engine research in recent years; how to let a search engine retrieve the needed resources quickly and accurately from the huge volume of web page data is a key open problem. This paper focuses on the search strategies and optimization measures of a search engine's Web Spider, presents a simple web spider design based on a breadth-first algorithm, and analyzes the optimizations made during its design.
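The design details of the spider are not given in the abstract; the following is only a compact breadth-first crawling sketch (a FIFO frontier of URLs, a visited set, per-page link extraction) with an assumed page limit and politeness delay.

```python
# Breadth-first web spider sketch: FIFO frontier, visited set, bounded page count.
# max_pages and the delay are assumptions; a real crawler also needs robots.txt handling.
import time
from collections import deque
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def bfs_crawl(seed, max_pages=50, delay=1.0):
    frontier, visited = deque([seed]), set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).scheme in ("http", "https") and link not in visited:
                frontier.append(link)            # breadth-first: append to queue tail
        time.sleep(delay)                        # simple politeness delay
    return visited
```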

13.
Broken hypertext links are a frequent problem in the Web. Sometimes the page which a link points to has disappeared forever, but in many other cases the page has simply been moved to another location in the same web site or to another one. In some cases the page, besides being moved, is updated, becoming somewhat different from the original but still rather similar. In all these cases it can be very useful to have a tool that provides us with pages highly related to the broken link, since we could select the most appropriate one. The relationship between the broken link and its possible replacement pages can be defined as a function of many factors. In this work we have employed several resources, both in the context of the link and in the Web, to look for pages related to a broken link. From the resources in the context of a link, we have analyzed several sources of information such as the anchor text, the text surrounding the anchor, the URL and the page containing the link. We have also extracted information about a link from the Web infrastructure, such as search engines, Internet archives and social tagging systems. We have combined all of these resources to design a system that recommends pages that can be used to recover the broken link. A novel methodology is presented to evaluate the system without resorting to user judgments, thus increasing the objectivity of the results and helping to adjust the parameters of the algorithm. We have also compiled a web page collection with true broken links, which has been used to test the full system by humans.
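The full system combines many signals (anchor text, surrounding text, URL, search engines, archives, tagging systems); the sketch below shows only one of those signals under simplifying assumptions: ranking a set of already-retrieved candidate replacement pages by cosine similarity between the broken link's anchor text and each candidate's text. Candidate retrieval itself (for example via a search engine query) is out of scope here.

```python
# Sketch of one signal from the recovery idea: score candidate replacement pages
# for a broken link by similarity between the link's anchor text and each candidate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_candidates(anchor_text, candidate_texts):
    vec = TfidfVectorizer()
    matrix = vec.fit_transform([anchor_text] + candidate_texts)
    sims = cosine_similarity(matrix[0], matrix[1:]).ravel()
    # Highest-similarity candidates first: list of (candidate index, score).
    order = sims.argsort()[::-1]
    return [(int(i), float(sims[i])) for i in order]
```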

14.
With the rapid development of the Internet, the harm caused by malicious web pages keeps growing. This paper analyzes and classifies typical malicious web pages and, by comparing and categorizing existing malicious web page detection techniques, analyzes the strengths and weaknesses of each.

15.
All over the world, the internet is used by millions of people every day for information retrieval. Even for small tasks such as fixing a fan, cooking food, or ironing clothes, people opt to search the web. To fulfill people's information needs there are billions of web pages, each with a different degree of relevance to the topic of interest (TOI), scattered throughout the web, but this huge size makes manual information retrieval impossible. The page ranking algorithm is an integral part of search engines, as it arranges the web pages associated with a queried TOI in order of their relevance level. It therefore plays an important role in regulating search quality and the user experience of information retrieval. PageRank, HITS, and SALSA are well-known page ranking algorithms based on link structure analysis of a seed set, but the rankings they give are not yet efficient. In this paper, we propose sNorm(p), a variant of SALSA, for the efficient ranking of web pages. Our approach relies on a p-norm from the vector norm family in a novel way, as vector norms can efficiently reduce the impact of low authority weights in the hub weight calculation. Our study then compares the rankings given by PageRank, HITS, SALSA, and sNorm(p) for the same pages under the same query. The effectiveness of the proposed approach over state-of-the-art methods is shown using the performance measures Mean Reciprocal Rank (MRR), Precision, Mean Average Precision (MAP), Discounted Cumulative Gain (DCG) and Normalized DCG (NDCG). The experimentation is performed on a dataset obtained by pre-processing the results collected from the first few pages retrieved for a query by the Google search engine. Based on the type and amount of in-hand domain expertise, 30 queries are designed. The extensive evaluation and result analysis are performed using MRR, Precision, MAP, DCG, and NDCG as the performance measuring statistical metrics. Furthermore, the results are statistically verified using a significance test. Findings show that our approach outperforms state-of-the-art methods, attaining an MRR value of 0.8666 and a MAP value of 0.7957, thus ranking web pages more efficiently than its counterparts.
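The abstract says the hub weight calculation aggregates authority scores with a vector p-norm, but the exact sNorm(p) update is not given there, so the sketch below is only a HITS-style iteration in which the hub update uses a p-norm instead of a plain sum. It illustrates the idea, not the paper's algorithm; the choice of p and the normalization are assumptions.

```python
# HITS-style iteration where hub scores aggregate authority scores via a p-norm,
# illustrating the idea behind sNorm(p); this is not the paper's exact update rule.
import numpy as np

def pnorm_hits(adj, p=3, iters=50):
    """adj[i, j] = 1 if page i links to page j (dense 0/1 matrix)."""
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iters):
        # Authority: sum of hub scores of pages linking in (as in HITS).
        auths = adj.T @ hubs
        # Hub: p-norm over the authority scores of linked pages,
        # which damps the contribution of low authority weights.
        hubs = (adj @ (auths ** p)) ** (1.0 / p)
        auths /= np.linalg.norm(auths) + 1e-12
        hubs /= np.linalg.norm(hubs) + 1e-12
    return hubs, auths
```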

16.
The Web, and especially major Web search engines, are essential tools in the quest to locate online information for many people. This paper reports results from research that examines characteristics and changes in Web searching across nine studies of five Web search engines based in the US and Europe. We compare interactions occurring between users and Web search engines from the perspectives of session length, query length, query complexity, and content viewed among the Web search engines. The results of our research show that (1) users are viewing fewer result pages, (2) searchers on US-based Web search engines use more query operators than searchers on European-based search engines, (3) there are statistically significant differences in the use of Boolean operators and result pages viewed, and (4) one cannot necessarily apply results from studies of one particular Web search engine to another Web search engine. The widespread use of Web search engines, employment of simple queries, and decreased viewing of result pages may have resulted from algorithmic enhancements by Web search engine companies. We discuss the implications of the findings for the development of Web search engines and the design of online content.

17.
Search Engine for South-East Europe (SE4SEE) is a socio-cultural search engine running on the grid infrastructure. It offers a personalized, on-demand, country-specific, category-based Web search facility. The main goal of SE4SEE is to attack the page freshness problem by performing the search on the original pages residing on the Web, rather than on the previously fetched copies as done in the traditional search engines. SE4SEE also aims to obtain high download rates in Web crawling by making use of the geographically distributed nature of the grid. In this work, we present the architectural design issues and implementation details of this search engine. We conduct various experiments to illustrate performance results obtained on a grid infrastructure and justify the use of the search strategy employed in SE4SEE.

18.
This research is part of an ongoing study to better understand citation analysis on the Web. It builds on Kleinberg's research (J. Kleinberg, R. Kumar, P. Raghavan, P. Rajagopalan, A. Tomkins, Invited survey at the International Conference on Combinatorics and Computing, 1999) that hyperlinks between web pages constitute a web graph structure, and tries to classify different web graphs in the new coordinate space: out-degree, in-degree. The out-degree coordinate is defined as the number of outgoing web pages from a given web page. The in-degree coordinate is the number of web pages that point to a given web page. In this new coordinate space a metric is built to classify how close or far apart different web graphs are. Kleinberg's web algorithm (J. Kleinberg, Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, 1998, pp. 668–677) for discovering “hub web pages” and “authority web pages” is applied in this new coordinate space. Some very uncommon phenomena have been discovered and new interesting results interpreted. This study does not look at enhancing web retrieval by adding context information. It only considers web hyperlinks as a source for analyzing citations on the web. The author believes that understanding the underlying web pages as a graph will help design better web algorithms, enhance retrieval and web performance, and recommends using graphs as part of a visual aid for search engine designers.
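To make the out-degree/in-degree coordinate space concrete, here is a small sketch that places each page of a link graph at the point (out-degree, in-degree). The metric used in the article to compare whole graphs is not specified in the abstract, so only the page coordinates themselves are computed.

```python
# Place each page of a web graph in the (out-degree, in-degree) coordinate space.
# Comparing whole graphs would need the article's metric, which is not given here.
from collections import defaultdict

def degree_coordinates(edges):
    """edges: iterable of (source_page, target_page) hyperlinks."""
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    pages = set()
    for src, dst in edges:
        out_deg[src] += 1
        in_deg[dst] += 1
        pages.update((src, dst))
    return {p: (out_deg[p], in_deg[p]) for p in pages}

print(degree_coordinates([("a", "b"), ("a", "c"), ("b", "c")]))
# e.g. 'a' -> (2, 0), 'b' -> (1, 1), 'c' -> (0, 2); dict ordering may vary
```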

19.
A fast and efficient page ranking mechanism for web crawling and retrieval remains a challenging issue. Recently, several link-based ranking algorithms like PageRank, HITS and OPIC have been proposed. In this paper, we propose a novel recursive method based on reinforcement learning, called “DistanceRank”, which treats the distance between pages as a punishment when computing the ranks of web pages. The distance is defined as the number of “average clicks” between two pages. The objective is to minimize punishment, or distance, so that a page with a smaller distance has a higher rank. Experimental results indicate that DistanceRank outperforms other ranking algorithms in page ranking and crawl scheduling. Furthermore, the complexity of DistanceRank is low. We have used the University of California at Berkeley’s web for our experiments.
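The abstract defines the distance between pages in terms of “average clicks” and treats it as a punishment to be minimized, but does not reproduce the full recursive update. The sketch below is therefore only an illustrative iteration: each page's distance score is relaxed from its in-neighbors with a logarithmic edge cost standing in for the average-click notion, and a smaller distance is read as a higher rank. The blending factor and the exact edge cost are assumptions, not the paper's formula.

```python
# Illustrative DistanceRank-style iteration (not the paper's exact formula):
# a page's distance is relaxed from its in-neighbors, with each hop costing
# roughly log10(out-degree); smaller distance means higher rank.
import math
from collections import defaultdict

def distance_rank(edges, seeds, iters=20, alpha=0.5):
    out_links, in_links, pages = defaultdict(list), defaultdict(list), set()
    for src, dst in edges:
        out_links[src].append(dst)
        in_links[dst].append(src)
        pages.update((src, dst))

    dist = {p: (0.0 if p in seeds else float("inf")) for p in pages}
    for _ in range(iters):
        for p in pages:
            candidates = [dist[q] + math.log10(max(len(out_links[q]), 2))
                          for q in in_links[p] if dist[q] < float("inf")]
            if candidates:
                best = min(candidates)
                # Blend of old and new estimate (an assumption, not the paper's rule).
                dist[p] = best if dist[p] == float("inf") else (1 - alpha) * dist[p] + alpha * best
    return sorted(pages, key=lambda p: dist[p])   # smaller distance -> higher rank

print(distance_rank([("home", "a"), ("home", "b"), ("a", "b")], seeds={"home"}))
```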

