Similar Documents (20 results)
1.
We evaluate article-level metrics along two dimensions. Firstly, we analyse metrics’ ranking bias in terms of fields and time. Secondly, we evaluate their performance based on test data that consists of (1) papers that have won high-impact awards and (2) papers that have won prizes for outstanding quality. We consider different citation impact indicators and indirect ranking algorithms in combination with various normalisation approaches (mean-based, percentile-based, co-citation-based, and post hoc rescaling). We execute all experiments on two publication databases which use different field categorisation schemes (author-chosen concept categories and categories based on papers’ semantic information). In terms of bias, we find that citation counts are always less time biased but always more field biased compared to PageRank. Furthermore, rescaling paper scores by a constant number of similarly aged papers reduces time bias more effectively than normalising by calendar years. We also find that percentile citation scores are less field and time biased than mean-normalised citation counts. In terms of performance, we find that time-normalised metrics identify high-impact papers better shortly after their publication compared to their non-normalised variants. However, after 7 to 10 years, the non-normalised metrics perform better. A similar trend exists for the set of high-quality papers, where the performance cross-over points occur after 5 to 10 years. Lastly, we also find that personalising PageRank with papers’ citation counts reduces time bias but increases field bias. Similarly, using papers’ associated journal impact factors to personalise PageRank increases its field bias. In terms of performance, PageRank should always be personalised with papers’ citation counts and time-rescaled for citation windows smaller than 7 to 10 years.
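As a rough illustration of the time-rescaling idea above (normalising each paper's score against a fixed-size set of similarly aged papers rather than by calendar year), here is a minimal Python sketch. The window size of 1000 and the z-score form of the rescaling are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def time_rescaled_scores(scores, pub_dates, window=1000):
    """Rescale each paper's score against the `window` papers closest
    in publication date (a sketch of mean/std rescaling over a
    similarly aged reference set; window size is an assumption)."""
    order = np.argsort(pub_dates)            # papers ordered by age
    ranked = np.asarray(scores, dtype=float)[order]
    n = len(ranked)
    rescaled = np.empty(n)
    for i in range(n):
        lo = max(0, min(i - window // 2, n - window))
        ref = ranked[lo:lo + window]         # similarly aged reference set
        mu, sigma = ref.mean(), ref.std() or 1.0
        rescaled[i] = (ranked[i] - mu) / sigma
    out = np.empty(n)
    out[order] = rescaled                    # restore the original order
    return out
```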

2.
Despite the increasing use of citation-based metrics for research evaluation purposes, we do not know yet which metrics best deliver on their promise to gauge the significance of a scientific paper or a patent. We assess 17 network-based metrics by their ability to identify milestone papers and patents in three large citation datasets. We find that traditional information-retrieval evaluation metrics are strongly affected by the interplay between the age distribution of the milestone items and age biases of the evaluated metrics. Outcomes of these metrics are therefore not representative of the metrics’ ranking ability. We argue in favor of a modified evaluation procedure that explicitly penalizes biased metrics and allows us to reveal metrics’ performance patterns that are consistent across the datasets. PageRank and LeaderRank turn out to be the best-performing ranking metrics when their age bias is suppressed by a simple transformation of the scores that they produce, whereas other popular metrics, including citation count, HITS and Collective Influence, produce significantly worse ranking results.  相似文献   
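The evaluation step itself can be as simple as a top-k identification rate: what fraction of milestone items does a metric place in its top k? Below is a generic retrieval-style sketch of that measure; the paper's modified procedure additionally penalizes age-biased metrics, which is not shown here.

```python
def identification_rate(scores, milestone_ids, k):
    """Fraction of milestone items that a metric ranks in its top-k.
    `scores` maps item id -> metric value; higher is better."""
    top_k = set(sorted(scores, key=scores.get, reverse=True)[:k])
    hits = sum(1 for m in milestone_ids if m in top_k)
    return hits / len(milestone_ids)
```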

3.
We evaluate author impact indicators and ranking algorithms on two publication databases using large test data sets of well-established researchers. The test data consists of (1) ACM fellowships and (2) various lifetime achievement awards. We also evaluate different approaches to dividing the credit of papers among co-authors and analyse the impact of self-citations. Furthermore, we evaluate different graph normalisation approaches for the case where PageRank is computed on author citation graphs. We find that PageRank outperforms citation counts in identifying well-established researchers. This holds true not only when PageRank is computed on author citation graphs but also when PageRank is computed on paper graphs and paper scores are divided among co-authors. In general, the best results are obtained when co-authors receive an equal share of a paper's score, independent of which impact indicator is used to compute paper scores. The results also show that removing author self-citations improves the results of most ranking metrics. Lastly, we find that it is more important to personalise the PageRank algorithm appropriately on the paper level than to decide whether to include or exclude self-citations. On the author level, however, we find that author graph normalisation is more important than personalisation.
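A sketch of the equal-split credit allocation that the study found to work best, assuming paper scores have already been computed by some impact indicator (citation counts, PageRank, or any other):

```python
from collections import defaultdict

def author_scores_equal_split(paper_scores, paper_authors):
    """Divide each paper's score equally among its co-authors.
    `paper_scores` maps paper id -> score; `paper_authors` maps
    paper id -> list of author ids."""
    credit = defaultdict(float)
    for paper, score in paper_scores.items():
        authors = paper_authors[paper]
        for a in authors:
            credit[a] += score / len(authors)
    return dict(credit)
```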

4.
This study uses citation data and survey data for 55 library and information science journals to identify three factors underlying a set of 11 journal ranking metrics (six citation metrics and five stated preference metrics). The three factors—three composite rankings—represent (1) the citation impact of a typical article, (2) subjective reputation, and (3) the citation impact of the journal as a whole (all articles combined). Together, they account for 77% of the common variance within the set of 11 metrics. Older journals (those founded before 1953) and nonprofit journals tend to have high reputation scores relative to their citation impact. Unlike previous research, this investigation shows no clear evidence of a distinction between the journals of greatest importance to scholars and those of greatest importance to practitioners. Neither group's subjective journal rankings are closely related to citation impact.
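The underlying analysis can be reproduced in outline with a standard factor-analysis routine; the sketch below uses random stand-in data in place of the study's 55-journal by 11-metric matrix, so the numbers are purely illustrative.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# rows = journals, columns = the 11 ranking metrics (toy random stand-in)
rng = np.random.default_rng(0)
X = rng.normal(size=(55, 11))

X_std = StandardScaler().fit_transform(X)   # put metrics on a common scale
fa = FactorAnalysis(n_components=3, random_state=0).fit(X_std)
loadings = fa.components_                   # 3 factors x 11 metrics
print(loadings.round(2))
```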

5.
For comparisons of citation impact across fields and over time, bibliometricians normalize the observed citation counts with reference to an expected citation value. Percentile-based approaches have been proposed as a non-parametric alternative to parametric central-tendency statistics. Percentiles are based on an ordered set of citation counts in a reference set, whereby the fraction of papers at or below the citation count of a focal paper is used as an indicator of its relative citation impact in the set. In this study, we pursue two related objectives: (1) although different percentile-based approaches have been developed, none so far satisfies a number of desirable criteria, such as scaling the percentile ranks from zero (all other papers perform better) to 100 (all other papers perform worse) and resolving tied citation ranks unambiguously. We introduce a new citation-rank approach with these properties, namely P100. (2) We compare the reliability of P100 empirically with other percentile-based approaches, such as those developed by the SCImago group, the Centre for Science and Technology Studies (CWTS), and Thomson Reuters (InCites), using all papers published in 1980 in the Thomson Reuters Web of Science (WoS). How accurately can the different approaches predict the long-term citation impact in 2010 (year 31) using citation impact measured in earlier time windows (years 1–30)? The comparison shows that the method used by InCites overestimates citation impact (because it uses the highest percentile rank when papers are assigned to more than one subject category), whereas the SCImago indicator shows higher power in predicting long-term citation impact on the basis of citation rates in early years. Since the results show that P100 is at a disadvantage in this predictive ability relative to the other approaches, there is still room for further improvement.
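For illustration, a percentile scheme with the two properties the authors require (a 0-to-100 scale and unambiguous handling of ties) can be sketched by ranking the distinct citation values. This is one reading of the P100 idea, not the authors' reference implementation.

```python
import numpy as np

def p100_like(citations):
    """Percentile ranks on a 0..100 scale with ties resolved by ranking
    the distinct citation values: equal citation counts always get
    equal percentiles (a sketch, not the published reference code)."""
    c = np.asarray(citations)
    unique = np.unique(c)                      # sorted distinct values
    rank_of = {v: r for r, v in enumerate(unique)}
    top = max(len(unique) - 1, 1)              # avoid division by zero
    return np.array([100.0 * rank_of[v] / top for v in c])

print(p100_like([0, 2, 2, 5]))                 # [  0.  50.  50. 100.]
```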

6.
This paper explores a possible approach to research evaluation by calculating the renown of the authors of scientific papers. The evaluation is based on citation analysis, and its results should be close to a human viewpoint. The PageRank algorithm and its modifications were used to evaluate various types of citation networks. Our main research question was whether better evaluation results are obtained directly from an author network or from a publication network. Other issues concerned, for example, the determination of weights in the author network and the distribution of publication scores among their authors. The citation networks were extracted from the computer science domain in the ISI Web of Science database. The influence of self-citations was also explored. To find the best network for research evaluation, the outputs of PageRank were compared with lists of prestigious awards in computer science, such as the Turing and Codd awards, ISI Highly Cited, and ACM Fellows. Our experiments showed that the best ranking of authors was obtained by using a publication citation network from which self-citations were eliminated and by distributing equal proportional parts of each publication's value to its authors. The ranking can be used as a criterion for the financial support of research teams, for identifying the leaders of such teams, and so on.
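A compact sketch of the winning configuration (publication-level PageRank on a network with self-citation edges removed, then paper values split equally among co-authors). The 0.85 damping factor is the conventional default, not necessarily the study's setting.

```python
import networkx as nx

def author_ranks(citations, authors_of):
    """PageRank on a publication citation graph with author
    self-citations removed; paper values split equally among authors.
    `citations` is an iterable of (citing, cited) paper ids;
    `authors_of` maps paper id -> list of author ids."""
    G = nx.DiGraph()
    for citing, cited in citations:
        # drop the edge if citing and cited papers share an author
        if set(authors_of[citing]) & set(authors_of[cited]):
            continue
        G.add_edge(citing, cited)
    pr = nx.pagerank(G, alpha=0.85)
    score = {}
    for paper, value in pr.items():
        for a in authors_of[paper]:
            score[a] = score.get(a, 0.0) + value / len(authors_of[paper])
    return score
```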

7.
The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of university rankings have been proposed to quantify it, yet the debate on what exactly rankings measure endures. To address the issue, we measure a quantitative and reliable proxy of the academic reputation of a given institution and compare our findings with well-established impact indicators and academic rankings. Specifically, we study citation patterns among universities in five different Web of Science subject categories and apply the PageRank algorithm to the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so the PageRank algorithm is expected to yield a ranking that reflects the reputation of an academic institution in a specific field. Given the volume of the data analysed, our findings are statistically sound and less prone to bias than, for instance, the ad hoc surveys often employed by ranking bodies to attain similar outcomes. The approach proposed in our paper may help enhance ranking methodologies by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.

8.
Journal ranking is becoming more important in assessing the quality of academic research. Several indices have been suggested for this purpose, typically on the basis of a citation graph between the journals. We follow an axiomatic approach and find an impossibility theorem: any self-consistent ranking method that satisfies a natural monotonicity property must depend on the level of aggregation. Our result presents a trade-off between two axiomatic properties and reveals a dilemma of aggregation.

9.
Several studies have reported metrics for measuring the influence of scientific topics from different perspectives; however, current ranking methods ignore the reinforcing effect of other academic entities on topic influence. In this paper, we develop an effective topic ranking model, 4ERRank, which models the influence transfer mechanism among all academic entities in a complex academic network using a four-layer network design that incorporates the strengthening effect of multiple entities on topic influence. The PageRank algorithm is used to calculate the initial influence of topics, papers, authors, and journals in a homogeneous network, whereas the HITS algorithm expresses the mutual reinforcement between topics, papers, authors, and journals in a heterogeneous network, iteratively calculating the final topic influence values. Using data from a specific interdisciplinary domain, social media research, we applied the 4ERRank model to the 19,527 topics that met the inclusion criteria. The experimental results demonstrate that the 4ERRank model can successfully synthesize the performance of classic co-word metrics and effectively identify highly cited topics. This study enriches the methodology for assessing topic impact and contributes to future topic-based retrieval and prediction tasks.
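A toy two-layer sketch of the two ingredients (PageRank for initial influence on a homogeneous layer, HITS for mutual reinforcement on a heterogeneous layer). The real model couples four layers and iterates to convergence; the graph below and the 0.5/0.5 combination are purely illustrative assumptions.

```python
import networkx as nx

# Homogeneous layer: a tiny topic-topic graph -> initial influence
topic_net = nx.DiGraph([("t1", "t2"), ("t2", "t3"), ("t3", "t1")])
initial = nx.pagerank(topic_net)

# Heterogeneous layer: topic-paper links -> mutual reinforcement via HITS
hetero = nx.DiGraph([("t1", "p1"), ("t2", "p1"), ("t2", "p2"), ("p2", "t3")])
hubs, authorities = nx.hits(hetero, max_iter=500)

# One naive combination step (the full model iterates across four layers)
combined = {t: 0.5 * initial[t] + 0.5 * authorities.get(t, 0.0)
            for t in initial}
print(sorted(combined, key=combined.get, reverse=True))
```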

10.
In the past, recursive algorithms such as PageRank, originally conceived for the Web, have been successfully used to rank nodes in the citation networks of papers, authors, or journals. They have been shown to measure prestige rather than popularity, unlike citation counts. However, bibliographic networks, in contrast to the Web, have some specific features that enable assigning different weights to citations, thus adding more information to the process of finding prominence. For example, a citation between two authors may be weighted according to whether and when those two authors collaborated with each other, information that can be found in the co-authorship network. In this study, we define a couple of PageRank modifications that weight citations between authors differently based on information from the co-authorship graph. In addition, we put emphasis on the time of publications and citations. We test our algorithms on Web of Science data covering computer science journal articles and determine the most prominent computer scientists in the 10-year period 1996–2005. Besides a correlation analysis, we also compare our rankings to the lists of ACM A. M. Turing Award and ACM SIGMOD E. F. Codd Innovations Award winners and find that the new time-aware methods outperform standard PageRank and its time-unaware weighted variants.
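One plausible reading of the time- and collaboration-aware weighting in code form. The decay constant, the 0.5 collaborator discount, and the reference year are illustrative assumptions; the paper evaluates several weighting variants rather than this exact one.

```python
import math
import networkx as nx

def weighted_author_pagerank(cits, coauthored, now=2005, tau=5.0):
    """Author-level PageRank where a citation's weight decays with its
    age and is discounted when the citing and cited authors have
    collaborated. `cits` holds (citing_author, cited_author, year)
    triples; `coauthored` is a set of (author_a, author_b) pairs."""
    G = nx.DiGraph()
    for citing, cited, year in cits:
        w = math.exp(-(now - year) / tau)          # recency decay
        if (citing, cited) in coauthored or (cited, citing) in coauthored:
            w *= 0.5                               # collaborator discount
        if G.has_edge(citing, cited):
            G[citing][cited]["weight"] += w        # accumulate repeat citations
        else:
            G.add_edge(citing, cited, weight=w)
    return nx.pagerank(G, weight="weight")
```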

11.
The author order of multi-authored papers can reveal subtle patterns of scientific collaboration and provide insights into the nature of credit assignment among co-authors. This article proposes a sequence-based perspective on scientific collaboration. Using frequently occurring sequences as the unit of analysis, this study explores (1) what types of sequence patterns are most common in scientific collaboration at the level of authors, institutions, U.S. states, and nations in Library and Information Science (LIS); and (2) the productivity (measured by number of papers) and influence (measured by citation counts) of different types of sequence patterns. Results show that (1) productivity and influence approximately follow a power law for frequent sequences at all four levels of analysis; (2) productivity and influence are significantly positively correlated among frequent sequences, and the strength of the correlation increases with the level of integration; (3) for author-level, institution-level, and state-level frequent sequences, short geographical distances between the authors usually co-occur with high productivity, while long distances tend to co-occur with large citation counts; (4) for author-level frequent sequences, the pattern in which the more productive and prestigious authors rank ahead is the one with the highest productivity and the highest influence; at the other levels of analysis, however, the pattern with the highest productivity and the highest influence is the one in which the less productive and prestigious institutions/states/nations rank ahead.

12.
Citation network research based on network-structure mining algorithms
Building on a comparative analysis of two canonical network-structure mining algorithms (HITS and PageRank), this paper applies PageRank to a large-scale citation network. For a citation network of 236,517 SCI articles, a PageRank value is computed for every paper, and the relationship between papers' PageRank values and the conventional citation-count indicator is analysed in depth. The analysis shows that PageRank values correlate strongly with citation counts and follow a similar power-law distribution, but the PageRank algorithm better distinguishes the latent importance of highly cited papers and substantially weakens the effect of author self-citations on the objectivity of paper evaluation.

13.
Given the growing use of impact metrics in the evaluation of scholars, journals, academic institutions, and even countries, there is a critical need for means to compare scientific impact across disciplinary boundaries. Unfortunately, citation-based metrics are strongly biased by diverse field sizes and publication and citation practices. As a result, we have witnessed an explosion in the number of newly proposed metrics that claim to be “universal.” However, there is currently no way to objectively assess whether a normalized metric can actually compensate for disciplinary bias. We introduce a new method to assess the universality of any scholarly impact metric and apply it to evaluate a number of established metrics. We also define a very simple new metric, hs, which proves to be universal, thus allowing the impact of scholars to be compared across scientific disciplines. These results move us closer to a formal methodology for measuring scholarly impact.
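The abstract does not spell out how hs is defined, so the following is only an assumed illustration of the general idea of field-normalising an h-type score: divide a scholar's h by the average h in their discipline.

```python
from statistics import mean

def field_normalised_h(h_values_by_field, scholar_field, scholar_h):
    """Illustrative field normalisation of the h-index: a scholar's h
    divided by the average h in their discipline. (An assumption for
    illustration; the paper's hs definition may differ.)"""
    return scholar_h / mean(h_values_by_field[scholar_field])

fields = {"physics": [20, 35, 50], "history": [5, 8, 11]}
print(field_normalised_h(fields, "physics", 42))   # ~1.2: above field average
print(field_normalised_h(fields, "history", 10))   # ~1.25: also above average
```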

14.
We investigate temporal factors in assessing the authoritativeness of web pages. We present three different time-related metrics: age, event, and trend, which measure recentness, special-event occurrence, and the trend in revisions, respectively. An experimental dataset was created by crawling selected web pages over a period of several months. This data is used to compare page rankings by human users with rankings computed by the standard PageRank algorithm (which does not include temporal factors) and by three algorithms that incorporate temporal factors, including the Time-Weighted PageRank (TWPR) algorithm introduced here. Analysis of the rankings shows that all three temporally aware algorithms produce rankings more like those of human users than does the PageRank algorithm. Of these, the TWPR algorithm produces rankings most similar to human users’, indicating that all three temporal factors are relevant in page ranking. In addition, analysis of the parameter values used to weight the three temporal factors reveals that the age factor has the greatest impact on page rankings, while the trend and event factors have the second-greatest and the least impact, respectively. Proper weighting of the three factors in the TWPR algorithm provides the best ranking results.
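A sketch of blending the three temporal signals into a single per-page weight and feeding it to PageRank via the personalization vector. The blend weights below merely mirror the reported ordering (age strongest, then trend, then event) and are not the paper's tuned values; the actual TWPR formulation may fold time into the algorithm differently.

```python
import networkx as nx

def temporal_weight(age, event, trend, w_age=0.5, w_event=0.2, w_trend=0.3):
    """Combine recency, event-occurrence, and revision-trend scores;
    weights reflect the reported ordering (age strongest), not the
    paper's tuned values."""
    return w_age * age + w_event * event + w_trend * trend

# toy link graph and per-page (age, event, trend) scores in [0, 1]
G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a")])
scores = {"a": (0.9, 0.0, 0.2), "b": (0.4, 1.0, 0.1), "c": (0.1, 0.0, 0.8)}
person = {n: temporal_weight(*scores[n]) for n in G}
print(nx.pagerank(G, personalization=person))
```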

15.
A standard procedure in citation analysis is that all papers published in one year are assessed at the same later point in time, implicitly treating all publications as if they were published on the exact same date. This leads to a systematic bias in favor of early-month publications and against late-month publications. This contribution analyses the size of this distortion on a large body of publications from all disciplines over citation windows of up to 15 years. It is found that early-month publications enjoy a substantial citation advantage, which arises from citations received in the first three years after publication. While the advantage is stronger for author self-citations than for citations from others, it cannot be eliminated by excluding self-citations. The bias decreases only slowly over longer citation windows due to the continuing influence of the earlier years’ citations. Because of the substantial extent and long persistence of the distortion, it would be useful to remove or control for this bias in research and evaluation studies which use citation data. It is demonstrated that this can be achieved by using the newly introduced concept of month-based citation windows.
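The proposed fix is mechanical: count citation windows in months from the publication month rather than in calendar years, so January and December papers of the same year get windows of equal length. A minimal sketch:

```python
from datetime import date

def in_month_window(pub, cit, window_months=36):
    """True if a citation date falls inside a citation window counted
    in months from the publication month."""
    months_elapsed = (cit.year - pub.year) * 12 + (cit.month - pub.month)
    return 0 <= months_elapsed < window_months

# a December 2010 paper and a citation from November 2013
print(in_month_window(date(2010, 12, 1), date(2013, 11, 15)))  # True
# the same citation falls outside a 24-month window
print(in_month_window(date(2010, 12, 1), date(2013, 11, 15), 24))  # False
```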

16.
Journal of Informetrics, 2019, 13(2): 515–539
Counts of papers, counts of citations, and the h-index are the simplest bibliometric indices of research impact. We discuss some improvements. First, we replace citations with individual citations, fractionally shared among co-authors, to take into account that different papers and different fields have largely different average numbers of co-authors and of references. Next, we improve on citation counting by applying the PageRank algorithm to citations among papers. Because citations are time-ordered, this reduces to a weighted counting of citation descendants that we call PaperRank. We compute a related AuthorRank by applying the PageRank algorithm to citations among authors. These metrics quantify the impact of an author or paper, taking into account the impact of those who cite it. Finally, we show how self-citations and circular citations can be eliminated by defining a closed market of Citation-coins. We apply these metrics to the InSpire database, which covers fundamental physics, presenting results for papers, authors, journals, institutes, towns, and countries, both all-time and for recent time periods.
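The individual-citation idea in code form: each citation is shared across the co-authors of the cited paper and across the reference list of the citing paper, damping field differences in team size and reference-list length. This is a sketch of the stated fractional sharing, not the authors' implementation.

```python
from collections import defaultdict

def individual_citations(citations, n_authors, n_refs):
    """Fractional citation counting: each citation contributes
    1 / (co-authors of the cited paper * references of the citing paper).
    `citations` is an iterable of (citing, cited) paper ids."""
    count = defaultdict(float)
    for citing, cited in citations:
        count[cited] += 1.0 / (n_authors[cited] * n_refs[citing])
    return dict(count)
```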

17.
Biomedical research encompasses diverse types of activities, from basic science (“bench”) to clinical medicine (“bedside”) to bench-to-bedside translational research. It remains unclear, however, whether different types of research receive citations at different rates. Here we aim to answer this question using a newly proposed paper-level indicator that quantifies the extent to which a paper is basic science or clinical medicine. Applying this measure to 5 million biomedical papers, we find a systematic citation disadvantage for clinically oriented papers: they tend to garner far fewer citations and are less likely to be hit works than papers oriented towards basic science. At the same time, clinical research shows a higher variance in its citations. We also find that the citation difference between basic and clinical research decreases, yet still persists, when a longer citation window is used. Given the increasing adoption of short-term, citation-based bibliometric indicators in funding decisions, the under-citation of clinical research may discourage biomedical researchers from venturing into the translation of basic scientific discoveries into clinical applications, thus helping to explain the gap between basic and clinical research that has been described as a “valley of death” and the commentary on the “extinction” risk of translational researchers. Our work may provide insights for policy-makers on how to evaluate different types of biomedical research.

18.
Most current machine learning methods for building search engines are based on the assumption that there is a target evaluation metric that evaluates the quality of the search engine with respect to an end user, and that the engine should be trained to optimize for that metric. Treating the target evaluation metric as given, many different approaches (e.g. LambdaRank, SoftRank, RankingSVM) have been proposed for optimizing retrieval metrics. Target metrics used in optimization act as bottlenecks that summarize the training data, and it is known that some evaluation metrics are more informative than others. In this paper, we consider the effect of the target evaluation metric on learning to rank. In particular, we question the current assumption that retrieval systems should be designed to directly optimize for a metric that is assumed to evaluate user satisfaction. We show that even if user satisfaction can be measured by a metric X, optimizing the engine on a training set for a more informative metric Y may result in better test performance according to X (compared to optimizing the engine directly for X on the training set). We analyze when the two cases differ significantly, in terms of the amount of available training data and the dimensionality of the feature space.

19.
We analyzed serial citations in 72 marine biology master's theses as well as the ranking data of aquatic science serials from six global serial citation metrics, to assess serial use by marine biology graduate students from two perspectives. From 1,035 unique serials, a core of 123 titles was identified and evaluated for access. Citation ages averaged 13.5 years, with 27% of serial citations dated prior to 1996. Students cited serials from disciplines beyond marine biology, demonstrating broad title dispersion compared to several other studies. Recommendations are offered for future investigation to develop a stronger understanding of graduate students’ use of the library's serial collection.

20.
This paper applies Google's PageRank principle to journal citation analysis and proposes an indicator of a journal's influence within the citation network: the Journal Impact Rank in Citation Net (Impact Rank, or IR). IR was computed for 118 journals in the biological sciences, and the results were compared statistically with the Impact Factor (IF) values reported in the JCR to examine their correlation and differences. The results show that IR correlates only weakly with IF and that the difference between them is statistically significant. The reason is that IR accounts for the weight of citing journals and the mutual influence among journals, so it better reflects a journal's relative influence within the citation network of its discipline or field; IF, being in essence the average number of citations per paper, ignores inter-journal relationships and the authority of citing journals, and is therefore better suited to a journal's longitudinal self-evaluation. IR and IF evaluate journal influence from two different angles and can complement each other.
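A sketch of an IR-style computation under the usual assumptions: PageRank over a weighted journal-to-journal citation graph with the conventional 0.85 damping. The journal names and citation counts below are toy data, and the paper's exact IR formula may differ in detail.

```python
import networkx as nx

# journal-to-journal citation volumes (toy numbers)
edges = [("J1", "J2", 120), ("J2", "J1", 30), ("J3", "J1", 75),
         ("J3", "J2", 10), ("J1", "J3", 5)]
G = nx.DiGraph()
G.add_weighted_edges_from(edges)      # edge weight = citation volume

# IR-style score: a journal inherits influence from the journals
# that cite it, weighted by how often they do so
ir = nx.pagerank(G, alpha=0.85, weight="weight")
print(sorted(ir.items(), key=lambda kv: -kv[1]))
```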
