Similar Documents
20 similar documents found (search time: 515 ms)
1.
We evaluate author impact indicators and ranking algorithms on two publication databases using large test data sets of well-established researchers. The test data consists of (1) ACM fellows and (2) recipients of various life-time achievement awards. We also evaluate different approaches to dividing the credit for papers among co-authors and analyse the impact of self-citations. Furthermore, we evaluate different graph normalisation approaches for computing PageRank on author citation graphs. We find that PageRank outperforms citation counts in identifying well-established researchers. This holds true both when PageRank is computed on author citation graphs and when it is computed on paper citation graphs with paper scores divided among co-authors. In general, the best results are obtained when co-authors receive an equal share of a paper's score, independent of which impact indicator is used to compute paper scores. The results also show that removing author self-citations improves the results of most ranking metrics. Lastly, we find that on the paper level it is more important to personalise the PageRank algorithm appropriately than to decide whether to include or exclude self-citations, whereas on the author level graph normalisation is more important than personalisation.
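To make the equal-share credit division concrete, here is a minimal sketch assuming the networkx library: PageRank is computed on a toy paper citation graph and each paper's score is split evenly among its co-authors. All paper IDs, author names, and the damping factor are illustrative, not the study's data.

```python
import networkx as nx

# Directed paper citation graph: an edge p1 -> p2 means p1 cites p2 (toy data).
papers = nx.DiGraph([("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p4", "p3")])
paper_scores = nx.pagerank(papers, alpha=0.85)

# Co-author lists for each paper (invented).
authors = {"p1": ["A", "B"], "p2": ["B"], "p3": ["A", "C"], "p4": ["C"]}

# Equal-share division: each of the k co-authors receives score / k.
author_scores = {}
for paper, score in paper_scores.items():
    share = score / len(authors[paper])
    for a in authors[paper]:
        author_scores[a] = author_scores.get(a, 0.0) + share

print(sorted(author_scores.items(), key=lambda kv: -kv[1]))
```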

2.
In the past, recursive algorithms such as PageRank, originally conceived for the Web, have been successfully used to rank nodes in the citation networks of papers, authors, or journals. They have been shown to measure prestige rather than popularity, unlike citation counts. However, bibliographic networks, in contrast to the Web, have some specific features that enable assigning different weights to citations, thus adding more information to the process of finding prominence. For example, a citation between two authors may be weighted according to whether and when those two authors collaborated with each other, information that can be found in the co-authorship network. In this study, we define a set of PageRank modifications that weight citations between authors differently based on information from the co-authorship graph. In addition, we put emphasis on the time of publications and citations. We test our algorithms on Web of Science data for computer science journal articles and determine the most prominent computer scientists in the 10-year period 1996–2005. Besides a correlation analysis, we also compare our rankings to the lists of ACM A. M. Turing Award and ACM SIGMOD E. F. Codd Innovations Award winners and find that the new time-aware methods outperform standard PageRank and its time-unaware weighted variants.
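As a hedged illustration of this kind of collaboration-aware weighting (not the paper's exact scheme), the sketch below down-weights citations between authors known to have co-authored before running PageRank. It assumes networkx; the 0.5 discount and the toy names and counts are invented.

```python
import networkx as nx

coauthors = {("A", "B")}  # unordered pairs known to have co-authored (toy data)

G = nx.DiGraph()
citations = [("A", "B", 3), ("C", "B", 1), ("B", "C", 2), ("A", "C", 1)]
for citer, cited, count in citations:
    w = float(count)
    if (citer, cited) in coauthors or (cited, citer) in coauthors:
        w *= 0.5  # assumed discount for citations between collaborators
    G.add_edge(citer, cited, weight=w)

ranks = nx.pagerank(G, alpha=0.85, weight="weight")
print(sorted(ranks.items(), key=lambda kv: -kv[1]))
```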

3.
We analyse the difference between the averaged (average of ratios) and globalised (ratio of averages) author-level aggregation approaches based on various paper-level metrics. We evaluate the aggregation variants in terms of (1) their field bias on the author level and (2) their ranking performance based on test data comprising researchers who have received fellowship status or won prestigious awards for their long-lasting and high-impact research contributions to their fields. We consider various direct and indirect paper-level metrics with different normalisation approaches (mean-based, percentile-based, co-citation-based) and focus on the bias and performance differences between the two aggregation variants of each metric. We execute all experiments on two publication databases which use different field categorisation schemes. The first uses author-chosen concept categories and covers the computer science literature. The second covers all disciplines and categorises papers by keywords based on their contents. In terms of bias, we find relatively little difference between the averaged and globalised variants. For mean-normalised citation counts we find no significant difference between the two approaches. However, the percentile-based metric shows less bias with the globalised approach, except for citation windows smaller than four years. On the multi-disciplinary database, PageRank has the least overall bias but shows no significant difference between the two aggregation variants. The averaged variants of most metrics have less bias for small citation windows. For larger citation windows the differences are smaller and mostly insignificant. In terms of ranking the well-established researchers who have received accolades for their high-impact contributions, we find that the globalised variant of the percentile-based metric performs better. Again, we find no significant differences between the globalised and averaged variants based on citation counts and PageRank scores.
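The difference between the two aggregation variants is easiest to see in code. The minimal sketch below uses invented citation counts and expected values (e.g., mean citations of comparable papers) and shows that the two variants generally disagree.

```python
citations = [10, 2, 7]
expected  = [5.0, 4.0, 3.5]  # e.g. mean citations of similar papers (invented)

# Averaged ("average of ratios"): normalise each paper, then average.
averaged = sum(c / e for c, e in zip(citations, expected)) / len(citations)

# Globalised ("ratio of averages"): average first, then normalise once.
globalised = (sum(citations) / len(citations)) / (sum(expected) / len(expected))

print(averaged, globalised)  # the two author-level scores generally differ
```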

4.
A standard procedure in citation analysis is that all papers published in one year are assessed at the same later point in time, implicitly treating all publications as if they were published on the exact same date. This leads to a systematic bias in favor of early-month publications and against late-month publications. This contribution analyses the size of this distortion on a large body of publications from all disciplines over citation windows of up to 15 years. It is found that early-month publications enjoy a substantial citation advantage, which arises from citations received in the first three years after publication. While the advantage is stronger for author self-citations than for citations from others, it cannot be eliminated by excluding self-citations. The bias decreases only slowly over longer citation windows due to the continuing influence of the earlier years' citations. Because of the substantial extent and long persistence of the distortions, it would be useful to remove or control for this bias in research and evaluation studies which use citation data. It is demonstrated that this can be achieved by using the newly introduced concept of month-based citation windows.
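A minimal sketch of the month-based citation window idea, assuming publication and citation dates are available at month resolution; all dates below are toy values.

```python
def months_between(pub, cit):
    """Whole months from publication (year, month) to citation (year, month)."""
    return (cit[0] - pub[0]) * 12 + (cit[1] - pub[1])

def window_count(pub_date, citation_dates, window_months):
    """Citations falling within window_months of the publication month."""
    return sum(0 <= months_between(pub_date, d) < window_months
               for d in citation_dates)

cites = [(2010, 12), (2011, 3), (2011, 11), (2013, 1)]
# A November 2010 paper and a February 2010 paper, same 12-month window:
print(window_count((2010, 11), cites, 12))  # 2 of the citations fall inside
print(window_count((2010, 2), cites, 12))   # only 1 does
```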

5.
The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of university rankings have been proposed to quantify it, yet the debate on what exactly rankings are measuring endures. To address the issue we have measured a quantitative and reliable proxy of the academic reputation of a given institution and compared our findings with well-established impact indicators and academic rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and apply the PageRank algorithm to the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so the PageRank algorithm is expected to yield a ranking which reflects the reputation of an academic institution in a specific field. Given the volume of the data analysed, our findings are statistically sound and less prone to bias than, for instance, the ad hoc surveys often employed by ranking bodies to attain similar outcomes. The approach proposed in our paper may help enhance ranking methodologies by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.
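A hedged sketch of this construction: paper-level citations are aggregated by affiliation into an institution-level graph, and PageRank on that graph serves as the reputation proxy. It assumes networkx; the institutions and citation pairs are invented.

```python
import networkx as nx

# (citing paper's institution, cited paper's institution) pairs in one field
pairs = [("MIT", "Stanford"), ("Oxford", "MIT"), ("Stanford", "MIT"),
         ("Oxford", "Stanford"), ("MIT", "Oxford")]

G = nx.DiGraph()
for src, dst in pairs:
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1  # aggregate repeated citations into weights
    else:
        G.add_edge(src, dst, weight=1)

reputation = nx.pagerank(G, weight="weight")
print(sorted(reputation.items(), key=lambda kv: -kv[1]))
```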

6.
We evaluate article-level metrics along two dimensions. Firstly, we analyse metrics' ranking bias in terms of fields and time. Secondly, we evaluate their performance based on test data that consists of (1) papers that have won high-impact awards and (2) papers that have won prizes for outstanding quality. We consider different citation impact indicators and indirect ranking algorithms in combination with various normalisation approaches (mean-based, percentile-based, co-citation-based, and post hoc rescaling). We execute all experiments on two publication databases which use different field categorisation schemes (author-chosen concept categories and categories based on papers' semantic information). In terms of bias, we find that citation counts are always less time biased but always more field biased than PageRank. Furthermore, rescaling paper scores against a constant number of similarly aged papers reduces time bias more effectively than normalising by calendar years. We also find that percentile citation scores are less field and time biased than mean-normalised citation counts. In terms of performance, we find that time-normalised metrics identify high-impact papers better shortly after their publication than their non-normalised variants do. However, after 7 to 10 years, the non-normalised metrics perform better. A similar trend exists for the set of high-quality papers, where these performance cross-over points occur after 5 to 10 years. Lastly, we find that personalising PageRank with papers' citation counts reduces time bias but increases field bias. Similarly, using papers' associated journal impact factors to personalise PageRank increases its field bias. In terms of performance, PageRank should always be personalised with papers' citation counts and time-rescaled for citation windows smaller than 7 to 10 years.
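One plausible reading of the "constant number of similarly aged papers" rescaling is sketched below: each paper's score becomes a percentile among a fixed-size window of papers of similar age. The window size and scores are invented, and the study's exact procedure may differ.

```python
import bisect

def percentile_among(score, reference_scores):
    """Fraction of reference papers with a strictly lower score."""
    ref = sorted(reference_scores)
    return bisect.bisect_left(ref, score) / len(ref)

# papers as (publication_order_index, score), already sorted by age (toy data)
papers = [(i, s) for i, s in enumerate([3, 9, 1, 4, 12, 2, 8, 5, 7, 6])]
K = 4  # assumed size of the similarly-aged reference set (2 on each side)

for i, score in papers:
    lo, hi = max(0, i - K // 2), min(len(papers), i + K // 2 + 1)
    ref = [s for j, s in papers[lo:hi] if j != i]
    print(i, round(percentile_among(score, ref), 2))
```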

7.
Despite the increasing use of citation-based metrics for research evaluation purposes, we do not yet know which metrics best deliver on their promise to gauge the significance of a scientific paper or a patent. We assess 17 network-based metrics by their ability to identify milestone papers and patents in three large citation datasets. We find that traditional information-retrieval evaluation metrics are strongly affected by the interplay between the age distribution of the milestone items and the age biases of the evaluated metrics. Outcomes of these metrics are therefore not representative of the metrics' ranking ability. We argue in favor of a modified evaluation procedure that explicitly penalizes biased metrics and allows us to reveal performance patterns that are consistent across the datasets. PageRank and LeaderRank turn out to be the best-performing ranking metrics when their age bias is suppressed by a simple transformation of the scores that they produce, whereas other popular metrics, including citation count, HITS and Collective Influence, produce significantly worse ranking results.
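A generic version of such a score transformation is to z-score each item's metric against items of similar age, as sketched below. The window size and score values are assumptions, and the paper's actual transformation may differ.

```python
import statistics

def rescale(scores_by_age, half_window=2):
    """Z-score each item against its neighbours in publication order."""
    out = []
    for i, s in enumerate(scores_by_age):
        lo = max(0, i - half_window)
        hi = min(len(scores_by_age), i + half_window + 1)
        ref = scores_by_age[lo:hi]
        mu, sigma = statistics.mean(ref), statistics.pstdev(ref)
        out.append((s - mu) / sigma if sigma > 0 else 0.0)
    return out

# Raw scores ordered from oldest to youngest paper (toy values):
print(rescale([50, 40, 30, 8, 6, 4, 2, 1]))  # old papers no longer dominate
```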

8.
Using co-word analysis, papers in the field of link analysis were retrieved from the CNKI database and their high-frequency keywords identified; a keyword co-word matrix was built with Bicomb, and factor analysis and cluster analysis were performed in SPSS to explore the current state and hot topics of link analysis research in China. The study finds that the main methods applied in link analysis include citation analysis, co-link analysis, visualization, and social network analysis; that link analysis algorithms mainly include PageRank, HITS, and web page ranking; and that applied research concentrates on the evaluation of web information resources, of websites' web influence, and of universities.
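As a small illustration of the co-word matrix step (the study itself used Bicomb and SPSS), the sketch below counts keyword co-occurrences across toy papers in plain Python; the keyword lists are invented examples.

```python
from itertools import combinations
from collections import Counter

papers = [
    ["link analysis", "PageRank", "university evaluation"],
    ["link analysis", "co-link analysis", "social network analysis"],
    ["PageRank", "HITS", "link analysis"],
]

# Count how often each keyword pair co-occurs within the same paper.
cooccur = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooccur[(a, b)] += 1

for pair, n in cooccur.most_common(5):
    print(pair, n)
```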

9.
Citation network research based on network structure mining algorithms (cited 1 time: 0 self-citations, 1 by others)
Building on a comparative analysis of two typical network structure mining algorithms (HITS and PageRank), this paper applies PageRank to a large-scale citation network. For a citation network of 236,517 SCI articles, the PageRank value of every paper is computed, and its relationship to the commonly used citation count indicator is analysed in depth. The analysis shows that PageRank values correlate strongly with citation counts and exhibit a similar power-law distribution, but the PageRank algorithm distinguishes the potential importance of highly cited papers better and substantially weakens the influence of author self-citations on the objectivity of paper evaluation.
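A hedged sketch of this kind of comparison, assuming networkx and scipy: PageRank values and citation counts (in-degree) are computed on the same graph and their rank correlation is checked. A small random graph stands in for the 236,517-article network.

```python
import networkx as nx
from scipy.stats import spearmanr

G = nx.gnp_random_graph(500, 0.02, directed=True, seed=42)
pr = nx.pagerank(G)

nodes = list(G.nodes())
pagerank_vals = [pr[n] for n in nodes]
citation_counts = [G.in_degree(n) for n in nodes]

rho, _ = spearmanr(pagerank_vals, citation_counts)
print(f"Spearman correlation: {rho:.3f}")  # typically strongly positive
```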

10.
Application and improvement of centrality indicators in journal citation network analysis (cited 2 times: 1 self-citation, 1 by others)
Using the Social Sciences edition of ISI's Journal Citation Reports (JCR), citation data for 56 international library and information science journals from 2005-2007 were collected to study the application of social network centrality indicators to journal citation network analysis. The study finds that the original centrality indicators ignore the effect of the number of papers a journal publishes on its total citation count. To remedy this shortcoming, an improved indicator, average centrality, is proposed, and exploratory factor analysis is used to examine the relationships among the centrality indicators.
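A minimal sketch of the average-centrality correction, assuming networkx: a journal's centrality in the citation network is divided by the number of papers it published, so large journals are not automatically favoured. Journal names and paper counts are invented.

```python
import networkx as nx

G = nx.DiGraph([("J1", "J2"), ("J3", "J2"), ("J2", "J1"), ("J4", "J2")])
papers_published = {"J1": 120, "J2": 400, "J3": 60, "J4": 90}  # assumed counts

degree_centrality = nx.in_degree_centrality(G)
average_centrality = {j: degree_centrality[j] / papers_published[j]
                      for j in G.nodes()}
print(average_centrality)
```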

11.
Questionable publications have been accused of "greedy" practices; however, their influence on academia has not been gauged. Here, we probe the impact of questionable publications through a systematic and comprehensive analysis with various participants from academia and compare the results with those of their unaccused counterparts, using billions of citation records covering both liaisons, i.e., journals and publishers, and prosumers, i.e., authors. Questionable publications attribute publisher-level self-citations to their journals while limiting journal-level self-citations; yet, conventional journal-level metrics are unable to detect these publisher-level self-citations. We propose a hybrid journal-publisher metric for detecting publishers' self-favouring citations to their questionable journals (QJs). Additionally, we demonstrate that the questionable publications were less disruptive and influential than their counterparts. Our findings indicate an inflated citation impact of suspicious academic publishers. These findings provide a basis for actionable policy-making against questionable publications.
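A hedged sketch in the spirit of such a hybrid metric (not the paper's exact formula): for each journal, the fraction of incoming citations that originate from other journals of the same publisher. All names and counts are invented.

```python
publisher_of = {"J1": "P1", "J2": "P1", "J3": "P2"}
# (citing journal, cited journal, citation count) -- toy records
citations = [("J1", "J2", 80), ("J2", "J1", 70), ("J3", "J1", 10), ("J3", "J2", 5)]

def publisher_self_citation_rate(journal):
    """Share of a journal's incoming citations sent by same-publisher journals."""
    incoming = [(src, n) for src, dst, n in citations if dst == journal]
    total = sum(n for _, n in incoming)
    same = sum(n for src, n in incoming
               if publisher_of[src] == publisher_of[journal] and src != journal)
    return same / total if total else 0.0

for j in publisher_of:
    print(j, round(publisher_self_citation_rate(j), 2))
```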

12.
Constructing academic networks to explore intellectual structure and detect academic communities, which can promote scientific research innovation and disciplinary progress, constitutes an important research topic. In this study, co-citation and coupling relations are fused with direct citation, forming a tripartite citation relation that weights the strength of direct citations, and all-author tripartite citation networks are constructed to credit the contributions of all authors to the resulting publications. To explore the potential of the all-author exclusive and inclusive tripartite citation networks, gene editing is taken as a case study. Extensive experimental comparisons are conducted against traditional author single-citation networks and the first-author tripartite citation network in terms of network structure characteristics, identification of core scholars, and exploration of intellectual structures. The following conclusions can be drawn: our all-author tripartite citation networks help identify the most influential scholars in the field of gene editing, and the intellectual structures obtained from the exclusive tripartite citation networks are optimal.
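One hedged reading of this fusion is sketched below: the weight of a direct citation link is raised by the two papers' bibliographic coupling and co-citation counts. The combination coefficients and toy data are assumptions, not the study's parameters.

```python
refs = {  # paper -> set of references (toy data)
    "p1": {"p3", "r1", "r2"},
    "p2": {"p3", "r2"},
    "p3": {"r1"},
}
citers = {"p3": {"p1", "p2"}, "r1": {"p1", "p3"}, "r2": {"p1", "p2"}}

def fused_weight(citing, cited, a=1.0, b=0.5, c=0.5):
    """Direct citation baseline a, boosted by coupling and co-citation."""
    coupling = len(refs.get(citing, set()) & refs.get(cited, set()))
    cocitation = len(citers.get(citing, set()) & citers.get(cited, set()))
    return a + b * coupling + c * cocitation

print(fused_weight("p1", "p3"))  # shared references raise the link weight
```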

13.
As the volume of scientific articles has grown rapidly over the last decades, evaluating their impact has become critical for tracing valuable and significant research output. Many studies have proposed various ranking methods to estimate the prestige of academic papers using bibliometric methods. However, the weight of the links in bibliometric networks has rarely been considered for article ranking in the existing literature. Such incomplete treatment in bibliometric methods can lead to biased ranking results. Therefore, a novel scientific article ranking algorithm, W-Rank, is introduced in this study, built on a weighting scheme that assigns weights to the links of the citation network and the authorship network by measuring citation relevance and author contribution. Combining the weighted bibliometric networks with a propagation algorithm, W-Rank obtains article ranking results that are more reasonable than those of existing PageRank-based methods. Experiments are conducted on both the arXiv hep-th and Microsoft Academic Graph datasets to verify W-Rank and compare it with three renowned article ranking algorithms. The experimental results show that the proposed weighting scheme helps W-Rank achieve more accurate ranking results and, from certain perspectives, outperform the other algorithms.
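A hedged sketch of the two weighting ingredients named here, with stand-in choices: citation relevance as a Jaccard overlap of keyword sets, and author contribution as harmonic, position-based credit. The paper's actual measures may differ.

```python
def citation_weight(kw_citing, kw_cited):
    """Citation relevance as Jaccard overlap of the two papers' keywords."""
    a, b = set(kw_citing), set(kw_cited)
    return len(a & b) / len(a | b) if a | b else 0.0

def harmonic_contributions(n_authors):
    """Position-based credit: the i-th author gets (1/i) / sum_j (1/j)."""
    inv = [1.0 / i for i in range(1, n_authors + 1)]
    total = sum(inv)
    return [w / total for w in inv]

print(citation_weight(["ranking", "citation"], ["citation", "network"]))
print(harmonic_contributions(3))  # first author credited most
```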

14.
Collaboration in science is a process in which two or more authors share their ideas, resources and data to create a joint work. This research compares the coauthorship networks of Iranian articles in library and information science (LIS), psychology (PSY), management (MNG), and economics (ECO) in the ISI Web of Knowledge database during 2000–2009, and uses network analysis to visualize the coauthorship networks. The data include all articles with at least one Iranian author indexed in ISI's Social Science Citation Index (SSCI) for the fields of LIS, PSY, MNG, and ECO. Indicators such as the Collaborative Index (CI), Degree of Collaboration (DC) and Collaboration Coefficient (CC) were calculated for each discipline, as sketched below. Results show that papers most commonly had two or three authors, and authors in PSY tended to have more multi-authored articles than those in the other disciplines. LIS ranked lowest on CC. MNG had the densest coauthorship network, and PSY the sparsest. Iranian authors in the field of PSY mostly collaborated with those in the U.S., while LIS and MNG authors tended to collaborate with U.K. authors, and ECO authors with Canadians.
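The three indicators have standard definitions in terms of f_j, the number of papers with j authors; a minimal sketch with invented counts:

```python
def collaboration_indicators(f):
    """f: dict mapping number of authors j -> number of papers with j authors."""
    N = sum(f.values())
    ci = sum(j * n for j, n in f.items()) / N          # Collaborative Index
    dc = 1 - f.get(1, 0) / N                           # Degree of Collaboration
    cc = 1 - sum(n / j for j, n in f.items()) / N      # Collaboration Coefficient
    return ci, dc, cc

# e.g. 10 single-authored, 25 two-authored, 15 three-authored papers
print(collaboration_indicators({1: 10, 2: 25, 3: 15}))
```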

15.
The journal impact factor is not comparable among fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential of the journal's topic, and we employ this citation potential in the normalization of the journal impact factor to make it comparable between scientific fields. An empirical application comparing several impact indicators with our topic-normalized impact factor on a set of 224 journals from four different fields shows that our normalization, using the citation potential of the journal's topic, reduces the between-group variance relative to the within-group variance in a higher proportion than the rest of the indicators analyzed. The effect of journal self-citations on the normalization process is also studied.
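A minimal sketch of this normalization under simplifying assumptions: the citation potential is the citation-weighted aggregate impact factor of the citing journals, and the journal's impact factor is divided by it. All values are invented, and the paper's exact formula may differ.

```python
impact_factor = {"J1": 2.0, "J2": 8.0, "J3": 1.0}
# citing journal -> number of citations it sends to the journal under study (J1)
citing = {"J2": 50, "J3": 30}

target_if = impact_factor["J1"]
total = sum(citing.values())
citation_potential = sum(impact_factor[j] * n for j, n in citing.items()) / total

print(target_if / citation_potential)  # topic-normalised impact factor
```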

16.
The "regulation" of the impact factor by self-citation in natural science journals (cited 14 times: 0 self-citations, 0 by others)
Li Yunjing, Hou Hanqing. 情报学报 (Journal of the China Society for Scientific and Technical Information), 2006, 25(2): 172-178
Using the Chinese Science and Technology Journal Citation Reports (《中国科技期刊引证报告》), this paper recalculates the impact factors of a number of journals in several disciplines after removing self-citations, and compares the impact factors and journal rankings before and after removal to examine the effect of journal self-citation on the impact factor and journal rankings. The investigation finds that excessive self-citation by certain journals has already distorted journal rankings. Finally, suggestions are offered on how to curb this phenomenon.
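A minimal sketch of the recalculation, assuming simple two-year impact factor arithmetic with invented counts:

```python
def impact_factor(cites_to_prev2yrs, items_prev2yrs, self_cites=0):
    """Two-year impact factor, optionally with self-citations removed."""
    return (cites_to_prev2yrs - self_cites) / items_prev2yrs

total_cites, self_cites, items = 600, 240, 200
print(impact_factor(total_cites, items))              # with self-citations: 3.0
print(impact_factor(total_cites, items, self_cites))  # without: 1.8
```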

17.
Several studies have reported metrics for measuring the influence of scientific topics from different perspectives; however, current ranking methods ignore the reinforcing effect of other academic entities on topic influence. In this paper, we develop an effective topic ranking model, 4ERRank, by modeling the influence transfer mechanism among all academic entities in a complex academic network, using a four-layer network design that incorporates the strengthening effect of multiple entities on topic influence. The PageRank algorithm is used to calculate the initial influence of topics, papers, authors, and journals in a homogeneous network, whereas the HITS algorithm expresses the mutual reinforcement between topics, papers, authors, and journals in a heterogeneous network, iteratively calculating the final topic influence values. Taking a specific interdisciplinary domain, social media data, we applied the 4ERRank model to the 19,527 topics that met the inclusion criteria. The experimental results demonstrate that the 4ERRank model successfully synthesizes the performance of classic co-word metrics and effectively reflects highly cited topics. This study enriches the methodology for assessing topic impact and contributes to future topic-based retrieval and prediction tasks.
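As a generic illustration of the HITS-style mutual reinforcement between layers (a stand-in, not the 4ERRank model itself), the sketch below runs HITS on a small topic-paper graph using networkx, so topics are scored by the authority of the papers they point to. Topic and paper names are invented.

```python
import networkx as nx

G = nx.DiGraph()
# topic -> paper edges: a topic "points to" the papers that discuss it
edges = [("t:ranking", "p1"), ("t:ranking", "p2"),
         ("t:social-media", "p2"), ("t:social-media", "p3")]
G.add_edges_from(edges)

hubs, authorities = nx.hits(G, max_iter=100)
topic_influence = {n: s for n, s in hubs.items() if n.startswith("t:")}
print(topic_influence)  # topics scored via the authority of their papers
```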

18.
This article briefly introduces Citation Analysis in Research Evaluation, a monograph by the renowned Dutch scientometrician Henk Moed. The book is organised into eight main parts, discussing the evaluation of basic science research groups and scientific journals, the ISI citation indexes, evaluation in the social sciences and humanities, accuracy issues, and the relationship between citation analysis and peer review, among other topics. The book's conclusions offer theoretical guidance and practical reference for the effective use of citation analysis methods.

19.
In an age of intensifying scientific collaboration, the counting of papers by multiple authors has become an important methodological issue in scientometrics-based research evaluation. In particular, how counting methods influence institution-level research evaluation has not been studied in the existing literature. In this study, we selected the top 300 universities in physics in the 2011 HEEACT Ranking as our study subjects. We compared the university rankings generated by four different counting methods (whole counting, straight counting using the first author, straight counting using the corresponding author, and fractional counting) to show how paper counts, citation counts, and the resulting university ranks were affected by the choice of counting method. The counting was based on the 1988–2008 physics paper records indexed in ISI WoS. We also observed how paper and citation counts were inflated by whole counting. The results show that counting methods affected universities in the middle range more than those in the upper or lower ranges. Citation counts were also more affected than paper counts. The correlation between the rankings generated by whole counting and those from the other methods was low or negative in the middle ranges. Based on these findings, the study concludes that straight counting and fractional counting are better choices for paper counts and citation counts in institution-level research evaluation.
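The four counting methods compare directly in code. The sketch below applies each to one toy paper record; the corresponding-author index is an assumption of the record format.

```python
paper = {"affils": ["U1", "U2", "U1"], "corresponding": 1}  # byline order

def credits(paper, method):
    affils = paper["affils"]
    if method == "whole":            # every institution gets full credit
        return {u: 1.0 for u in set(affils)}
    if method == "straight_first":   # only the first author's institution
        return {affils[0]: 1.0}
    if method == "straight_corr":    # only the corresponding author's institution
        return {affils[paper["corresponding"]]: 1.0}
    if method == "fractional":       # credit split by share of authors
        return {u: affils.count(u) / len(affils) for u in set(affils)}

for m in ["whole", "straight_first", "straight_corr", "fractional"]:
    print(m, credits(paper, m))
```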
