Similar Articles
20 similar articles found
1.
This article provides the first comparison of citation counts and mentoring impact (MPACT) indicators, which quantify the process of doctoral mentoring. Using a dataset of 120 library and information science (LIS) faculty members in North America, this article examines the correlation between MPACT indicators and citation counts. Results suggest that MPACT indicators measure something distinct from citation counts. The article discusses these distinctions, with emphasis on differences between faculty ranks. It considers possible explanations for the weak correlations between citations and mentoring at the full professor rank, as well as implications for faculty activity analysis and broader institutional evolution.
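A comparison like this typically rests on a rank correlation. The sketch below is a generic illustration with invented indicator values, not the paper's code; it computes Spearman's rho, a standard choice for comparing two indicator rankings:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):  # tied values share their mean rank
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            mean_rank = (i + j) / 2.0 + 1.0
            for k in range(i, j + 1):
                r[order[k]] = mean_rank
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den if den else 0.0

# hypothetical MPACT values and citation counts for five faculty members
print(spearman_rho([3, 1, 4, 2, 5], [30, 12, 25, 20, 50]))  # 0.9
```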

2.
For comparisons of citation impacts across fields and over time, bibliometricians normalize the observed citation counts with reference to an expected citation value. Percentile-based approaches have been proposed as a non-parametric alternative to parametric central-tendency statistics. Percentiles are based on an ordered set of citation counts in a reference set, whereby the fraction of papers at or below the citation counts of a focal paper is used as an indicator of its relative citation impact in the set. In this study, we pursue two related objectives: (1) although different percentile-based approaches have been developed, none so far satisfies criteria such as scaling percentile ranks from zero (all other papers perform better) to 100 (all other papers perform worse) and resolving tied citation ranks unambiguously. We introduce a new citation-rank approach with these properties, P100. (2) We compare the reliability of P100 empirically with other percentile-based approaches, such as those developed by the SCImago group, the Centre for Science and Technology Studies (CWTS), and Thomson Reuters (InCites), using all papers published in 1980 in Thomson Reuters Web of Science (WoS). How accurately can the different approaches predict the long-term citation impact in 2010 (year 31) using citation impact measured in earlier time windows (years 1-30)? The comparison shows that the method used by InCites overestimates citation impact (because it uses the highest percentile rank when papers are assigned to more than one subject category), whereas the SCImago indicator shows higher power in predicting long-term citation impact on the basis of citation rates in early years. Since the results show a disadvantage in this predictive ability for P100 against the other approaches, there is still room for further improvement.
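For illustration, here is a minimal sketch of a P100-style rank under one reading of the stated properties: ranks are assigned to the unique citation values of the reference set, so tied papers share a rank unambiguously, and the scale runs from 0 (lowest) to 100 (highest):

```python
from bisect import bisect_left

def p100(citations, reference_set):
    """P100-style percentile rank on unique citation values: 0 for the
    lowest unique value, 100 for the highest; ties are unambiguous
    because papers with equal counts share the same rank."""
    unique = sorted(set(reference_set))
    if len(unique) == 1:
        return 100.0
    rank = bisect_left(unique, citations)  # position among unique values
    return 100.0 * rank / (len(unique) - 1)

ref = [0, 0, 1, 3, 3, 7, 12]  # toy reference set containing the focal paper
print(p100(3, ref))           # 50.0: both papers with 3 citations share it
```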

3.
Citation numbers are extensively used for assessing the quality of scientific research. The use of raw citation counts is generally misleading, especially in cross-disciplinary comparisons, since the average number of citations received depends strongly on the paper's scientific discipline. Measuring and eliminating biases in citation patterns is crucial for a fair use of citation numbers. Several numerical indicators have been introduced with this aim, but a specific statistical test for estimating the fairness of these indicators has so far been lacking. Here we present a statistical method for estimating the effectiveness of numerical indicators in suppressing citation biases. The method is simple to implement and can easily be generalized to various scenarios. As a practical example, we test, in a controlled case, the fairness of the fractional citation count, which has recently been proposed as a tool for cross-discipline comparison. We show that this indicator is unable to remove biases in citation patterns and performs much worse than rescaling citation counts by average values.
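The two indicators under test can be stated compactly. A sketch using the standard definitions (not the paper's exact implementation): rescaling divides a paper's citations by its field mean, while fractional counting credits each incoming citation as 1/R, with R the citing paper's reference-list length:

```python
def rescaled_count(citations, field_counts):
    """c_f = c / <c>_field: divide by the mean citations of the field."""
    mean = sum(field_counts) / len(field_counts)
    return citations / mean if mean else 0.0

def fractional_count(citing_ref_lengths):
    """Each citing paper contributes 1/R, with R its number of references."""
    return sum(1.0 / r for r in citing_ref_lengths if r > 0)

print(rescaled_count(12, [2, 4, 6, 12]))  # 2.0: twice the field mean
print(fractional_count([30, 10, 60]))     # 0.15 from three citing papers
```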

4.
Over the past decade, national research evaluation exercises, traditionally conducted using the peer review method, have begun opening to bibliometric indicators. The citations received by a publication are assumed to be a proxy for its quality, but they require standardization prior to use in comparative evaluations of organizations or individual scientists, because citation behavior varies across research fields. The objective of this paper is to compare the effectiveness of different methods of normalizing citations, in order to provide useful indications to research assessment practitioners. Simulating a typical national research assessment exercise, the analysis covers all subject categories in the hard sciences and is based on the Thomson Reuters Science Citation Index-Expanded®. Comparisons show that the citation average is the most effective scaling parameter when the average is based only on the publications actually cited.
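The preferred scaling parameter, the average computed only over cited publications, is straightforward; a minimal sketch with invented field data:

```python
def cited_only_mean(field_counts):
    """Mean citations computed only over publications that were cited."""
    cited = [c for c in field_counts if c > 0]
    return sum(cited) / len(cited) if cited else 0.0

field = [0, 0, 3, 5, 0, 10]      # toy field with three uncited papers
print(sum(field) / len(field))   # 3.0: plain mean, dragged down by uncited papers
print(cited_only_mean(field))    # 6.0: the scaling parameter favoured here
```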

5.
Citation identity: a concept worth noting
Addressing the limitations of citation counts in evaluating research impact, this paper introduces citation identity analysis, a citing-side alternative to conventional citation analysis. It analyzes the composition and characteristics of a citation identity, explores its relationships with scholarly style and citation depth, and then presents an empirical case study of the late information scientist Professor Wang Chongde, comparing two different citation identities and discussing the concentration-dispersion distribution pattern of citation identities. The paper calls for more research on, and application of, citation identity in the Chinese information science community.
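In White's original sense, a citation identity is the tally of authors a given author cites across their publications; its concentration-dispersion profile shows whom the author relies on most. A minimal sketch with hypothetical names:

```python
from collections import Counter

def citation_identity(reference_lists):
    """Tally every author cited across one scholar's publications."""
    identity = Counter()
    for refs in reference_lists:
        identity.update(refs)
    return identity

papers = [["Price", "Garfield"], ["Garfield", "Brookes"], ["Garfield"]]
print(citation_identity(papers).most_common(2))  # [('Garfield', 3), ('Price', 1)]
```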

6.
In recent decades, the United States Patent and Trademark Office (USPTO) has been granting more and more patents with more and more references, which has led to patent citation inflation. Citation counts are a fundamental consideration in decisions about research funding, academic promotions, commercializing IP, investing in technologies, etc. With so much at stake, we must be sure we are valuing citations at their true worth. In this article, we reveal two types of patent citation inflation and analyze their causes and cumulative effects. Further, we propose some alternative indicators that more accurately reflect the true worth of a citation. A case study on the patents held by eight universities demonstrates that the relative indicators outlined in this paper are an effective way to account for citation inflation as an alternative approach to evaluating patent activity.
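One simple way to build a relative indicator of this kind is to deflate a patent's citation count by the mean count of its grant-year cohort, so that system-wide inflation cancels out. A sketch under that assumption (not necessarily the paper's exact indicators; cohort averages invented):

```python
def relative_citations(cites, grant_year, cohort_means):
    """Deflate raw citations by the mean for the patent's grant-year cohort."""
    return cites / cohort_means[grant_year]

cohort_means = {1995: 4.0, 2015: 16.0}  # hypothetical cohort averages
print(relative_citations(8, 1995, cohort_means))  # 2.0: strong for its era
print(relative_citations(8, 2015, cohort_means))  # 0.5: weak once inflation is removed
```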

7.
This paper outlines patent citation analysis methods and related indicators, and briefly introduces citation analysis tools. On this basis, and drawing on intelligence-research practice, it presents an empirical patent citation analysis of the molecular design breeding field at three levels: country, company, and technology. In the empirical analysis, the patent citation time interval is used to reveal the speed of technological innovation and absorption; combined analysis of self-citations and citations by others is used to reveal the relative competitive positions of institutions; cross-citation and co-citation indices are used to analyze the degree of technological overlap between institutions; and a patent citation tree is used to trace the development of the technology.
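Of the indicators listed, the citation time interval is the most mechanical to compute; a sketch of the median lag in years between grant and citation, with hypothetical data:

```python
def median_citation_lag(pairs):
    """pairs: (grant_year, citing_year) tuples for one patent or portfolio.
    A shorter median lag suggests faster technology absorption."""
    lags = sorted(citing - grant for grant, citing in pairs)
    mid = len(lags) // 2
    return lags[mid] if len(lags) % 2 else (lags[mid - 1] + lags[mid]) / 2

print(median_citation_lag([(2005, 2007), (2005, 2010), (2005, 2016)]))  # 5
```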

8.
In the recent debate on the use of averages of ratios (AoR) and ratios of averages (RoA) for the compilation of field-normalized citation rates, little evidence has been provided on the different results obtained by the two methods at various levels of aggregation. This paper provides such an empirical analysis at the level of individual researchers, departments, institutions and countries. Two datasets are used: 147,547 papers published between 2000 and 2008 and assigned to 14,379 Canadian university professors affiliated to 508 departments, and all papers indexed in the Web of Science for the same period (N = 8,221,926) assigned to all countries and institutions. Although there is a strong relationship between the two measures at each of these levels, a pairwise comparison of AoR and RoA shows that the differences between all the distributions are statistically significant and, thus, that the two methods are not equivalent and do not give the same results. Moreover, the difference between both measures is strongly influenced by the number of papers published as well as by their impact scores: the difference between AoR and RoA is greater for departments, institutions and countries with low RoA scores. Finally, our results show that RoA relative impact indicators do not add up to unity (as they should by definition) at the level of the reference dataset, whereas the AoR does have that property.
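The difference between the two methods is clearest in code. With observed citation counts c_i and field-expected values e_i (toy numbers below), AoR averages the per-paper ratios while RoA takes one ratio of the sums, and the two generally disagree:

```python
def aor(cites, expected):
    """Average of Ratios: mean of the per-paper ratios c_i / e_i."""
    return sum(c / e for c, e in zip(cites, expected)) / len(cites)

def roa(cites, expected):
    """Ratio of Averages: sum(c_i) / sum(e_i)."""
    return sum(cites) / sum(expected)

cites    = [10, 0, 4]       # observed citations (toy data)
expected = [5.0, 2.0, 8.0]  # field-expected citations per paper
print(aor(cites, expected))  # 0.833...
print(roa(cites, expected))  # 0.933...: the two methods differ
```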

9.
In this paper we present a first large-scale analysis of the relationship of Mendeley readership and citation counts with documents' bibliographic characteristics. A data set of 1.3 million publications from different fields published in journals covered by the Web of Science (WoS) has been analyzed. This work reveals that document types that are often excluded from citation analysis because of their low citation values, such as editorial materials, letters, news items, and meeting abstracts, are well covered and frequently saved in Mendeley, suggesting that Mendeley readership can reliably inform the analysis of these document types. Findings show that collaborative papers are frequently saved in Mendeley, similar to what is observed for citations. The relationship between readership and the length of titles and number of pages, however, is weaker than the corresponding relationship for citations. The analysis of different disciplines also points to different patterns in the relationship between several document characteristics, readership, and citation counts. Overall, the results highlight that although disciplinary differences exist, readership counts are related to bibliographic characteristics similar to those related to citation counts, reinforcing the idea that Mendeley readership and citations capture a similar concept of impact, although they cannot be considered equivalent indicators.

10.
11.
We evaluate author impact indicators and ranking algorithms on two publication databases using large test data sets of well-established researchers. The test data consists of (1) ACM fellows and (2) recipients of various lifetime achievement awards. We also evaluate different approaches to dividing the credit for papers among co-authors and analyse the impact of self-citations. Furthermore, we evaluate different graph normalisation approaches for computing PageRank on author citation graphs. We find that PageRank outperforms citation counts in identifying well-established researchers. This holds true when PageRank is computed on author citation graphs but also when PageRank is computed on paper graphs and paper scores are divided among co-authors. In general, the best results are obtained when co-authors receive an equal share of a paper's score, independent of which impact indicator is used to compute paper scores. The results also show that removing author self-citations improves the results of most ranking metrics. Lastly, we find that it is more important to personalise the PageRank algorithm appropriately on the paper level than to decide whether to include or exclude self-citations. On the author level, however, we find that author graph normalisation is more important than personalisation.
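A compact sketch of the best-performing configuration reported here: plain PageRank over a paper citation graph, with each paper's score then split equally among its co-authors. Toy data; the paper's specific normalisations and personalisations are not reproduced:

```python
def pagerank(graph, d=0.85, iters=50):
    """Plain PageRank on {paper: [papers it cites]}; dangling mass spread evenly."""
    nodes = set(graph) | {v for outs in graph.values() for v in outs}
    pr = dict.fromkeys(nodes, 1.0 / len(nodes))
    for _ in range(iters):
        new = dict.fromkeys(nodes, (1.0 - d) / len(nodes))
        for n in nodes:
            outs = graph.get(n, [])
            if outs:
                for v in outs:
                    new[v] += d * pr[n] / len(outs)
            else:  # dangling paper: distribute its mass over all nodes
                for v in nodes:
                    new[v] += d * pr[n] / len(nodes)
        pr = new
    return pr

def author_scores(paper_scores, authors_of):
    """Equal co-author credit: each author receives score / n_authors."""
    scores = {}
    for paper, s in paper_scores.items():
        for a in authors_of[paper]:
            scores[a] = scores.get(a, 0.0) + s / len(authors_of[paper])
    return scores

g = {"p1": ["p2"], "p2": ["p3"], "p3": []}  # hypothetical citation graph
print(author_scores(pagerank(g), {"p1": ["A"], "p2": ["A", "B"], "p3": ["B"]}))
```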

12.
Altmetrics from Altmetric.com are widely used by publishers and researchers to give earlier evidence of attention than citation counts. This article assesses whether Altmetric.com scores are reliable early indicators of likely future impact and whether they may also reflect non-scholarly impacts. A preliminary factor analysis suggests that the main altmetric indicator of scholarly impact is Mendeley reader counts, with weaker news, informational and social network discussion/promotion dimensions in some fields. Based on a regression analysis of Altmetric.com data from November 2015 and Scopus citation counts from October 2017 for articles in 30 narrow fields, only Mendeley reader counts are consistent predictors of future citation impact. Most other Altmetric.com scores can help predict future impact in some fields. Overall, the results confirm that early Altmetric.com scores can predict later citation counts, although less well than journal impact factors, and the optimal strategy is to consider both Altmetric.com scores and journal impact factors. Altmetric.com scores can also reflect dimensions of non-scholarly impact in some fields.

13.
The normalized citation indicator may not be sufficiently reliable when a short citation time window is used: recently published papers have had too little time to accumulate citations, so their citation counts are not reliable inputs to citation impact indicators. Normalization methods by themselves cannot solve this problem. To address it, we introduce a weighting factor into the commonly used normalized indicator, the Category Normalized Citation Impact (CNCI), at the paper level. The weighting factor, calculated as the correlation coefficient between citation counts of papers in the given short citation window and those in a fixed long citation window, reflects the degree of reliability of a paper's CNCI value. To verify the effect of the proposed weighted CNCI indicator, we compared the CNCI scores and rankings of 500 universities before and after introducing the weighting factor. The results showed that although there was a strong positive correlation before and after the introduction of the weighting factor, some universities' performance and rankings changed dramatically.
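A minimal sketch of how such a weighting might be applied, under the assumption that the factor is a Pearson correlation between the cohort's short-window and long-window citation counts; the paper's exact formula may differ:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def weighted_cnci(cnci, short_counts, long_counts):
    """Damp a paper's CNCI by its cohort's short-vs-long window correlation,
    so CNCI values built on unreliable early counts weigh less."""
    return pearson(short_counts, long_counts) * cnci

short = [1, 0, 3, 2, 5]      # hypothetical 2-year counts for a cohort
long_ = [10, 2, 25, 15, 60]  # long-window counts for the same papers
print(weighted_cnci(1.4, short, long_))
```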

14.
A recently suggested modification of the g-index is analysed that takes multiple co-authorship appropriately into account: by fractionalised counting of the papers one obtains an appropriate measure, which I call the gm-index. Two fictitious model cases and two empirical cases are analysed. The results are compared with two other recently proposed variants of the g-index. Only the gm-index shows the correct behaviour when datasets are aggregated. The interpolated and continuous versions of the g-index and its variants are also discussed. For an intuitive comparison of how the investigated variants of the h-index and the g-index are determined, the citation records are visualized.
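Under one reading of the fractionalised counting (each paper advances the effective rank by 1/number of authors, with citations counted in full), the g-index and a gm-style variant can be sketched as follows:

```python
def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    total, g = 0, 0
    for i, c in enumerate(sorted(citations, reverse=True), 1):
        total += c
        if total >= i * i:
            g = i
    return g

def gm_index(papers):
    """Fractionalised g_m (one reading): papers (citations, n_authors) are
    ranked by citations; each paper raises the effective rank r by
    1/n_authors, and g_m is the largest r with cumulative citations >= r^2."""
    r, total, gm = 0.0, 0.0, 0.0
    for cites, n_authors in sorted(papers, reverse=True):
        r += 1.0 / n_authors
        total += cites
        if total >= r * r:
            gm = r
    return gm

print(g_index([10, 8, 4]))                  # 3
print(gm_index([(10, 1), (8, 2), (4, 4)]))  # 1.75: co-authorship shrinks it
```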

15.
National culture is among the societal factors that could influence research and innovation activities. In this study, we investigated the associations of two national culture models with the citation impact of nations, measured by the proportion of papers belonging to the 10% and 1% most cited papers in the corresponding fields (PPtop10% and PPtop1%). Bivariate statistical analyses showed that of the six Hofstede national culture dimensions (HNCD), uncertainty avoidance and power distance had a statistically significant negative association, while individualism and indulgence had a statistically significant positive association, with both citation impact indicators (PPtop10% and PPtop1%). The study also revealed that of the two Inglehart-Welzel cultural values (IWCV), survival versus self-expression is statistically significantly related to both citation impact indicators. We additionally calculated multiple regression analyses controlling for possible confounding factors, including national self-citations, international co-authorships, investment in research and development, international migrant stock, number of researchers per nation, language, and productivity. The results revealed that the statistically significant associations of HNCD with the citation impact indicators disappeared, but the statistically significant relationship between survival versus self-expression values and both citation impact indicators remained stable even after controlling for the confounding variables. Thus, freedom of expression and trust in society might contribute to better scholarly communication systems, a higher level of international collaboration, and higher-quality research.

16.
In an age of intensifying scientific collaboration, the counting of papers by multiple authors has become an important methodological issue in scientometrics-based research evaluation. In particular, how counting methods influence institution-level research evaluation has not been studied in the existing literature. In this study, we selected the top 300 universities in physics in the 2011 HEEACT Ranking as our study subjects. We compared the university rankings generated by four different counting methods (whole counting, straight counting using the first author, straight counting using the corresponding author, and fractional counting) to show how paper counts, citation counts, and the resulting university ranks were affected by the choice of counting method. The counting was based on the 1988-2008 physics paper records indexed in ISI WoS. We also observed how paper and citation counts were inflated by whole counting. The results show that counting methods affected universities in the middle range more than those in the upper or lower ranges. Citation counts were also more affected than paper counts. The correlations between the rankings generated by whole counting and those generated by the other methods were low or negative in the middle ranges. Based on these findings, this study concludes that straight counting and fractional counting are better choices for paper counts and citation counts in institution-level research evaluation.
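The four counting methods compared here each reduce to a few lines. A sketch with hypothetical affiliation records, assuming one institution per author:

```python
from collections import Counter

def institution_counts(papers, method):
    """papers: dicts with 'affils' (one institution per author, in author
    order) and 'corr' (index of the corresponding author)."""
    tally = Counter()
    for p in papers:
        affils = p["affils"]
        if method == "whole":             # every distinct institution gets 1
            for inst in set(affils):
                tally[inst] += 1
        elif method == "first":           # straight counting, first author
            tally[affils[0]] += 1
        elif method == "corresponding":   # straight counting, corresponding author
            tally[affils[p["corr"]]] += 1
        elif method == "fractional":      # 1/n_authors per authorship
            for inst in affils:
                tally[inst] += 1.0 / len(affils)
    return tally

papers = [{"affils": ["MIT", "NTU", "NTU"], "corr": 1}]  # hypothetical record
for m in ("whole", "first", "corresponding", "fractional"):
    print(m, dict(institution_counts(papers, m)))  # whole counting inflates totals
```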

17.
The findings of Bornmann, Leydesdorff, and Wang (2013b) revealed that considering journal impact improves the prediction of long-term citation impact. This paper further explores the possibility of improving citation impact measurements based on a short citation window by considering journal impact and other variables, such as the number of authors, the number of cited references, and the number of pages. The dataset contains 475,391 journal papers published in 1980 and indexed in Web of Science (WoS, Thomson Reuters), together with all annual citation counts (from 1980 to 2010) for these papers. As an indicator of citation impact, we used percentiles of citations calculated using the approach of Hazen (1914). Our results show that citation impact measurement can indeed be improved: if factors generally influencing citation impact are considered in the statistical analysis, the explained variance in long-term citation impact can be greatly increased. However, this increase is only visible when using the years shortly after publication, not when using later years.
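Hazen's (1914) percentile for the paper at ascending rank i among n papers is 100(i - 0.5)/n. A sketch follows; assigning tied values their mean rank is my assumption, not something stated in the abstract:

```python
def hazen_percentiles(citations):
    """Hazen percentile 100*(i - 0.5)/n, with i the 1-based ascending rank;
    tied values share their mean rank (an assumption for illustration)."""
    n = len(citations)
    pct = {}
    for c in set(citations):
        below = sum(1 for x in citations if x < c)
        ties = citations.count(c)
        mean_rank = below + (ties + 1) / 2.0
        pct[c] = 100.0 * (mean_rank - 0.5) / n
    return [pct[c] for c in citations]

print(hazen_percentiles([0, 5, 5, 9]))  # [12.5, 50.0, 50.0, 87.5]
```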

18.
Scholarly citations, widely seen as tangible measures of the impact and significance of academic papers, guide critical decisions by research administrators and policy makers. Citation distributions form characteristic patterns that can be revealed by big-data analysis. However, citation dynamics vary significantly among subject areas, countries, and so on. The problem is how to quantify those differences and separate global from local citation characteristics. Here, we carry out an extensive analysis of the power-law relationship between the total citation count and the h-index to detect a functional dependence among its parameters for different science domains. The results demonstrate that the statistical structure of the citation indicators admits representation by a global scale and a set of local exponents. The scale parameters are evaluated for different research actors, from individual researchers to entire countries, employing subject- and affiliation-based divisions of science into domains. The results can inform research assessment and classification into subject areas; the proposed divide-and-conquer approach can be applied to hidden scales in other power-law systems.
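The power-law relationship in question has the form C = A·h^beta, with beta empirically near 2. A least-squares fit in log-log space takes only a few lines (toy data, not the paper's estimation procedure):

```python
import math

def fit_power_law(h_vals, c_vals):
    """Fit log C = log A + beta * log h by ordinary least squares."""
    xs = [math.log(h) for h in h_vals]
    ys = [math.log(c) for c in c_vals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - beta * mx), beta  # (A, beta)

h = [5, 10, 20, 40]
c = [100, 400, 1600, 6400]   # constructed as exactly 4 * h**2
print(fit_power_law(h, c))   # A ~= 4.0, beta ~= 2.0
```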

19.
Microsoft Academic is a free academic search engine and citation index that is similar to Google Scholar but can be queried automatically. Its data is potentially useful for bibliometric analysis if it is possible to search effectively for individual journal articles. This article compares different methods of finding journal articles in its index by searching for a combination of title, authors, publication year, and journal name, and uses the results for the widest published correlation analysis of Microsoft Academic citation counts for journal articles so far. Based on 126,312 articles from 323 Scopus subfields in 2012, the optimal strategy to find articles with DOIs is to search for them by title and filter out those with incorrect DOIs. This finds 90% of journal articles. For articles without DOIs, the optimal strategy is to search for them by title and then filter out matches with dissimilar metadata. This finds 89% of journal articles, with an additional 1% incorrect matches. The remaining articles seem mainly not to be indexed by Microsoft Academic, or to be indexed under a different language version of their title. Among the matches, Scopus citation counts and Microsoft Academic citation counts have an average Spearman correlation of 0.95, with the lowest for any single field being 0.63. Thus, Microsoft Academic citation counts are almost universally equivalent to Scopus citation counts for articles that are not recent, although there are national biases in the results.
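The "filter out matches with dissimilar metadata" step can be illustrated with a simple acceptance rule; the fields and thresholds below are my own illustration, not Microsoft Academic's API:

```python
from difflib import SequenceMatcher

def plausible_match(hit, record, min_title_sim=0.9):
    """Keep a search hit only when title, year and journal agree closely
    with the source record (illustrative thresholds and field names)."""
    sim = SequenceMatcher(None, hit["title"].lower(),
                          record["title"].lower()).ratio()
    return (sim >= min_title_sim
            and hit["year"] == record["year"]
            and hit["journal"].lower() == record["journal"].lower())

hit    = {"title": "Citation Analysis of Journals", "year": 2012, "journal": "Scientometrics"}
record = {"title": "Citation analysis of journals", "year": 2012, "journal": "Scientometrics"}
print(plausible_match(hit, record))  # True: case differences are tolerated
```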

20.
We evaluate article-level metrics along two dimensions. First, we analyse metrics' ranking bias in terms of fields and time. Second, we evaluate their performance based on test data consisting of (1) papers that have won high-impact awards and (2) papers that have won prizes for outstanding quality. We consider different citation impact indicators and indirect ranking algorithms in combination with various normalisation approaches (mean-based, percentile-based, co-citation-based, and post hoc rescaling). We execute all experiments on two publication databases which use different field categorisation schemes (author-chosen concept categories and categories based on papers' semantic information). In terms of bias, we find that citation counts are always less time biased but always more field biased than PageRank. Furthermore, rescaling paper scores by a constant number of similarly aged papers reduces time bias more effectively than normalising by calendar years. We also find that percentile citation scores are less field and time biased than mean-normalised citation counts. In terms of performance, we find that time-normalised metrics identify high-impact papers better shortly after their publication than their non-normalised variants. However, after 7 to 10 years, the non-normalised metrics perform better. A similar trend exists for the set of high-quality papers, where these performance cross-over points occur after 5 to 10 years. Lastly, we also find that personalising PageRank with papers' citation counts reduces time bias but increases field bias. Similarly, using papers' associated journal impact factors to personalise PageRank increases its field bias. In terms of performance, PageRank should always be personalised with papers' citation counts and time-rescaled for citation windows smaller than 7 to 10 years.
