Similar Documents
20 similar documents found.
1.
We study the correlation between citation-based and expert-based assessments of journals and series, which we collectively refer to as sources. The source normalized impact per paper (SNIP), the Scimago Journal Rank 2 (SJR2) and the raw impact per paper (RIP) indicators are used to assess sources based on their citations, while the Norwegian model is used to obtain expert-based source assessments. We first analyze – within different subject area categories and across such categories – the degree to which RIP, SNIP and SJR2 values correlate with the quality levels in the Norwegian model. We find that sources at higher quality levels on average have substantially higher RIP, SNIP, and SJR2 values. Regarding subject area categories, SNIP seems to perform substantially better than SJR2 from the field normalization point of view. We then compare the ability of RIP, SNIP and SJR2 to predict whether a source is classified at the highest quality level in the Norwegian model or not. SNIP and SJR2 turn out to give more accurate predictions than RIP, which provides evidence that normalizing for differences in citation practices between scientific fields indeed improves the accuracy of citation indicators.

2.
SNIP, a new citation-based indicator, is designed to evaluate the impact of journals across different subject fields. This paper theoretically compares SNIP with the impact factor (IF), the h-index, and SJR, analyzing the principles behind each indicator and the relationships among them, their respective strengths and weaknesses, and the differences in how they are applied. The results show that, in theory, SNIP is related to the other three indicators, offers certain advantages over them, and can be used in journal evaluation practice.

3.
The journal impact factor is not comparable among fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential in the journal topic, and we employ this citation potential in the normalization of the journal impact factor to make it comparable between scientific fields. An empirical application comparing some impact indicators with our topic normalized impact factor in a set of 224 journals from four different fields shows that our normalization, using the citation potential in the journal topic, reduces the between-group variance relative to the within-group variance to a greater degree than the other indicators analyzed. The effect of journal self-citations on the normalization process is also studied.
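A minimal Python sketch of this kind of normalization (the function name and the use of a plain mean are illustrative assumptions, not the authors' exact formulas): the journal impact factor is divided by the citation potential of its topic, measured as the mean impact factor of the citing journals.

    # Hedged sketch: topic-normalized impact factor.
    # Assumption: citation potential = mean impact factor of citing journals.
    def topic_normalized_if(journal_if, citing_journal_ifs):
        citation_potential = sum(citing_journal_ifs) / len(citing_journal_ifs)
        return journal_if / citation_potential

    # A journal with IF 2.0 cited mainly by high-IF journals (a topic
    # with dense citation traffic) is deflated accordingly.
    print(topic_normalized_if(2.0, [4.0, 5.0, 3.0]))  # 2.0 / 4.0 = 0.5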

4.
To address the Z-index's inability to evaluate journals across disciplines, this paper improves the Z-index by introducing the category normalized citation impact (CNCI) to correct for citation differences between disciplines, and proposes the resulting ZCNCI index. The effectiveness of the ZCNCI index for cross-disciplinary journal evaluation is verified by analyzing its correlations with, and differences from, the Z-index, the P-index, SNIP, the normalized Eigenfactor, and the impact factor percentile. The results show that the ZCNCI index retains the Z-index's advantage of jointly reflecting a journal's publication output, quality, and citation distribution; it correlates strongly with the Z-index, the P-index, and the normalized Eigenfactor, overcomes the evaluative shortcomings of SNIP and the impact factor percentile, and performs well overall in cross-disciplinary journal evaluation. The ZCNCI index is therefore effective for, and can be applied to, cross-disciplinary journal evaluation.

5.
This paper investigates the citation impact of three large geographical areas – the U.S., the European Union (EU), and the rest of the world (RW) – at different aggregation levels. The difficulty is that 42% of the 3.6 million articles in our Thomson Scientific dataset are assigned to several sub-fields among a set of 219 Web of Science categories. We follow a multiplicative approach in which every article is wholly counted as many times as it appears at each aggregation level. We compute the crown indicator and the Mean Normalized Citation Score (MNCS) using for the first time sub-field normalization procedures for the multiplicative case. We also compute a third indicator that does not correct for differences in citation practices across sub-fields. It is found that: (1) No geographical area is systematically favored (or penalized) by any of the two normalized indicators. (2) According to the MNCS, only in six out of 80 disciplines – but in none of 20 fields – is the EU ahead of the U.S. In contrast, the normalized U.S./EU gap is greater than 20% in 44 disciplines, 13 fields, and for all sciences as a whole. The dominance of the EU over the RW is even greater. (3) The U.S. appears to devote relatively more – and the RW less – publication effort to sub-fields with a high mean citation rate, which explains why the U.S./EU and EU/RW gaps for all sciences as a whole increase by 4.5 and 5.6 percentage points in the un-normalized case. The results with a fractional approach are very similar indeed.
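To make the multiplicative counting concrete, here is a hedged Python sketch (the data layout and names are assumptions): each article enters the MNCS average once per sub-field it is assigned to, with its citations divided by that sub-field's expected citation rate.

    # Hedged sketch of the MNCS under multiplicative counting.
    def mncs_multiplicative(papers, expected):
        # papers: list of (citations, [subfield, ...]) tuples;
        # expected: dict mapping subfield -> mean citations in that subfield.
        ratios = [cites / expected[sf] for cites, subfields in papers for sf in subfields]
        return sum(ratios) / len(ratios)

    papers = [(10, ["A"]), (4, ["A", "B"])]       # the second paper is counted twice
    expected = {"A": 5.0, "B": 2.0}
    print(mncs_multiplicative(papers, expected))  # (2.0 + 0.8 + 2.0) / 3 = 1.6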

6.
We address the question of how citation-based bibliometric indicators can best be normalized to ensure fair comparisons between publications from different scientific fields and different years. In a systematic large-scale empirical analysis, we compare a traditional normalization approach based on a field classification system with three source normalization approaches. We pay special attention to the selection of the publications included in the analysis. Publications in national scientific journals, popular scientific magazines, and trade magazines are not included. Unlike earlier studies, we use algorithmically constructed classification systems to evaluate the different normalization approaches. Our analysis shows that a source normalization approach based on the recently introduced idea of fractional citation counting does not perform well. Two other source normalization approaches generally outperform the classification-system-based normalization approach that we study. Our analysis therefore offers considerable support for the use of source-normalized bibliometric indicators.
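The fractional citation counting idea evaluated above can be illustrated with a small hypothetical sketch (not the paper's implementation): each citation is weighted by the reciprocal of the citing publication's reference list length, so citations from reference-dense fields count for less.

    # Hedged sketch: source normalization via fractional citation counting.
    def fractional_citation_count(citing_ref_list_lengths):
        # Each citing paper contributes 1/r, where r is the length
        # of its reference list.
        return sum(1.0 / r for r in citing_ref_list_lengths)

    # Three citations, from papers with 10, 20 and 50 references:
    print(fractional_citation_count([10, 20, 50]))  # 0.1 + 0.05 + 0.02 = 0.17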

7.
This paper explores a new indicator of journal citation impact, denoted as source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the speed at which citation impact matures, and the extent to which the database used for the assessment covers the field's literature. It further develops Eugene Garfield's notions of a field's ‘citation potential’, defined as the average length of reference lists in a field and determining the probability of being cited, and of the need in fair performance assessments to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper to the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in mathematics, engineering and social sciences tend to have lower values than titles in life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher potentials than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus.
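In skeletal form (an illustrative sketch; the real SNIP also normalizes the citation potential relative to the database median, which is omitted here), SNIP divides the journal's raw impact per paper by the citation potential of its citing-side subject field:

    # Hedged sketch of SNIP's core ratio.
    def snip(citations_to_journal, papers_in_journal, citing_ref_list_lengths):
        rip = citations_to_journal / papers_in_journal  # raw impact per paper
        # Citation potential: mean reference-list length of citing papers.
        potential = sum(citing_ref_list_lengths) / len(citing_ref_list_lengths)
        return rip / potential

    # A journal in a long-reference-list field has its raw impact deflated:
    print(snip(300, 100, [40, 50, 60]))  # 3.0 / 50 = 0.06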

8.
The use of citation indexes, such as the impact factor of the Journal Citation Reports, the Scopus SJR (SCImago Journal Rank) and the SNIP (Source Normalized Impact per Paper) indicators, as well as the impact factor of the Russian Scientific Citation Index, is investigated in order to qualitatively assess the content of scientific information resources that are available at the Central Science Library of the National Academy of Sciences of Belarus.

9.
In this paper we deal with the problem of aggregating numeric sequences of arbitrary length that represent, e.g., citation records of scientists. Impact functions are the aggregation operators that express as a single number not only the quality of individual publications, but also their author's productivity. We examine some fundamental properties of these aggregation tools. It turns out that each impact function which always gives indisputable valuations must necessarily be trivial. Moreover, it is shown that for any set of citation records in which none is dominated by another, we may construct an impact function that gives any a priori-established authors’ ordering. Theoretically then, there is considerable room for manipulation in the hands of decision makers. We also discuss the differences between the impact function-based and the multicriteria decision making-based approach to scientific quality management, and study how the introduction of new properties of impact functions affects the assessment process. We argue that simple mathematical tools like the h- or g-index (as well as other bibliometric impact indices) may not necessarily be a good choice when it comes to assessing scientific achievements.
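The h-index mentioned above is one such impact function, mapping a citation sequence of arbitrary length to a single number; a minimal Python sketch:

    # h-index: the largest h such that h papers have at least h citations each.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        return max((i + 1 for i, c in enumerate(ranked) if c >= i + 1), default=0)

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations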

10.
For comparisons of citation impacts across fields and over time, bibliometricians normalize the observed citation counts with reference to an expected citation value. Percentile-based approaches have been proposed as a non-parametric alternative to parametric central-tendency statistics. Percentiles are based on an ordered set of citation counts in a reference set, whereby the fraction of papers at or below the citation counts of a focal paper is used as an indicator of its relative citation impact in the set. In this study, we pursue two related objectives: (1) although different percentile-based approaches have been developed, no approach has hitherto been available that satisfies a number of criteria, such as scaling the percentile ranks from zero (all other papers perform better) to 100 (all other papers perform worse) and handling tied citation ranks unambiguously. We introduce a new citation-rank approach with these properties, namely P100. (2) We compare the reliability of P100 empirically with that of other percentile-based approaches, such as those developed by the SCImago group, the Centre for Science and Technology Studies (CWTS), and Thomson Reuters (InCites), using all papers published in 1980 in Thomson Reuters Web of Science (WoS). How accurately can the different approaches predict the long-term citation impact in 2010 (in year 31) using citation impact measured in previous time windows (years 1–30)? The comparison shows that the method used by InCites overestimates citation impact (because it uses the highest percentile rank when papers are assigned to more than a single subject category), whereas the SCImago indicator shows higher power in predicting long-term citation impact on the basis of citation rates in early years. Since the results show a disadvantage in this predictive ability for P100 against the other approaches, there is still room for further improvement.
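A hedged sketch of the underlying percentile logic (illustrative only; P100's actual treatment of ties and of the focal paper itself is more involved than this simple variant): the rank of a focal paper is the fraction of papers in the reference set at or below its citation count, scaled to the 0-100 range.

    # Hedged sketch of a percentile-based citation rank.
    def percentile_rank(focal_citations, reference_set):
        at_or_below = sum(1 for c in reference_set if c <= focal_citations)
        return 100.0 * at_or_below / len(reference_set)

    # 5 of the 8 papers in the reference set have at most 7 citations:
    print(percentile_rank(7, [0, 1, 2, 5, 7, 9, 12, 20]))  # 62.5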

11.
The purpose of the Kazakh publication citation index, developed in Kazakhstan since 2005, is to carry out scientometric analysis of scientific publications and determine their citation rates. At present, the bibliographic database (BDB) on citation includes information on the publication activity and citation indices of approximately 30,000 Kazakh scientists and specialists, who have had over 18,000 scientific papers published in more than 500 domestic and foreign journals. The total number of references to papers by Kazakh scientists exceeds 28,000. The Kazakh analogue of the science citation index is an efficient tool for analytical work with the BDB of scientific publications, making it possible to compute publication-activity and citation parameters that are used to gauge the value of, and demand for, the results of scientific work in various fields of domestic science.

12.
Altmetrics from Altmetric.com are widely used by publishers and researchers to give earlier evidence of attention than citation counts. This article assesses whether Altmetric.com scores are reliable early indicators of likely future impact and whether they may also reflect non-scholarly impacts. A preliminary factor analysis suggests that the main altmetric indicator of scholarly impact is Mendeley reader counts, with weaker news, informational and social network discussion/promotion dimensions in some fields. Based on a regression analysis of Altmetric.com data from November 2015 and Scopus citation counts from October 2017 for articles in 30 narrow fields, only Mendeley reader counts are consistent predictors of future citation impact. Most other Altmetric.com scores can help predict future impact in some fields. Overall, the results confirm that early Altmetric.com scores can predict later citation counts, although less well than journal impact factors, and the optimal strategy is to consider both Altmetric.com scores and journal impact factors. Altmetric.com scores can also reflect dimensions of non-scholarly impact in some fields.

13.
Research on the correlation between citations and downloads of scientific literature for academic impact evaluation
Because they assess the academic impact of the same entity, the emerging download-count indicator and the currently dominant citation-count indicator should be intrinsically consistent. They nevertheless differ, owing to differences in users' levels of awareness and patterns of behavior, to the different decision processes behind citing versus downloading, and to the different roles the two play in literature use. Based on the CNKI journal database, this study examines the distributions of the two indicators at the paper, journal, and institution levels, analyzes their correlation and the behavioral characteristics of citing and downloading, and compares the features exhibited by download counts with those of citation counts, in order to assess the soundness and usability of download counts as an evaluation indicator.

14.
Bibliometricians have long relied on citation counts to measure the impact of publications on the advancement of science. However, since the earliest days of the field, some scholars have questioned whether all citations should be worth the same, and have gone on to weight them by a variety of factors. However sophisticated the operationalization of the measures, the methodologies used in weighting citations still present limits in their underlying assumptions. This work takes an alternative approach to resolving the underlying problem: the proposal is to value citations by the impact of the citing articles, regardless of the length of their reference lists. As well as conceptualizing a new indicator of impact, the work illustrates its application to the 2004–2012 Italian scientific production indexed in the WoS. The proposed impact indicator is highly correlated with the traditional citation count; however, the shifts observed between the two measures are frequent and the number of outliers is not negligible. Moreover, the new indicator shows greater “sensitivity” when used to identify highly cited papers.
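One hypothetical reading of this proposal, as a Python sketch (the authors' exact operationalization of the "impact of the citing articles" may differ): instead of counting each citation as 1, each citation contributes the citation count of the citing article, with no division by its reference list length.

    # Hedged sketch: citations valued by the impact of the citing articles.
    def citing_impact_weighted(citing_article_citations):
        return sum(citing_article_citations)

    # Three citations, from articles themselves cited 0, 2 and 30 times;
    # the plain citation count would be 3, the weighted value is 32.
    print(citing_impact_weighted([0, 2, 30]))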

15.
The normalized citation indicator may not be sufficiently reliable when a short citation time window is used, because the citation counts of recently published papers are not as reliable as those of papers published many years ago: within a limited time period, recent publications usually have insufficient time to accumulate citations. Normalization methods themselves cannot solve this problem. We therefore introduce a weighting factor into the commonly used normalization indicator Category Normalized Citation Impact (CNCI) at the paper level. The weighting factor, calculated as the correlation coefficient between the citation counts of papers in the given short citation window and those in a fixed long citation window, reflects the degree of reliability of a paper's CNCI value. To verify the effect of the proposed weighted CNCI indicator, we compared the CNCI scores and CNCI rankings of 500 universities before and after introducing the weighting factor. The results showed that although there was a strong positive correlation before and after the introduction of the weighting factor, some universities’ performance and rankings changed dramatically.
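In outline, a hedged sketch under stated assumptions (the paper derives the weight from a reference set of earlier papers per citation window; here it is computed directly from two historical citation vectors):

    # Hedged sketch of the weighted CNCI.
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx ** 0.5 * vy ** 0.5)

    def weighted_cnci(cnci, short_window_counts, long_window_counts):
        # Weight = correlation between short- and long-window citation
        # counts over a historical reference set of papers.
        return pearson(short_window_counts, long_window_counts) * cnci

    short = [1, 3, 0, 5, 2]   # citations after 2 years (historical papers)
    long_ = [4, 9, 1, 14, 7]  # citations after 10 years (same papers)
    print(weighted_cnci(1.8, short, long_))  # ~1.80: windows correlate strongly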

16.
Journal metrics are employed for the assessment of scientific scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no single fixed impact maturity time valid for all fields: in some, two years gives a good performance, whereas in others three or more years are necessary. There is therefore a problem when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the immediately preceding 2-year window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance relative to the within-group variance in a random sample of about six hundred journals from eight different fields.
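A minimal sketch of the rolling-window idea (names and bookkeeping are illustrative assumptions): among the 2-year publication windows lying within the last five years, take the one that maximizes the impact factor.

    # Hedged sketch of a 2-year maximum journal impact factor (2M-JIF).
    def two_year_max_jif(cites_to_year, items_in_year, census_year, max_age=5):
        # cites_to_year[y]: citations received in the census year by items
        # published in year y; items_in_year[y]: citable items published in y.
        best = 0.0
        for y in range(census_year - max_age + 1, census_year):
            cites = cites_to_year.get(y, 0) + cites_to_year.get(y - 1, 0)
            items = items_in_year.get(y, 0) + items_in_year.get(y - 1, 0)
            if items:
                best = max(best, cites / items)
        return best

    cites = {2019: 30, 2020: 80, 2021: 120, 2022: 90}
    items = {2019: 50, 2020: 50, 2021: 60, 2022: 60}
    # The standard 2-JIF window (2021-2022) gives 1.75; the maximum
    # rolling window (2020-2021) gives about 1.82.
    print(two_year_max_jif(cites, items, census_year=2023))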

17.
The standard impact factor allows one to compare scientific journals only within particular scientific subjects. To overcome this limitation, another citation indicator, the thematically weighted impact factor (TWIF), is proposed. This indicator allows one to compare journals across subjects and takes into account the fact that a journal may belong to several subjects. Calculating the indicator requires information on the journal's thematic headings and the value of its standard impact factor. The TWIF, calculated from the citation data of the Journal Citation Reports, is investigated in this article.

18.
A new size-independent indicator of scientific journal prestige, the SJR2 indicator, is proposed. This indicator takes into account not only the prestige of the citing journal but also its closeness to the cited journal, using the cosine of the angle between the vectors of the two journals’ cocitation profiles. To eliminate the size effect, the accumulated prestige is divided by the fraction of the journal's citable documents, thus eliminating the decreasing tendency of this type of indicator and giving meaning to the scores. Its method of computation is described, and the results of its implementation on the Scopus 2008 dataset are compared with those of an ad hoc Journal Impact Factor, JIF(3y), and SNIP, both overall and within specific scientific areas. The distributions of all three indicators, SJR2, SNIP, and the JIF, were found to fit a logarithmic law well. Although the three metrics were strongly correlated, there were major changes in rank. In addition, the SJR2 was more evenly distributed across Subject Areas than the JIF, almost as evenly as the SNIP, and more evenly than both at the lower level of Specific Subject Areas. The incorporation of the cosine increased the values of the prestige flows between thematically close journals.
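The closeness ingredient can be sketched as the cosine similarity between two journals' cocitation profile vectors (a hypothetical illustration of that one term; the full iterative SJR2 prestige computation over the citation network is not reproduced here):

    # Hedged sketch: cosine between cocitation profile vectors.
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = lambda w: sum(x * x for x in w) ** 0.5
        return dot / (norm(u) * norm(v))

    # Profiles over four journals; thematically close journals have
    # similar profiles, so more prestige flows between them.
    print(cosine([3, 0, 5, 1], [2, 1, 4, 0]))  # ~0.96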

19.
Main path analysis is a popular method for extracting the backbone of scientific evolution from a (paper) citation network. The first and core step of main path analysis, called search path counting, weights citation arcs by the number of scientific influence paths from old to new papers. Search path counting shows high potential for scientific impact evaluation because it closely matches the meaning of a scientific impact indicator, i.e., how many papers are influenced, and to what extent. In addition, the algorithmic idea of search path counting resembles many known indirect citation impact indicators. Inspired by these observations, this paper presents the FSPC (Forward Search Path Count) framework as an alternative scientific impact indicator based on indirect citations. Two critical assumptions are made to ensure the effectiveness of FSPC. First, knowledge decay is introduced to down-weight longer scientific influence paths. Second, path capping is introduced to mimic human literature search and citing behavior. Through experiments on two well-studied datasets, evaluated against two carefully constructed gold-standard sets of papers, we demonstrate that FSPC achieves surprisingly good performance not only in recognizing high-impact papers but also in identifying undercited papers.
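A hedged sketch of the forward search path idea (the decay form, cap semantics, and parameter values are assumptions based on the description above): count forward citation paths from a paper, discounting longer paths geometrically and capping the path length, as in human literature search.

    # Hedged sketch of a Forward Search Path Count (FSPC).
    def fspc(cited_by, paper, decay=0.5, cap=3):
        # cited_by: dict mapping a paper to the papers that cite it.
        # A citation step at depth d contributes decay**d; paths longer
        # than `cap` steps are ignored (path capping).
        def walk(node, depth):
            if depth == cap:
                return 0.0
            return sum(decay ** depth + walk(citer, depth + 1)
                       for citer in cited_by.get(node, []))
        return walk(paper, 0)

    # a is cited by b and c; b is in turn cited by d.
    graph = {"a": ["b", "c"], "b": ["d"]}
    print(fspc(graph, "a"))  # paths a->b, a->c (weight 1 each), a->b->d (0.5): 2.5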

20.
Articles are cited for different purposes and differentiating between reasons when counting citations may therefore give finer-grained citation count information. Although identifying and aggregating the individual reasons for each citation may be impractical, recording the number of citations that originate from different article sections might illuminate the general reasons behind a citation count (e.g., 110 citations = 10 Introduction citations + 100 Methods citations). To help investigate whether this could be a practical and universal solution, this article compares 19 million citations with DOIs from six different standard sections in 799,055 PubMed Central open access articles across 21 out of 22 fields. There are apparently non-systematic differences between fields in the most citing sections and the extent to which citations from one section overlap with citations from another, with some degree of overlap in most cases. Thus, at a science-wide level, section headings are partly unreliable indicators of citation context, even if they are more standard within individual fields. They may still be used within fields to help identify individual highly cited articles that have had one type of impact, especially methodological (Methods) or context setting (Introduction), but expert judgement is needed to validate the results.
