Similar Articles
20 similar articles found (search time: 625 ms)
1.
In this paper, we develop a novel methodology within the IDCP measuring framework for comparing normalization procedures based on different classification systems of articles into scientific disciplines. Firstly, we discuss the properties of two rankings, based on a graphical and a numerical approach, for the comparison of any pair of normalization procedures using a single classification system for evaluation purposes. Secondly, when the normalization procedures are based on two different classification systems, we introduce two new rankings following the graphical and the numerical approaches. Each ranking is based on a double test that assesses the two normalization procedures in terms of the two classification systems on which they depend. Thirdly, we also compare the two normalization procedures using a third, independent classification system for evaluation purposes. In the empirical part of the paper we use: (i) a classification system consisting of 219 sub-fields identified with the Web of Science subject categories; an aggregate classification system consisting of 19 broad fields; and a systematic and a random assignment of articles to sub-fields with the aim of maximizing or minimizing differences across sub-fields; (ii) four normalization procedures that use the field or sub-field mean citations of the above four classification systems as normalization factors; and (iii) a large dataset, indexed by Thomson Reuters, in which 4.4 million articles published in 1998–2003 with a five-year citation window are assigned to sub-fields using a fractional approach. The substantive results concerning the comparison of the four normalization procedures indicate that the methodology can be useful in practice.
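The normalization idea in (ii) above, using field or sub-field mean citations as normalization factors, can be sketched as follows (the function names and toy data are illustrative, not from the paper's dataset):

```python
# Sketch of sub-field mean-citation normalization: each paper's citation
# count is divided by the mean citation count of its sub-field.

def mean_citations(papers):
    """Mean citations per sub-field, given (subfield, citations) pairs."""
    totals, counts = {}, {}
    for field, cites in papers:
        totals[field] = totals.get(field, 0) + cites
        counts[field] = counts.get(field, 0) + 1
    return {f: totals[f] / counts[f] for f in totals}

def normalize(papers):
    """Divide each paper's citations by its sub-field mean."""
    means = mean_citations(papers)
    return [cites / means[field] for field, cites in papers]

papers = [("bio", 10), ("bio", 30), ("math", 2), ("math", 6)]
print(normalize(papers))  # [0.5, 1.5, 0.5, 1.5]
```

After normalization, the two papers that sit at the same relative position within their sub-fields receive the same score, which is what makes cross-field comparison possible.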

2.
The journal impact factor is not comparable across fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential in the journal's topic, and we employ this citation potential in the normalization of the journal impact factor to make it comparable between scientific fields. An empirical application comparing several impact indicators with our topic-normalized impact factor in a set of 224 journals from four different fields shows that our normalization, using the citation potential in the journal's topic, reduces the between-group variance relative to the within-group variance by a higher proportion than the other indicators analyzed. The effect of journal self-citations on the normalization process is also studied.
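The core computation described here can be sketched as follows; the function name and the sample values are assumptions for illustration, not the paper's data:

```python
# Topic-normalized impact factor: divide a journal's JIF by the
# aggregate impact factor of the journals citing it, used here as a
# proxy for the citation potential of the journal's topic.

def topic_normalized_if(jif, citing_journal_ifs):
    """JIF divided by the mean impact factor of the citing journals."""
    potential = sum(citing_journal_ifs) / len(citing_journal_ifs)
    return jif / potential

# A low-JIF journal cited from a low-citation-potential topic can match
# a high-JIF journal cited from a high-citation-potential topic:
print(topic_normalized_if(1.0, [0.8, 1.0, 1.2]))  # ≈ 1.0
print(topic_normalized_if(4.0, [3.0, 4.0, 5.0]))  # ≈ 1.0
```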

3.
This paper investigates the citation impact of three large geographical areas – the U.S., the European Union (EU), and the rest of the world (RW) – at different aggregation levels. The difficulty is that 42% of the 3.6 million articles in our Thomson Scientific dataset are assigned to several sub-fields among a set of 219 Web of Science categories. We follow a multiplicative approach in which every article is wholly counted as many times as it appears at each aggregation level. We compute the crown indicator and the Mean Normalized Citation Score (MNCS), using sub-field normalization procedures for the multiplicative case for the first time. We also compute a third indicator that does not correct for differences in citation practices across sub-fields. It is found that: (1) No geographical area is systematically favored (or penalized) by either of the two normalized indicators. (2) According to the MNCS, only in six out of 80 disciplines – but in none of 20 fields – is the EU ahead of the U.S. In contrast, the normalized U.S./EU gap is greater than 20% in 44 disciplines, 13 fields, and for all sciences as a whole. The dominance of the EU over the RW is even greater. (3) The U.S. appears to devote relatively more – and the RW less – publication effort to sub-fields with a high mean citation rate, which explains why the U.S./EU and EU/RW gaps for all sciences as a whole increase by 4.5 and 5.6 percentage points in the un-normalized case. The results with a fractional approach are very similar.
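The two normalized indicators compared here follow standard definitions: the crown indicator is a ratio of totals, while the MNCS is a mean of per-article ratios. A minimal sketch with hypothetical numbers:

```python
# Crown indicator vs. MNCS on the same data: the crown indicator divides
# total citations by total expected citations; the MNCS normalizes each
# article first and then averages.

def crown_indicator(cites, expected):
    """Total citations over total expected (sub-field mean) citations."""
    return sum(cites) / sum(expected)

def mncs(cites, expected):
    """Mean of per-article normalized citation scores."""
    return sum(c / e for c, e in zip(cites, expected)) / len(cites)

cites = [4, 1]
expected = [2.0, 1.0]   # sub-field mean citation rates
print(crown_indicator(cites, expected))  # 5/3 ≈ 1.667
print(mncs(cites, expected))             # (2 + 1)/2 = 1.5
```

The two indicators diverge exactly when highly cited articles sit in sub-fields with unusual expected rates, which is why the choice between them matters at aggregate levels.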

4.
Journal metrics are employed for the assessment of scholarly scientific journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no optimal fixed impact maturity time valid for all fields: in some of them two years performs well, whereas in others three or more years are necessary. Therefore, a problem arises when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the fixed 2-year time window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance relative to the within-group variance in a random sample of about six hundred journals from eight different fields.
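A minimal sketch of the rolling-window idea behind the 2M-JIF (the helper names and the data are illustrative assumptions):

```python
# 2M-JIF sketch: instead of fixing the 2-year target window at article
# ages 1-2, take the maximum over rolling 2-year windows, so slowly
# maturing fields are measured at their peak.

def jif_2y(cites_by_age, items_by_age, start):
    """2-year impact factor with the target window at ages
    start..start+1 (start=1 gives the standard 2-JIF)."""
    c = cites_by_age[start] + cites_by_age[start + 1]
    n = items_by_age[start] + items_by_age[start + 1]
    return c / n

def jif_2m(cites_by_age, items_by_age, max_age=5):
    """Maximum of the rolling 2-year impact factors."""
    return max(jif_2y(cites_by_age, items_by_age, s)
               for s in range(1, max_age))

# Index = age of the cited articles in years (index 0 unused).
cites = [0, 10, 20, 40, 30, 10]
items = [0, 10, 10, 10, 10, 10]
print(jif_2y(cites, items, 1))  # (10+20)/20 = 1.5 (standard 2-JIF)
print(jif_2m(cites, items))     # (40+30)/20 = 3.5, window at ages 3-4
```

For a journal whose impact peaks at ages 3-4, the standard 2-JIF understates impact while the maximum rolling window captures it.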

5.
We address the question of how citation-based bibliometric indicators can best be normalized to ensure fair comparisons between publications from different scientific fields and different years. In a systematic large-scale empirical analysis, we compare a traditional normalization approach based on a field classification system with three source normalization approaches. We pay special attention to the selection of the publications included in the analysis: publications in national scientific journals, popular scientific magazines, and trade magazines are excluded. Unlike earlier studies, we use algorithmically constructed classification systems to evaluate the different normalization approaches. Our analysis shows that a source normalization approach based on the recently introduced idea of fractional citation counting does not perform well. Two other source normalization approaches generally outperform the classification-system-based normalization approach that we study. Our analysis therefore offers considerable support for the use of source-normalized bibliometric indicators.
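Fractional citation counting, the source normalization approach found to perform poorly here, weights each citation by the reciprocal of the citing paper's reference-list length; a minimal sketch with illustrative numbers:

```python
# Fractional citation counting: a citation from a paper with r
# references counts as 1/r, so citations from reference-dense fields
# carry less weight than citations from reference-sparse fields.

def fractional_citations(citing_reference_counts):
    """citing_reference_counts: number of references in each paper
    citing the target publication. Returns the fractional score."""
    return sum(1 / r for r in citing_reference_counts)

# Three citing papers with 10, 20 and 50 references:
print(round(fractional_citations([10, 20, 50]), 2))  # 0.17
```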

6.
7.
We report characteristics of in-text citations in over five million full-text articles from two large databases – the PubMed Central Open Access subset and Elsevier journals – as functions of time, textual progression, and scientific field. The purpose of this study is to understand the characteristics of in-text citations in a detailed way prior to pursuing other studies focused on answering more substantive research questions. As such, we have analyzed in-text citations in several ways and report many findings here. Perhaps most significantly, we find that there are large field-level differences that are reflected in position within the text, citation interval (or reference age), and citation counts of references. In general, the fields of Biomedical and Health Sciences, Life and Earth Sciences, and Physical Sciences and Engineering have similar reference distributions, although they vary in their specifics. The two remaining fields, Mathematics and Computer Science and Social Science and Humanities, have reference distributions that differ from those of the other three fields and from each other. We also show that in all fields the numbers of sentences, references, and in-text mentions per article have increased over time, and that there are field-level and temporal differences in the numbers of in-text mentions per reference. A final finding is that references mentioned only once tend to be much more highly cited than those mentioned multiple times.

8.
The non-citation rate refers to the proportion of papers that attract no citations over a period of time following their publication. After reviewing the related literature in the Web of Science, Google Scholar, and Scopus databases, we find that the current literature on citation distributions focuses mainly on the distribution of the percentages and citations of papers receiving at least one citation, while there are fewer studies on the time-dependent patterns of the percentage of never-cited papers, on what distribution model can fit these patterns, and on the factors influencing the non-citation rate. Here, we perform an empirical pilot analysis of the time-dependent distribution of the percentages of never-cited papers in a series of different, consecutive citation time windows following publication in six sample journals, and study the influence of paper length on the chance of a paper getting cited. From this analysis, we draw the following general conclusions: (1) a three-parameter negative exponential model fits the time-dependent distribution curve of the percentages of never-cited papers well; (2) in the initial citation time window, the percentage of never-cited papers in each journal is very high; however, as the citation time window widens, the percentage of never-cited papers drops rapidly at first and then more slowly, and the total decline for most journals is very large; (3) for wider citation time windows, the percentage of never-cited papers for each journal approaches a stable value, after which these stable percentages change very little, unless a large number of “Sleeping Beauty”-type papers appear; (4) the length of a paper has a great influence on whether it will be cited or not.
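Conclusion (1) can be sketched with a three-parameter negative exponential of the assumed form p(t) = c + a·exp(−b·t), where c is the stable percentage approached in wide windows; the functional form and the parameter values below are our illustrative assumption, not the paper's fitted results:

```python
import math

# Three-parameter negative exponential for the percentage of
# never-cited papers after a citation window of t years:
#   p(t) = c + a * exp(-b * t)
# a + c is the initial percentage, b the decay rate, c the asymptote.

def never_cited_pct(t, a, b, c):
    return c + a * math.exp(-b * t)

# Illustrative parameters: 90% uncited at t=0, decaying toward 10%.
for t in [0, 1, 3, 10]:
    print(t, round(never_cited_pct(t, a=80.0, b=0.6, c=10.0), 1))
```

The rapid-then-slow decline in conclusion (2) and the stable value in conclusion (3) both fall out of this form: the exponential term dominates early and vanishes late.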

9.
The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of university rankings have been proposed to quantify it, yet the debate on what rankings are exactly measuring is enduring. To address the issue, we have measured a quantitative and reliable proxy of the academic reputation of a given institution and compared our findings with well-established impact indicators and academic rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and apply the PageRank algorithm to the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so that the PageRank algorithm is expected to yield a rank which reflects the reputation of an academic institution in a specific field. Given the volume of the data analysed, our findings are statistically sound and less prone to bias than, for instance, the ad hoc surveys often employed by ranking bodies to attain similar outcomes. The approach proposed in our paper may contribute to enhancing ranking methodologies by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.
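A minimal power-iteration PageRank over a toy inter-university citation network (the graph and damping value are illustrative; the paper applies the algorithm to five real citation networks):

```python
# Power-iteration PageRank: rank flows along citation edges
# (citing -> cited), so institutions cited by reputable institutions
# accumulate reputation.

def pagerank(graph, damping=0.85, iters=100):
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, targets in graph.items():
            if targets:
                share = damping * rank[v] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: spread its rank evenly
                for t in nodes:
                    new[t] += damping * rank[v] / n
        rank = new
    return rank

# Universities A, B, C; edges point from citing to cited institution.
graph = {"A": ["B"], "B": ["C"], "C": ["B"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # B: cited by both A and C
```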

10.
An Annotation Framework for Citation Content Oriented to Citation Relations
Citation content analysis can help reveal the deeper semantic meaning of citation relations between documents. This paper reviews existing citation content annotation schemes and identifies three main dimensions for constructing a citation classification scheme: citation function, citation importance, and sentiment polarity. With the goal of supporting the analysis of citation relations between documents, we design an annotation framework for citation content that comprises a citation classification scheme revealing the abstract nature of citation relations, a cited-object annotation scheme describing the specific content of the cited document, and a citation-attribute annotation scheme recording the objective features of citations. An annotation experiment demonstrates the usability of the framework. 1 figure. 6 tables. 56 references.

11.
Studying the relationship between the numbers of citations and downloads of scientific publications is beneficial for understanding the mechanisms behind citation patterns and for research evaluation. However, few studies have considered directionality between downloads and citations or adopted a case-by-case time lag length between the download and citation time series of each individual publication. In this paper, we introduce the Granger-causal inference strategy to study the directionality between downloads and citations, setting the length of the time lag between the time series for each case. Examining publications in The Lancet, we find that publications exhibit various directionality patterns, but highly cited publications are more likely to show Granger causality. We introduce the Granger-causal inference method to information science step by step, in four stages: conducting stationarity tests, determining the time lag between time series, performing cointegration tests, and implementing Granger-causality inference. We hope that this method can be applied by future information scientists in their own research contexts.
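The fourth stage, Granger-causality inference, reduces at its core to asking whether lagged downloads improve a regression of citations on their own past. A pure-Python one-lag sketch with illustrative data (the paper's full pipeline additionally runs the stationarity, lag-selection, and cointegration stages, typically with a statistics library):

```python
# One-lag Granger-style comparison: does adding x_{t-1} (downloads) to
# a regression of y_t (citations) on y_{t-1} shrink the residual sum
# of squares?

def rss_restricted(y):
    """RSS of y_t ~ a + b*y_{t-1}: citations explained by their own past."""
    xs, ys = y[:-1], y[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (v - my) for x, v in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return sum((v - a - b * x) ** 2 for x, v in zip(xs, ys))

def rss_unrestricted(y, x):
    """RSS of y_t ~ a + b*y_{t-1} + c*x_{t-1}, via centred normal equations."""
    y1, x1, yt = y[:-1], x[:-1], y[1:]
    n = len(yt)
    u = [v - sum(y1) / n for v in y1]   # centred y_{t-1}
    w = [v - sum(x1) / n for v in x1]   # centred x_{t-1}
    z = [v - sum(yt) / n for v in yt]   # centred y_t
    s11 = sum(a * a for a in u)
    s22 = sum(a * a for a in w)
    s12 = sum(a * b for a, b in zip(u, w))
    s1y = sum(a * b for a, b in zip(u, z))
    s2y = sum(a * b for a, b in zip(w, z))
    det = s11 * s22 - s12 ** 2
    b = (s1y * s22 - s2y * s12) / det
    c = (s2y * s11 - s1y * s12) / det
    return sum(a * a for a in z) - b * s1y - c * s2y

downloads = [1, 4, 2, 6, 3, 8, 5, 10, 7, 12]
citations = [0] + downloads[:-1]    # citations follow downloads by one step
r_rss = rss_restricted(citations)
u_rss = rss_unrestricted(citations, downloads)
print(r_rss, u_rss)
# u_rss collapses toward 0: past downloads "Granger-cause" citations here.
```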

12.
13.
The journal impact factor (JIF) has been questioned considerably over its half-century of development because of its inconsistency with reputation-based evaluations of scientific journals. This paper proposes a publication delay adjusted impact factor (PDAIF), which takes publication delay into consideration to reduce its negative effect on the quality of impact factor determination. Based on citation data collected from the Journal Citation Reports and publication delay data extracted from the journals' official websites, PDAIFs for journals from business-related disciplines are calculated. The results show that PDAIF values are, on average, more than 50% higher than JIF results. Furthermore, journal rankings based on the PDAIF show very high consistency with reputation-based journal rankings. Moreover, based on a case study of journals published by ELSEVIER and INFORMS, we find that the PDAIF yields a greater impact factor increase for journals with longer publication delays, because it reduces that negative influence. Finally, insightful and practical suggestions for shortening publication delays are provided.
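One simple way to picture the delay adjustment (our assumed mechanism for illustration; the paper's exact PDAIF formula may differ) is to shift the citation-counting window past the journal's average publication delay:

```python
# Delay-adjusted impact sketch: a journal with a long publication delay
# accrues citations late, so a counting window shifted past the delay
# captures more of its impact than the standard window.

def adjusted_if(cites_by_month, items, delay_months):
    """Citations in a 24-month window starting after the delay,
    divided by the number of citable items."""
    window = cites_by_month[delay_months:delay_months + 24]
    return sum(window) / items

# Illustrative ramp: month m since publication brings m citations.
cites = list(range(36))
print(adjusted_if(cites, 10, 0))  # 27.6 (standard window)
print(adjusted_if(cites, 10, 9))  # 49.2 (window shifted past a 9-month delay)
```

Consistent with the paper's case study, the longer the delay, the larger the gain from the adjustment.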

14.
From the way it was initially defined (Hirsch, 2005), the h-index naturally encourages focus on the most highly cited publications of an author, and this in turn has led to a (predominantly) rank-based approach to its investigation. However, Hirsch (2005) and Burrell (2007a) both adopted a frequency-based approach, leading to general conjectures regarding the relationship between the h-index and the author's publication and citation rates as well as his/her career length. Here we apply the distributional results of Burrell (2007a, 2013b) to three published data sets to show that a good estimate of the h-index can often be obtained knowing only the number of publications and the number of citations. (Exceptions can occur when an author has one or more “outliers” in the upper tail of the citation distribution.) In other words, maybe the main body of the distribution determines the h-index, not the wild wagging of the tail. Furthermore, the simple geometric distribution turns out to be the key.
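Under a geometric-distribution reading of this idea, the h-index can be estimated from the publication count P and citation count C alone. A sketch assuming citations per paper follow a geometric distribution with mean C/P, so that the expected number of papers with at least h citations is P·qʰ with q = mean/(1 + mean) (this specific estimator is our illustration of the approach, not Burrell's exact formulation):

```python
# Estimate the h-index from publication and citation counts alone,
# assuming a geometric citation distribution with mean C/P:
# the expected number of papers with >= h citations is P * q**h,
# and h is the largest value still covered by that expectation.

def h_estimate(n_pubs, n_cites):
    mean = n_cites / n_pubs
    q = mean / (1 + mean)
    h = 0
    while n_pubs * q ** (h + 1) >= h + 1:
        h += 1
    return h

# 25 papers with 100 citations in total (mean 4 citations/paper):
print(h_estimate(25, 100))  # 6
```

Note the abstract's caveat: the estimate degrades when the upper tail holds outliers, since the geometric body no longer describes those papers.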

15.
The scientific impact of a publication can be determined not only by the number of times it is cited but also by the citation speed with which its content is noted by the scientific community. Here we present the citation speed index as a meaningful complement to the h index: whereas the calculation of the h index bases the impact of publications on the number of citations, the calculation of the speed index uses the number of months that have elapsed since the first citation, i.e., the speed with which the results of publications find reception in the scientific community. The speed index is defined as follows: a group of papers has the index s if for s of its Np papers the first citation was at least s months ago, and for the other (Np − s) papers the first citation was ≤ s months ago.
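The definition above translates directly into code, in the same way the h-index does (the input values are illustrative):

```python
# Citation speed index s: a group of papers has index s if s of them
# received their first citation at least s months ago.

def speed_index(months_since_first_citation):
    """Input: for each paper, the months elapsed since its first
    citation (0 for papers not yet cited)."""
    m = sorted(months_since_first_citation, reverse=True)
    s = 0
    while s < len(m) and m[s] >= s + 1:
        s += 1
    return s

# Six papers, first cited 36, 24, 10, 4, 2 and 1 months ago:
print(speed_index([36, 24, 10, 4, 2, 1]))  # 4
```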

16.
Bibliometrics has become an indispensable tool in the evaluation of institutions (in the natural and life sciences). An evaluation report without bibliometric data has become a rarity. However, evaluations are often required to measure the citation impact of publications in very recent years in particular. As a citation analysis is only meaningful for publications for which a citation window of at least three years is guaranteed, very recent years cannot (and should not) be included in the analysis. This study presents various options for dealing with this problem in statistical analysis. The publications of two universities from 2000 to 2011 are used as a sample dataset (n = 2652: univ 1 = 1484 and univ 2 = 1168). One option is to plot the citation impact data (percentiles) and use a line regressed on the ‘distant’ publication years (with a confidence interval) to show the trend for the ‘very recent’ publication years. Another way of dealing with the problem is to work with the concept of samples and populations. The third option (closely related to the second) is the application of the counterfactual concept of causality.
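The first option, regressing percentiles on the 'distant' years and extending the line over the 'very recent' years, can be sketched as follows (illustrative data, and without the confidence band a real analysis would include):

```python
# Least-squares trend line fitted to 'distant' publication years,
# extrapolated over the 'very recent' years whose citation windows
# are still too short for direct analysis.

def linfit(xs, ys):
    """Ordinary least squares; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

years = [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008]
pctl = [52, 53, 55, 54, 56, 57, 59, 58, 60]  # mean percentile per year
a, b = linfit(years, pctl)
for y in (2009, 2010, 2011):  # 'very recent' years: trend only
    print(y, round(a + b * y, 1))
```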

17.
A citation is a well-established mechanism for connecting scientific artifacts. Citation networks are used in citation analysis for a variety of reasons, most prominently to give credit for scientists' work. However, because of current citation practices, scientists tend to cite only publications, leaving out other types of artifacts such as datasets. Datasets therefore do not receive appropriate credit even though they are increasingly reused and experimented with. We develop a network flow measure, called DataRank, aimed at closing this gap. DataRank assigns a relative value to each node in the network based on how citations flow through the graph, differentiating publication and dataset flow rates. We evaluate the quality of DataRank by estimating its accuracy at predicting the usage of real datasets: web visits to GenBank and downloads of Figshare datasets. We show that DataRank is better at predicting this usage than alternatives, while offering additional interpretable outcomes. We discuss improvements to citation behavior and algorithms to properly track and assign credit to datasets.
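A toy sketch of the flow idea (our simplification for illustration, not the paper's exact DataRank algorithm): a PageRank-style iteration in which publications and datasets pass rank along citations at different, type-dependent rates:

```python
# PageRank-style flow with per-type pass-along rates, so publication
# and dataset nodes propagate credit at different speeds.

def datarank(graph, kind, rate, iters=200):
    """graph: node -> list of cited nodes; kind: node -> 'pub'|'data';
    rate: per-kind fraction of rank passed along citation edges."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1 / n for v in nodes}
    for _ in range(iters):
        new = dict.fromkeys(nodes, 0.0)
        for v in nodes:
            d = rate[kind[v]] if graph[v] else 0.0
            for t in graph[v]:               # pass credit along citations
                new[t] += d * rank[v] / len(graph[v])
            leak = (1 - d) * rank[v] / n     # teleport the remainder
            for t in nodes:
                new[t] += leak
        rank = new
    return rank

graph = {"paper1": ["data1", "paper2"], "paper2": ["data1"], "data1": []}
kind = {"paper1": "pub", "paper2": "pub", "data1": "data"}
rate = {"pub": 0.9, "data": 0.5}
ranks = datarank(graph, kind, rate)
print(max(ranks, key=ranks.get))  # data1: it receives flow from both papers
```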

18.
To explore the citation evolution of papers published in the same year but in different months, we selected papers published in 2005 from a discipline (physical geography), a subject (diabetes: endocrine and metabolism), and a journal (Journal of Biological Chemistry) as research objects. These papers were divided into six groups according to publication month, and we analyzed citations to them for the 9 years after publication. The results showed that within 5 years after publication of the physical geography papers, the overall differences in citations among the groups were statistically significant (P < 0.05); after that, the differences were not statistically significant. The same held for the diabetes (endocrine and metabolism) papers within 5 years of publication, and for the Journal of Biological Chemistry papers within 7 years of publication. Citations thus followed the same pattern irrespective of discipline, subject, or journal: citations of papers published in the same year but in different months differed markedly in the first few years after publication, but as time went on, a difference in publication month within the same calendar year alone did not affect the papers' longer-term citations.

19.
This study provides a conceptual overview of the literature dealing with the process of citing documents (focusing on the literature of the recent decade). It presents theories that have been proposed to explain the citation process, as well as studies that have empirically analyzed it. The overview is referred to as conceptual because it is structured around core elements of the citation process: the context of the cited document, the processes from selection to citation of documents, and the context of the citing document. These core elements are presented in a schematic representation. The overview can be used to find answers to basic questions about the practice of citing documents. Besides furthering understanding of the citation process, it delivers basic information for the proper application of citations in research evaluation.

20.