Related articles (20 results found)
1.
In this paper we attempt to assess the impact of journals in the field of forestry, in terms of bibliometric data, by providing an evaluation of forestry journals based on data envelopment analysis (DEA). In addition, based on the results of the conducted analysis, we provide suggestions for improving the impact of the journals in terms of widely accepted measures of journal citation impact, such as the journal impact factor (IF) and the journal h-index. More specifically, by modifying certain inputs associated with the productivity of forestry journals, we illustrate how this method could be used to raise their efficiency, which in terms of research impact can then be translated into an increase in their bibliometric indices, such as the h-index, IF or eigenfactor score.
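To make the DEA step concrete, here is a minimal sketch of an input-oriented CCR efficiency model solved with scipy.optimize.linprog. The journal data and the choice of inputs/outputs (papers and references as inputs, citations and h-index as outputs) are illustrative assumptions, not the study's actual variables.

```python
# Hypothetical illustration of an input-oriented CCR DEA model for journals;
# the inputs/outputs chosen here are assumptions, not the paper's actual variables.
import numpy as np
from scipy.optimize import linprog

# rows = journals (DMUs); columns = inputs (e.g. papers published, references made)
X = np.array([[120, 3000], [80, 2500], [200, 7000], [60, 1500]], dtype=float)
# columns = outputs (e.g. citations received, journal h-index)
Y = np.array([[900, 25], [700, 20], [1100, 30], [500, 15]], dtype=float)

def ccr_efficiency(j0):
    """Envelopment form: minimise theta s.t. sum_j lambda_j*x_j <= theta*x_j0, sum_j lambda_j*y_j >= y_j0."""
    n, m = X.shape          # n journals, m inputs
    s = Y.shape[1]          # s outputs
    c = np.zeros(n + 1)     # decision variables: [theta, lambda_1 .. lambda_n]
    c[0] = 1.0              # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):      # input constraints: sum_j lambda_j*x_ij - theta*x_i,j0 <= 0
        A_ub.append(np.concatenate(([-X[j0, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):      # output constraints: -sum_j lambda_j*y_rj <= -y_r,j0
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[j0, r])
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas non-negative
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun          # efficiency score; 1.0 means the journal is efficient

for j in range(X.shape[0]):
    print(f"journal {j}: DEA efficiency = {ccr_efficiency(j):.3f}")
```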

2.
In this paper, we propose two methods for scoring scientific output based on statistical quantile plotting. First, a rescaling of journal impact factors for scoring scientific output at the macro level is proposed. It is based on normal quantile plotting, which allows impact data spanning several subject categories to be transformed to a standardized distribution. This can be used to compare the scientific output of larger entities, such as departments working in quite different areas of research. Next, as an alternative to the Hirsch index [Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572], the extreme value index is proposed as an indicator for assessing the research performance of individual scientists. In the case of Lotkaian–Zipf–Pareto behaviour of an individual's citation counts, the extreme value index can be interpreted as the slope in a Pareto–Zipf quantile plot. This index, in contrast to the Hirsch index, is not influenced by the number of publications but stresses the decay of the statistical tail of citation counts. It also appears to be much less sensitive to the science field than the Hirsch index.
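As a rough illustration of the tail-based indicator described above, the sketch below estimates the extreme value index with the Hill estimator, which under Lotkaian–Zipf–Pareto behaviour corresponds to the slope in the upper part of a Pareto quantile plot. The citation data and the tail size k are arbitrary assumptions, and the paper's exact estimation procedure may differ.

```python
# Hedged sketch: estimate the extreme value index (tail index) of one author's
# citation counts. The Hill estimator below is one standard estimator of the slope
# of the Pareto quantile plot; it is not necessarily the paper's exact procedure.
import numpy as np

citations = np.array([310, 150, 90, 70, 55, 40, 33, 25, 18, 12, 9, 7, 5, 3, 2, 1, 1, 0])

def hill_estimator(counts, k):
    """Average log-excess of the k largest counts over the (k+1)-th largest."""
    x = np.sort(counts[counts > 0])[::-1].astype(float)  # positive counts, descending
    if k >= len(x):
        raise ValueError("k must be smaller than the number of positive counts")
    top, threshold = x[:k], x[k]
    return float(np.mean(np.log(top) - np.log(threshold)))

k = 5  # arbitrary tail size; in practice chosen by inspecting the quantile plot
print(f"estimated extreme value index: {hill_estimator(citations, k):.2f}")
```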

3.
The journal impact factor (JIF) has been questioned considerably during its development over the past half-century because of its inconsistency with scholarly reputation evaluations of scientific journals. This paper proposes a publication delay adjusted impact factor (PDAIF), which takes publication delay into consideration to reduce its negative effect on the determination of the impact factor. Based on citation data collected from Journal Citation Reports and publication delay data extracted from the journals' official websites, PDAIFs are calculated for journals from business-related disciplines. The results show that PDAIF values are, on average, more than 50% higher than JIF values. Furthermore, journal rankings based on PDAIF show very high consistency with reputation-based journal rankings. Moreover, based on a case study of journals published by ELSEVIER and INFORMS, we find that PDAIF yields a greater impact factor increase for journals with longer publication delays because it reduces that negative influence. Finally, insightful and practical suggestions for shortening publication delay are provided.
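The abstract does not state the PDAIF formula, so the following sketch only illustrates one plausible reading: the standard two-year citation window is shifted back by the journal's average publication delay. All names and numbers are hypothetical.

```python
# Hedged illustration only: the actual PDAIF formula is not given in the abstract.
# Assumption: the two-year JIF window is shifted back by the journal's average
# publication delay (in whole years), so slowly published items still fall inside
# their effective citation window.

def jif(citations_by_pub_year, items_by_pub_year, year, shift=0):
    """Citations in `year` to items published in the two prior years, shifted back."""
    window = [year - 1 - shift, year - 2 - shift]
    cites = sum(citations_by_pub_year.get(y, 0) for y in window)
    items = sum(items_by_pub_year.get(y, 0) for y in window)
    return cites / items if items else 0.0

# toy data: citations received in 2023 to papers of each publication year
citations_by_pub_year = {2022: 40, 2021: 120, 2020: 150, 2019: 130}
items_by_pub_year = {2022: 100, 2021: 110, 2020: 105, 2019: 95}

avg_delay_years = 1  # assumed average publication delay for this journal
print("JIF   :", jif(citations_by_pub_year, items_by_pub_year, 2023))
print("PDAIF?:", jif(citations_by_pub_year, items_by_pub_year, 2023, shift=avg_delay_years))
```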

4.
5.
The Hirsch index is a number that synthesizes a researcher's output. It is the maximum number h such that the researcher has h papers with at least h citations each. Woeginger [Woeginger, G. J. (2008a). An axiomatic characterization of the Hirsch-index. Mathematical Social Sciences, 56(2), 224–232; Woeginger, G. J. (2008b). A symmetry axiom for scientific impact indices. Journal of Informetrics, 2(3), 298–303] characterizes the Hirsch index when indices are assumed to be integer-valued. In this note, the Hirsch index is characterized, when indices are allowed to be real-valued, by adding to Woeginger's monotonicity two further axioms related to the concept of monotonicity.
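A minimal sketch of the h-index as defined above – the largest h such that the researcher has h papers with at least h citations each:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3, 1, 0]))  # -> 4
```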

6.
Hirsch's h-index seeks to give a single number that in some sense summarizes an author's research output and its impact. Essentially, the h-index seeks to identify the most productive core of an author's output in terms of citations received. This most productive set we refer to as the Hirsch core, or h-core. Jin's A-index relates to the average impact, as measured by the average number of citations, of this “most productive” core. In this paper, we investigate both the total productivity of the Hirsch core – what we term the size of the h-core – and the A-index using a previously proposed stochastic model for the publication/citation process, emphasising the importance of the dynamic, or time-dependent, nature of these measures. We also look at the inter-relationships between these measures. Numerical investigations suggest that the A-index is a linear function of time and of the h-index, while the size of the Hirsch core has an approximate square-law relationship with time, and hence also with the A-index and the h-index.
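As a rough illustration of the quantities discussed above, the sketch below computes the h-core and Jin's A-index (the average citation count over the core), and reads the "size" of the h-core as the total number of citations received by the core papers; that reading of "total productivity" is an assumption of this sketch.

```python
# Sketch of the Hirsch core, Jin's A-index and the core's "size" for one author.
# "Size" is read here as the total citations of the h-core papers, which is one
# plausible interpretation of the core's "total productivity".

def h_core(citations):
    counts = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)
    return counts[:h]                       # the h most cited papers

def a_index(citations):
    core = h_core(citations)
    return sum(core) / len(core) if core else 0.0

cites = [25, 18, 12, 9, 6, 4, 2, 1]
core = h_core(cites)
print("h =", len(core), " core size (total citations) =", sum(core),
      " A-index =", a_index(cites))
```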

7.
Hirsch [Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572] has proposed the h index as a single-number criterion for evaluating the scientific output of a researcher. We investigated the convergent validity of decisions for awarding long-term fellowships to post-doctoral researchers as practiced by the Boehringer Ingelheim Fonds (B.I.F.) by using the h index. Our study examined 414 B.I.F. applicants (64 approved and 350 rejected) with a total of 1586 papers. The results show that the applicants' h indices correlate substantially with standard bibliometric indicators. Even though the h indices of approved B.I.F. applicants are on average (arithmetic mean and median) higher than those of rejected applicants (and thus fundamentally confirm the validity of the funding decisions), the distributions of the h indices show partial overlaps, which we categorized as type I errors (falsely drawn approval) or type II errors (falsely drawn rejection). Approximately one-third of the decisions to award a fellowship to an applicant show a type I error, and about one-third of the decisions not to award a fellowship to an applicant show a type II error. Our analyses of possible reasons for these errors show that the applicant's field of study, but not personal ties between the B.I.F. applicant and the B.I.F., can increase or decrease the risks of type I and type II errors.

8.
Citation-based approaches, such as the impact factor and h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature for different disciplines shows that the level of correlation between these citation-based approaches is domain dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and then re-rank the journals using a combination method. These new ranking results can be used as a reference for researchers in choosing their publication outlets.
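The combination method is not named in the abstract; purely as an illustration, the sketch below merges an impact-factor ranking and an h-index ranking by averaging ranks (a Borda-style aggregation), which is one simple way such a re-ranking could be carried out.

```python
# Illustrative only: the paper's actual combination method is not specified here.
# Average-rank (Borda-style) aggregation of an IF ranking and an h-index ranking.

journals = {"J1": {"if": 3.2, "h": 45}, "J2": {"if": 2.1, "h": 60},
            "J3": {"if": 4.0, "h": 30}, "J4": {"if": 1.5, "h": 20}}

def ranks(metric):
    ordered = sorted(journals, key=lambda j: journals[j][metric], reverse=True)
    return {j: r for r, j in enumerate(ordered, start=1)}

if_rank, h_rank = ranks("if"), ranks("h")
combined = sorted(journals, key=lambda j: (if_rank[j] + h_rank[j]) / 2)
print("combined ranking:", combined)
```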

9.
Influence and capital are two concepts used to evaluate scholarly outputs, and these can be measured using the Scholarly Capital Model as a modelling tool. The tool looks at the concepts of connectedness, venue representation, and ideational influence using centrality measures within a social network. This research used co-authorships and h-indices to investigate authors who published papers in the field of information behaviour between 1980 and 2015, as extracted from Web of Science. The findings show a relationship between the authors' connectedness and venue (journal) representation. It could be seen that the venue (journal) influences the chance of citation, and equally, the prestige (centrality) of authors probably raises the citations of the journals. The research also shows a significant positive relationship between venue representation and ideational influence. This means that a research work published in a highly cited journal will gain more visibility and receive more citations.

10.
This article reviews the debate within bibliometrics regarding the h-index. Despite its popularity as a decision-making tool within higher education, the h-index has become increasingly controversial among specialists. Fundamental questions remain regarding the extent to which the h-index actually measures what it sets out to measure. Unfortunately, many aspects of this debate are confined to highly technical discussions in specialised journals. This article explains in simple terms exactly why a growing number of bibliometricians are sceptical that the h-index is a useful tool for evaluating researchers. It concludes that librarians should be cautious in their recommendations regarding this metric, at least until better evidence becomes available.

11.
Soliciting manuscripts: techniques and procedure optimization (total citations: 2; self-citations: 0; citations by others: 2)
董燕萍 《编辑学报》2015, 27(3): 264-265
High-quality manuscripts are the guarantee of a medical journal's quality. Relying on unsolicited submissions alone is increasingly incompatible with a journal's development, and actively soliciting manuscripts is an effective way to obtain high-quality papers and raise a journal's impact. This article summarizes the manuscript-soliciting experience accumulated over many years at the 《国际肝胆胰疾病杂志》 (English-language edition) and offers ideas and inspiration for fellow editors.

12.
佟建国, 颜帅, 陈浩元 《编辑学报》2013, 25(3): 208-210
University natural science journals are a distinctive group among China's scientific and technical periodicals. Using statistical data, we demonstrate the good reputation enjoyed by these journals. Based on journal citation data and website download data, we compare them with specialized scientific journals and conclude that their academic quality is comparable to that of the nation's scientific journals as a whole. We call for journal reform to be advanced in accordance with the operating patterns of this category of journals, so as to promote their healthy development.

13.
We propose two new indices that measure a scientific researcher's overall influence and the degree to which his or her work is associated with the mainstream research subjects of a scientific field. These two new measures – the total influence index and the mainstream index – differ from traditional performance measures such as the simple citation count and the h-index in that they take into account the indirect influence of an author's work. Indirect influence describes a scientific publication's impact upon subsequent works that do not reference it directly. The two measures capture indirect-influence information from the knowledge-emanating paths embedded in the citation network of a target scientific field. We use the Hirsch index, data envelopment analysis, and the lithium iron phosphate battery technology field to examine the characteristics of these two measures. The results show that the total influence index favors earlier researchers and successfully highlights those researchers who have made crucial contributions to the target scientific field. The mainstream index, in addition to underlining total influence, also spotlights active researchers who enter a scientific field in a later development stage. In summary, these two new measures are valuable complements to traditional scientific performance measures.

14.
In the present paper the Percentage Rank Position (PRP) index, derived from the principle of Similar Distribution of Information Impact in different fields of science (Vinkler, 2013), is suggested for the comparative assessment of journals in different research fields. The publications in the journals dedicated to a field are ranked by citation frequency, and the PRP-index of the papers in the elite set of the field is calculated. The PRP-index relates the citation rank number of a paper to the total number of papers in the corresponding set. The sum of the PRP-index values of the elite papers in a journal, PRP(j,F), may represent the eminence of the journal in the field. The non-parametric and non-dimensional PRP(j,F) index of journals is believed to be comparable across fields.
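The abstract only loosely specifies the PRP-index, so the sketch below uses an illustrative guess – PRP = 100·(N − r + 1)/N for a paper of citation rank r among N field papers, summed over a journal's elite papers – rather than Vinkler's exact formula.

```python
# Illustrative guess at the PRP-index; Vinkler's exact formula may differ.
# Assumption: PRP of a paper with citation rank r among N field papers is
# 100 * (N - r + 1) / N, and PRP(j, F) sums this over the journal's elite papers.

def prp_paper(rank, n_field_papers):
    return 100.0 * (n_field_papers - rank + 1) / n_field_papers

def prp_journal(elite_ranks, n_field_papers):
    """Sum of PRP values of a journal's papers that fall in the field's elite set."""
    return sum(prp_paper(r, n_field_papers) for r in elite_ranks)

n_field_papers = 5000
elite_ranks_journal_a = [3, 17, 42, 118]   # citation ranks of journal A's elite papers
print(f"PRP(A, F) = {prp_journal(elite_ranks_journal_a, n_field_papers):.1f}")
```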

15.
This paper explores a new indicator of journal citation impact, denoted source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the rapidity of maturing of citation impact, and the extent to which the database used for the assessment covers the field's literature. It further develops Eugene Garfield's notion of a field's ‘citation potential’, defined as the average length of reference lists in a field, which determines the probability of being cited, and his point that fair performance assessments need to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper to the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in mathematics, engineering and the social sciences tend to have lower values than titles in the life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher values than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus.
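A simplified sketch of the core SNIP ratio described above – citations per paper divided by the field's citation potential, here approximated by the average reference-list length of the citing papers. The official Scopus computation includes further corrections (database coverage, citation windows) that are omitted here, and the toy numbers are assumptions.

```python
# Simplified sketch of the SNIP idea: raw impact per paper divided by the field's
# citation potential. The official computation involves additional corrections
# (database coverage, citation windows, relative potentials) omitted here.

def snip(citations_to_journal, papers_in_journal, ref_list_lengths_of_citing_papers):
    raw_impact_per_paper = citations_to_journal / papers_in_journal
    citation_potential = (sum(ref_list_lengths_of_citing_papers)
                          / len(ref_list_lengths_of_citing_papers))
    return raw_impact_per_paper / citation_potential

# toy numbers: a maths-style journal vs a life-sciences-style journal
print(round(snip(300, 100, [12, 15, 10, 18, 14]), 3))   # short reference lists
print(round(snip(900, 100, [45, 50, 38, 60, 55]), 3))   # long reference lists
```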

16.
Journal metrics are employed for the assessment of scholarly scientific journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one and two year old articles, while the 5-year journal impact factor (5-JIF) counts citations to one to five year old articles. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no optimal fixed impact maturity time valid for all fields. In some of them two years provides good performance, whereas in others three or more years are necessary. Therefore, there is a problem when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the previous 2-year time window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance with respect to the within-group variance in a random sample of about six hundred journals from eight different fields.
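A minimal sketch of the 2M-JIF idea: compute the ordinary two-year impact factor over each rolling two-year window and keep the maximum. The five-year search horizon assumed below mirrors the 5-JIF horizon but is only an illustrative choice; the paper's exact range may differ.

```python
# Hedged sketch of the 2-year maximum journal impact factor (2M-JIF): take the best
# 2-year citation window within an assumed 5-year horizon instead of the fixed
# previous-2-years window used by the ordinary 2-JIF.

def two_year_if(citations, items, pub_years):
    cites = sum(citations.get(y, 0) for y in pub_years)
    n = sum(items.get(y, 0) for y in pub_years)
    return cites / n if n else 0.0

def max_two_year_if(citations, items, census_year, horizon=5):
    windows = [(census_year - k - 1, census_year - k - 2) for k in range(horizon - 1)]
    return max(two_year_if(citations, items, w) for w in windows)

# toy data: citations received in 2023 to papers of each publication year, and item counts
citations = {2022: 50, 2021: 140, 2020: 180, 2019: 150, 2018: 90}
items = {2022: 100, 2021: 110, 2020: 100, 2019: 95, 2018: 90}

print("2-JIF :", round(two_year_if(citations, items, (2022, 2021)), 3))
print("2M-JIF:", round(max_two_year_if(citations, items, 2023), 3))
```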

17.
We axiomatize the well-known Hirsch index (h-index), which evaluates researcher productivity and impact on a field, and formalize a new axiom called head-independence. Under head-independence, a decrease, to some extent, in the number of citations of “frequently cited papers” has no effect on the index. Together with symmetry and axiom D, head-independence uniquely characterizes the h-index on a certain domain of indices. Some relationships between our axiomatization and those in the literature are also investigated.

18.
The h-index and its application to the evaluation of academic journals (total citations: 29; self-citations: 0; citations by others: 29)
The h-index proposed by J. E. Hirsch is considered a good indicator for evaluating the scientific achievement of researchers. It can also be applied effectively to the evaluation of academic journals, where its strengths complement those of the journal impact factor. As an example, the h-index of 《中华医学杂志》 is calculated, and the influence of various factors on the value of the h-index is emphasized.

19.
The Hirsch index h and the g index proposed by Egghe as well as the f index and the t index proposed by Tol are shown to be special cases of a family of Hirsch index variants, based on the generalized mean with exponent p. Inequalities between the different indices are derived from the generalized mean inequality. The graphical determination of the indices is shown for one example.
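A minimal sketch of the generalized-mean family referred to above: for exponent p, the index is the largest h such that the generalized mean M_p of the h highest citation counts is at least h. The mapping of p values to the named h, f, t and g indices printed below reflects this sketch's reading of the construction (minimum, harmonic, geometric and arithmetic mean, respectively), not a quotation from the paper.

```python
# Sketch: a family of Hirsch-type indices based on the generalized mean M_p of the
# top h citation counts; the index is the largest h with M_p(top h) >= h.
# The mapping to named indices (h, f, t, g) reflects this sketch's reading of the paper.
import math

def generalized_mean(values, p):
    if p == float("-inf"):                       # minimum
        return min(values)
    if p == 0:                                   # geometric mean
        return math.exp(sum(math.log(v) for v in values) / len(values))
    return (sum(v ** p for v in values) / len(values)) ** (1.0 / p)

def hirsch_family_index(citations, p):
    counts = sorted((c for c in citations if c > 0), reverse=True)
    h = 0
    for k in range(1, len(counts) + 1):
        if generalized_mean(counts[:k], p) >= k:
            h = k
    return h

cites = [30, 18, 12, 9, 6, 4, 2, 1]
for p, name in [(float("-inf"), "h (min)"), (-1, "f (harmonic)"),
                (0, "t (geometric)"), (1, "g (arithmetic)")]:
    print(f"p = {p}: index = {hirsch_family_index(cites, p)}  [{name}]")
```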

20.
From the way that it was initially defined (Hirsch, 2005), the h-index naturally encourages focus on the most highly cited publications of an author, and this in turn has led to a (predominantly) rank-based approach to its investigation. However, Hirsch (2005) and Burrell (2007a) both adopted a frequency-based approach, leading to general conjectures regarding the relationship between the h-index and the author's publication and citation rates as well as his/her career length. Here we apply the distributional results of Burrell (2007a, 2013b) to three published data sets to show that a good estimate of the h-index can often be obtained knowing only the number of publications and the number of citations. (Exceptions can occur when an author has one or more “outliers” in the upper tail of the citation distribution.) In other words, maybe the main body of the distribution determines the h-index, not the wild wagging of the tail. Furthermore, the simple geometric distribution turns out to be the key.
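As a rough illustration of the frequency-based estimate described above, the sketch below assumes citation counts follow a geometric distribution with mean C/N and takes the largest h for which the expected number of papers with at least h citations is still at least h. The specific estimator is an illustrative assumption; Burrell's published formulae may differ in detail.

```python
# Hedged sketch: estimate h from publication count N and total citations C alone,
# assuming citations per paper follow a geometric distribution on {0, 1, 2, ...}
# with mean C / N. Burrell's exact estimator may differ in detail.

def estimated_h(n_papers, total_citations):
    mean = total_citations / n_papers
    q = mean / (1.0 + mean)              # P(X >= h) = q ** h for this geometric model
    h = 0
    while n_papers * q ** (h + 1) >= h + 1:
        h += 1
    return h

print(estimated_h(n_papers=80, total_citations=1200))   # toy example
```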
