Similar literature
A total of 20 similar documents were retrieved.
1.
2.
It is well known that the distribution of citations to articles in a journal is skewed. We ask whether journal rankings based on the impact factor are robust with respect to this fact. We exclude the most cited paper, then the top 5 and top 10 most cited papers, for each of 100 economics journals and recalculate the impact factor. We then compare the resulting rankings with the original 2012 rankings. Our results show that the rankings are relatively robust. This holds for both the 2-year and the 5-year impact factor.
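The recalculation the authors describe can be reproduced in a few lines. The sketch below, using invented data and a hypothetical helper name, recomputes a 2-year impact factor after dropping a journal's top-cited items; whether the denominator should also shrink by the excluded count is not specified here, so it is exposed as an explicit assumption.

```python
def impact_factor_excluding_top(citations_per_item, top_n=0, shrink_denominator=True):
    """2-year impact factor after removing the top_n most cited items.

    citations_per_item: citations received in the census year by each item
        the journal published in the two preceding years.
    top_n: how many of the most cited items to exclude (0, 1, 5, 10, ...).
    shrink_denominator: whether excluded items also leave the denominator
        (an assumption; the paper's exact choice is not stated in the abstract).
    """
    kept = sorted(citations_per_item, reverse=True)[top_n:]
    denom = len(kept) if shrink_denominator else len(citations_per_item)
    return sum(kept) / denom

# Toy example: one highly cited paper dominates the numerator.
cites = [120, 8, 5, 3, 2, 1, 1, 0, 0, 0]
print(impact_factor_excluding_top(cites, top_n=0))   # 14.0
print(impact_factor_excluding_top(cites, top_n=1))   # ~2.22
```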

3.
Self-citation in natural science journals as a means of "regulating" the impact factor (Cited 14 times in total: 0 self-citations, 14 citations by others)
李运景  侯汉清 《情报学报》2006,25(2):172-178
Using the 《中国科技期刊引证报告》 (Chinese S&T Journal Citation Reports), this paper recalculates the impact factors of a number of journals in several disciplines after removing self-citations, and compares the impact factors and journal rankings before and after removal in order to examine the effect of journal self-citation on impact factors and journal rankings. The investigation finds that excessive self-citation by certain journals has already distorted journal rankings. Finally, several suggestions are offered on how to curb this practice.

4.
Citation-based approaches, such as the impact factor and h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature for different disciplines shows that the level of correlation between these citation-based approaches is domain-dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and then re-rank the journals using a combination method. These new ranking results can be used as a reference for researchers to choose their publication outlets.
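A rank correlation coefficient such as Spearman's ρ is the standard way to quantify how strongly two indicator-based rankings agree. The snippet below is a minimal illustration with invented journal scores rather than the paper's data.

```python
from scipy.stats import spearmanr

# Hypothetical scores of five journals under two citation-based indicators.
impact_factor = {"J1": 3.2, "J2": 1.1, "J3": 2.4, "J4": 0.9, "J5": 4.0}
h_index       = {"J1": 40,  "J2": 35,  "J3": 22,  "J4": 18,  "J5": 30}

journals = sorted(impact_factor)
rho, p_value = spearmanr([impact_factor[j] for j in journals],
                         [h_index[j] for j in journals])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")  # rho measures ranking agreement
```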

5.
[Purpose/Significance] To explore different methods of evaluating academic journals in order to obtain more effective, fair, and valuable information and to advance the theory and practice of journal evaluation. [Method/Process] The free disposal hull (FDH) efficiency evaluation model from economics is introduced into bibliometrics. The traditional FDH model is adjusted to construct a new journal evaluation method; seven indicators are selected to evaluate 41 library and information science journals, and the results are tested for correlation with other journal rankings. [Result/Conclusion] The FDH-based evaluation method yields more objective results that are significantly correlated with other journal rankings. By integrating the evaluation indicators, the method not only computes a score for each evaluated journal but also provides additional valuable information to support decision-making by journal departments. For example, the FDH model can identify a journal's "benchmark" and "control" journals; by comparing and analyzing itself against journals with similar indicator profiles, a journal can identify its own shortcomings and gaps and pursue continuous improvement or even surpass its peers.

6.
Citation averages, and Impact Factors (IFs) in particular, are sensitive to sample size. Here, we apply the Central Limit Theorem to IFs to understand their scale-dependent behavior. For a journal of n randomly selected papers from a population of all papers, we expect from the Theorem that its IF fluctuates around the population average μ, and spans a range of values proportional to σ/√n, where σ² is the variance of the population's citation distribution. The 1/√n dependence has profound implications for IF rankings: The larger a journal, the narrower the range around μ where its IF lies. IF rankings therefore allocate an unfair advantage to smaller journals in the high IF ranks, and to larger journals in the low IF ranks. As a result, we expect a scale-dependent stratification of journals in IF rankings, whereby small journals occupy the top, middle, and bottom ranks; mid-sized journals occupy the middle ranks; and very large journals have IFs that asymptotically approach μ. We obtain qualitative and quantitative confirmation of these predictions by analyzing (i) the complete set of 166,498 IF & journal-size data pairs in the 1997–2016 Journal Citation Reports of Clarivate Analytics, (ii) the top-cited portion of 276,000 physics papers published in 2014–2015, and (iii) the citation distributions of an arbitrarily sampled list of physics journals. We conclude that the Central Limit Theorem is a good predictor of the IF range of actual journals, while sustained deviations from its predictions are a mark of true, non-random, citation impact. IF rankings are thus misleading unless one compares like-sized journals or adjusts for these effects. We propose the Φ index, a rescaled IF that accounts for size effects, and which can be readily generalized to account also for different citation practices across research fields. Our methodology applies to other citation averages that are used to compare research fields, university departments or countries in various types of rankings.
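The scale effect is easy to simulate: for a journal assembled from n randomly drawn papers, the Central Limit Theorem predicts that its IF scatters around the population mean μ with a spread of roughly σ/√n. The sketch below uses an assumed skewed citation distribution, not the paper's data, purely to illustrate the 1/√n narrowing.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed skewed (lognormal) citation distribution for the population of papers.
population = rng.lognormal(mean=0.5, sigma=1.2, size=200_000)
mu, sigma = population.mean(), population.std()

for n in (50, 500, 5000):          # hypothetical journal sizes
    # IFs of 1000 simulated journals, each made of n randomly drawn papers.
    ifs = rng.choice(population, size=(1000, n)).mean(axis=1)
    print(f"n={n:5d}  observed IF spread={ifs.std():.3f}  CLT predicts {sigma/np.sqrt(n):.3f}")
# Small journals span a wide IF range around mu; large journals converge to mu.
```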

7.
We use data on economics, management and political science journals to produce quantitative estimates of the (in)consistency of evaluations based on six popular bibliometric indicators (impact factor, 5-year impact factor, immediacy index, article influence score, SNIP and SJR). We advocate a new approach to the aggregation of journal rankings. Since rank aggregation is a multicriteria decision problem, ranking methods from social choice theory can solve it. We apply either a direct ranking method based on the majority rule (the Copeland rule, the Markovian method) or a sorting procedure based on a tournament solution, such as the uncovered set and the minimal externally stable set. We demonstrate that the aggregate rankings reduce the number of contradictions and represent the set of single-indicator-based rankings better than any of the six rankings themselves.
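Of the methods listed, the Copeland rule is the simplest to illustrate: each journal scores its pairwise majority wins minus losses across the individual indicator rankings, and journals are ordered by that score. The self-contained sketch below uses invented rankings and is not the authors' implementation.

```python
from itertools import combinations

# Three hypothetical single-indicator rankings (best first) of four journals.
rankings = [
    ["A", "B", "C", "D"],   # e.g. by impact factor
    ["B", "A", "D", "C"],   # e.g. by SNIP
    ["A", "C", "B", "D"],   # e.g. by SJR
]

def majority_prefers(x, y):
    """True if x is ranked above y in a strict majority of the rankings."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

journals = rankings[0]
copeland = {j: 0 for j in journals}
for x, y in combinations(journals, 2):
    if majority_prefers(x, y):
        copeland[x] += 1
        copeland[y] -= 1
    elif majority_prefers(y, x):
        copeland[y] += 1
        copeland[x] -= 1

print(sorted(copeland, key=copeland.get, reverse=True))  # aggregate ranking
```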

8.
This paper presents five key questions that should be considered by researchers and librarians who develop or use survey-based (stated preference) journal rankings. Many of the distinctions among the various rankings—their attributes, strengths, and weaknesses—are captured in the responses to these five questions: What construct is being measured? How are differences in the construct expressed and recorded? Who are the respondents? Which journals are included in the rankings? How is respondents' familiarity with the journals taken into account? The paper also summarizes the problems that may require attention when survey-based rankings are used.

9.
This study uses citation data and survey data for 55 library and information science journals to identify three factors underlying a set of 11 journal ranking metrics (six citation metrics and five stated preference metrics). The three factors—three composite rankings—represent (1) the citation impact of a typical article, (2) subjective reputation, and (3) the citation impact of the journal as a whole (all articles combined). Together, they account for 77% of the common variance within the set of 11 metrics. Older journals (those founded before 1953) and nonprofit journals tend to have high reputation scores relative to their citation impact. Unlike previous research, this investigation shows no clear evidence of a distinction between the journals of greatest importance to scholars and those of greatest importance to practitioners. Neither group's subjective journal rankings are closely related to citation impact.

10.
Despite the recent trend toward open access in the academic community, restricted and open access journals continue to coexist. New journals need to choose their journal type (open versus restricted access), while incumbent journals may change theirs. To better understand how the academic community is shaped by journals' choices of journal type, we constructed a game-theoretical model of journal competition with endogenous journal qualities and journal types. We found that journals' equilibrium quality and types vary with the article processing charge (APC) and journals' preference for quality. Compared to the case in which both journals are open access, competition among journals of different types leads to higher journal quality standards being chosen in equilibrium when the APC is modest. Therefore, in an academic community where research quality is measured by the highest quality of the journals therein, journals of different types guarantee a good degree of knowledge diffusion at high quality.

11.
The journal impact factor (JIF) has been questioned considerably during its development over the past half-century because of its inconsistency with scholarly reputation evaluations of scientific journals. This paper proposes a publication delay adjusted impact factor (PDAIF), which takes publication delay into consideration in order to reduce its negative effect on the quality of the impact factor. Based on citation data collected from Journal Citation Reports and publication delay data extracted from the journals' official websites, PDAIFs are calculated for journals from business-related disciplines. The results show that PDAIF values are, on average, more than 50% higher than JIF values. Furthermore, journal rankings based on the PDAIF show very high consistency with reputation-based journal rankings. Moreover, based on a case study of journals published by ELSEVIER and INFORMS, we find that the PDAIF yields a larger impact factor increase for journals with longer publication delays, because it removes that negative influence. Finally, insightful and practical suggestions for shortening publication delay are provided.

12.
Scientific journals are ordered by their impact factor, while countries, institutions or researchers can be ranked by their scientific production, impact or by other simple or composite indicators, as in the case of university rankings. In this paper, the theoretical framework proposed in Criado, R., Garcia, E., Pedroche, F. & Romance, M. (2013), A new method for comparing rankings through complex networks: Model and analysis of competitiveness of major European soccer leagues, Chaos, 23, 043114, for football competitions is used as a starting point to define a general index describing the dynamics, or its opposite, stability, of rankings. Some characteristics for studying rankings, ranking dynamics measures and axioms for such indices are presented. Furthermore, the notion of volatility of elements in rankings is introduced. Our study includes rankings with ties, entrants and leavers. Finally, some worked out examples are shown.

13.
This paper analyzes several well-known bibliometric indices using an axiomatic approach. We concentrate on indices aiming at capturing the global impact of a scientific output and do not investigate indices aiming at capturing an average impact. Hence, the indices that we study are designed to evaluate authors or groups of authors but not journals. The bibliometric indices that are studied include classic ones such as the number of highly cited papers as well as more recent ones such as the h-index and the g-index. We give conditions that characterize these indices, up to the multiplication by a positive constant. We also study the bibliometric rankings that are induced by these indices. Hence, we provide a general framework for the comparison of bibliometric rankings and indices.
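For reference, the two most prominent indices mentioned can be computed directly from a sorted citation list: h is the largest rank such that the h-th most cited paper has at least h citations, and g is the largest rank such that the top g papers have at least g² citations in total. The sketch below uses an invented citation record.

```python
def h_index(citations):
    """Largest h such that there are h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers together have at least g**2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

record = [25, 17, 12, 9, 6, 4, 3, 1, 0, 0]   # hypothetical citation counts
print(h_index(record), g_index(record))       # 5 8
```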

14.
OBJECTIVE: To quantify the impact of Pakistani medical journals using the principles of citation analysis. METHODS: References of articles published in 2006 in three selected Pakistani medical journals were collected and examined. The number of citations for each Pakistani medical journal was totalled. The first ranking of journals was based on the total number of citations; the second ranking was based on the 2006 impact factor; and the third ranking was based on the 5-year impact factor. Self-citations were excluded in all three rankings. RESULTS: A total of 9079 citations in 567 articles were examined. Forty-nine separate Pakistani medical journals were cited. The Journal of the Pakistan Medical Association remains at the top in all three rankings, while the Journal of College of Physicians and Surgeons-Pakistan attains second position in the ranking based on the total number of citations. The Pakistan Journal of Medical Sciences moves to second position in the ranking based on the 2006 impact factor. The Journal of Ayub Medical College, Abbottabad moves to second position in the ranking based on the 5-year impact factor. CONCLUSION: This study examined the citation pattern of Pakistani medical journals. The impact factor, despite its limitations, is a valid indicator of journal quality.

15.
This paper provides a ranking of 69 marketing journals using a new Hirsch-type index, the hg-index, which is the geometric mean of the h- and g-indices. The applicability of this index is tested on data retrieved from Google Scholar on marketing journal articles published between 2003 and 2007. The authors investigate the relationship between the hg-ranking, the ranking implied by Thomson Reuters' Journal Impact Factor for 2008, and rankings in previous citation-based studies of marketing journals. They also test two models of consumption of marketing journals that take into account measures of citing (based on the hg-index), prestige, and reading preference.
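Given a journal's h- and g-indices, the hg-index used in this study is simply their geometric mean, intended to balance the robustness of h with the sensitivity of g to highly cited papers. A short sketch with hypothetical values:

```python
import math

h, g = 5, 8                  # hypothetical h- and g-index of a journal
hg = math.sqrt(h * g)        # hg-index: geometric mean of h and g
print(round(hg, 2))          # 6.32
```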

16.
The problem of how to rank academic journals in the communication field (human interaction, mass communication, speech, and rhetoric) is one of practical importance to scholars, university administrators, and librarians, yet there is no methodology that covers the field's journals comprehensively and objectively. This article reports a new ranking methodology based on empirical criteria. The new system relates independent measures of the prestige of the field's doctoral departments to information about where faculty members from those departments have published scholarly articles. This new approach identifies the field's most influential journals as those that more frequently publish the work of the field's top scholars and programs as perceived by their peers. This system was used to compute prestige weights (P-weights) for 65 communication journals. P-weights were found to be strongly correlated with ISI Web of Science journal impact factor scores and can be used to identify an overall prestige hierarchy for communication journals as well as prestige rankings by subject specialty.

17.
Predatory publishing has become a much‐discussed and highly visible phenomenon over the past few years. One widespread, but hardly tested, assumption is the idea that articles published in predatory journals deviate substantially from those published in traditional journals. In this paper, we address this assumption by utilizing corpus linguistic tools. We compare the ‘academic‐like’ nature of articles from two different journals in political science, one top‐ranking and one alleged predatory. Our findings indicate that there is significant linguistic variation between the two corpora along the dimensions that we test. The articles display notable differences in the types and usage of keywords in the two journals. We conclude that articles published in so‐called predatory journals do not conform to linguistic norms used in higher‐quality journals. These findings may demonstrate a lack of quality control in predatory journals but may also indicate a lack of awareness and use of such linguistic norms by their authors. We also suggest that there is a need for the education of authors in science writing as this may enable them to publish in higher‐ranked and quality‐assured outlets.

18.
Over the years, the number of journals indexed in Scopus has increased, although it varies significantly between countries. The increasing proportion of a country's international journals provides new venues for papers from that country to be seen by researchers worldwide. In this work, we evaluate the relationship of a country's scientific performance, or publication success, with both the quantity and quality of its journals. The specific objective of the study is to identify the relationship between a country's publication success and the quantity and quality of that country's journals indexed in Scopus during 2005–2014. We investigated the publication success of 102 individual countries, measured by their scientific productivity, impact and collaboration indicators, together with the quantity of each country's Scopus-indexed journals in 2014 (a total of 22,581 journals) and the quality of those journals. Scopus-indexed journals come predominantly from Western Europe (48.9%) and North America (27.7%), with the United States and the United Kingdom dominating with a combined 51%. The contribution from peripheral countries is comparatively small; however, there is a good number of contributions from South-East Asian countries. Estonia is the fastest-growing country in terms of journals indexed in Scopus, followed by Iran and Malaysia. Among the studied indices, it was found that the publication success (total publications and total citations) of the 102 countries is strongly correlated with both quantity (number of indexed journals and number of documents published in indexed journals) and quality (citations per paper, SJR, h-index, CiteScore and SNIP) indicators of a country's journals. We conclude that the scientific productivity of a country depends critically on the number of its journals indexed in citation databases. The study provides a context within which the relative success of publications can be assessed, yielding new insights into the scientific impact of individual countries and the performance of the journals they publish.

19.
Promoting the standardization of instructions for authors in Chinese medical journals (Cited 3 times in total: 0 self-citations, 3 citations by others)
The instructions for authors of 11 domestic Chinese-language medical journals were compared with those of 10 leading SCI-indexed international journals, and their similarities and differences were analyzed. The results show that the instructions for authors of the 10 international journals are all information-rich, with many items and detailed descriptions of each, whereas among the 11 Chinese-language journals, some instructions are overly simple and omit many items. It is argued that Chinese medical journals need to learn from leading international journals and supplement and improve their instructions for authors, thereby promoting the standardization of instructions for authors in Chinese medical journals.

20.
Reflections on the sustainable development of military academy journals (Cited 3 times in total: 3 self-citations, 0 citations by others)
Compared with civilian university journals, military academy journals have both common features and their own characteristics. Starting from these characteristics, it is argued that the sustainable development of military academy journals requires attention to four issues: giving full play to the editor-in-chief's core role, the confidentiality of the publication, its distinctive features, and its quality.
