Similar Documents
20 similar documents retrieved.
1.
In a recent paper in the Journal of Informetrics, Habibzadeh and Yadollahie [Habibzadeh, F., & Yadollahie, M. (2008). Journal weighted impact factor: A proposal. Journal of Informetrics, 2(2), 164–172] propose a journal weighted impact factor (WIF). Unlike the ordinary impact factor, the WIF of a journal takes into account the prestige or the influence of citing journals. In this communication, we show that the way in which Habibzadeh and Yadollahie calculate the WIF of a journal has some serious problems. Due to these problems, a ranking of journals based on WIF can be misleading. We also indicate how the problems can be solved by changing the way in which the WIF of a journal is calculated.
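The idea behind a weighted impact factor, counting a citation for more when it comes from a more influential journal, can be illustrated in a few lines of code. The sketch below is a minimal, hypothetical version of that general idea and not Habibzadeh and Yadollahie's exact formula (nor the corrected calculation proposed in this communication); the weighting scheme and the toy numbers are assumptions.

```python
# Minimal sketch of a prestige-weighted impact factor (illustrative only;
# NOT the exact WIF formula of Habibzadeh & Yadollahie, 2008).
# Assumption: each citation is weighted by the citing journal's own impact
# factor, normalized by the mean impact factor of the citing journals.

def weighted_impact_factor(citations_from, citing_if, citable_items):
    """citations_from: {citing journal: citations given to the target journal}
    citing_if:       {citing journal: that journal's own impact factor}
    citable_items:   number of citable items of the target journal."""
    mean_if = sum(citing_if.values()) / len(citing_if)
    weighted = sum(n * (citing_if[j] / mean_if)   # prestige weight per citation
                   for j, n in citations_from.items())
    return weighted / citable_items

# Toy example with hypothetical numbers
cites = {"Journal A": 40, "Journal B": 10}
ifs = {"Journal A": 1.0, "Journal B": 5.0}
print(round(weighted_impact_factor(cites, ifs, 25), 2))   # 1.2
```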

2.
Publication patterns of 79 forest scientists awarded major international forestry prizes during 1990-2010 were compared with the journal classification and ranking promoted as part of the ‘Excellence in Research for Australia’ (ERA) by the Australian Research Council. The data revealed that these scientists exhibited an elite publication performance during the decade before and two decades following their first major award. An analysis of their 1703 articles in 431 journals revealed substantial differences between the journal choices of these elite scientists and the ERA classification and ranking of journals. Implications from these findings are that additional cross-classifications should be added for many journals, and there should be an adjustment to the ranking of several journals relevant to the ERA Field of Research classified as 0705 Forestry Sciences.

3.
This paper provides a ranking of 69 marketing journals using a new Hirsch-type index, the hg-index, which is the geometric mean of the h-index and the g-index. The applicability of this index is tested on data retrieved from Google Scholar on marketing journal articles published between 2003 and 2007. The authors investigate the relationship between the hg-ranking, the ranking implied by Thomson Reuters’ Journal Impact Factor for 2008, and rankings in previous citation-based studies of marketing journals. They also test two models of consumption of marketing journals that take into account measures of citing (based on the hg-index), prestige, and reading preference.
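Because the hg-index is defined as the geometric mean of the h-index and the g-index, it can be computed directly from a journal's per-article citation counts. The following sketch uses toy data and a simple variant of the g-index capped at the number of papers; it illustrates the calculation only and does not reproduce the Google Scholar dataset used in the study.

```python
import math

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return max((i + 1 for i, c in enumerate(cites) if c >= i + 1), default=0)

def g_index(citations):
    """Largest g such that the top g papers have at least g**2 citations in total
    (simple variant capped at the number of papers)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

def hg_index(citations):
    """Geometric mean of the h-index and the g-index."""
    return math.sqrt(h_index(citations) * g_index(citations))

# Toy citation record for one journal's articles (hypothetical numbers)
cites = [25, 17, 12, 9, 8, 5, 3, 1, 0]
print(h_index(cites), g_index(cites), round(hg_index(cites), 2))   # 5 8 6.32
```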

4.
Citation-based approaches, such as the impact factor and h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature for different disciplines shows that the level of correlation between these citation-based approaches is domain dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and then re-rank the journals using a combination method. These new ranking results can be used as a reference for researchers to choose their publication outlets.
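The abstract does not specify the combination method, so the sketch below only illustrates the general workflow: measure the rank correlation between two citation-based rankings, then merge them with a simple average-rank combination. The journal data and the averaging rule are illustrative assumptions, not the authors' procedure.

```python
from scipy.stats import spearmanr

# Hypothetical impact factors and h-indices for five journals
journals = ["J1", "J2", "J3", "J4", "J5"]
impact_factor = [3.2, 1.1, 2.4, 0.9, 4.0]
h_index       = [40,  35,  20,  15,  38]

rho, p_value = spearmanr(impact_factor, h_index)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

def ranks(values):
    """Rank 1 = best (largest value)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

# Average-rank combination (one possible re-ranking scheme, not necessarily
# the method used in the paper)
combined = [(a + b) / 2 for a, b in zip(ranks(impact_factor), ranks(h_index))]
for j, score in sorted(zip(journals, combined), key=lambda t: t[1]):
    print(j, score)
```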

5.
This paper reviews a number of studies comparing Thomson Scientific’s Web of Science (WoS) and Elsevier’s Scopus. It collates their journal coverage in an important medical subfield: oncology. It is found that all WoS-covered oncological journals (n = 126) are indexed in Scopus, but that Scopus covers many more journals (an additional n = 106). However, the latter group tends to have much lower impact factors than WoS-covered journals. Among the top 25% of sources with the highest impact factors in Scopus, 94% are indexed in the WoS, compared with only 6% of the bottom 25%. In short, in oncology the WoS is a genuine subset of Scopus, and tends to cover the best journals from it in terms of citation impact per paper. Although Scopus covers 90% more oncological journals than WoS, the average Scopus-based impact factor for journals indexed by both databases is only 2.6% higher than that based on WoS data. These results reflect fundamental differences in coverage policies: the WoS is based on Eugene Garfield’s concept of covering a selective set of the most frequently used (cited) journals, whereas Scopus aims at broad coverage, more similar to large disciplinary literature databases. The paper also found that ‘classical’, WoS-based impact factors strongly correlate with a new, Scopus-based metric, the SCImago Journal Rank (SJR), one of a series of new indicators founded on earlier work by Pinski and Narin [Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of physics. Information Processing and Management, 12, 297–312] that weight citations according to the prestige of the citing journal (Spearman’s rho = 0.93). Four lines of future research are proposed.

6.
This research study evaluates the quality of articles published by Saudi and expatriate authors in foreign Library and Information Science (LIS) journals using three popular metrics for ranking journals—Journal Impact Factor (JIF), SCImago Journal Rank (SJR), and Google Scholar Metrics (GSM). The reason for using multiple metrics is to see how closely or differently journals are ranked by the three different methods of citation analysis. However, the 2012 JIF list of journals is too small, almost half the size of the SJR and GSM lists, which inhibited one-to-one comparison among the impact factors of the thirty-six journals selected by Saudi authors for publishing articles. Only seventeen journals were common to all three lists, limiting the usefulness of the data. A basic problem is that Saudi LIS authors generally lack the level of competency in the English language required to achieve publication in the most prominent LIS journals. The study will have implications for authors, directors, and deans of all types of academic libraries; chairmen and deans of library schools; and the Saudi Library Association. It is hoped that these entities will take the necessary steps to prepare and motivate both academics and practicing librarians to improve the quality of their research and publications and thus get published in higher-ranked journals.

7.
This paper focuses on a fresh and fair way to determine a ranking of science journals according to the number of citations received and articles published, the data used by ISI's SCI Journal Citation Reports to determine journal rankings by “impact factor.” Impact is considered a latent variable defined by a set of items (citations and articles published). The theoretical background is Item Response Theory, which suggests that, if we can understand how each item in a set of items operates with an object, then we can estimate a measure for the object. The Rasch model is the most common formulation of that theory. This technique is here applied to the citation and publication counts of 62 medical journals (the objects) to provide a Rasch measure for these journals, which is compared with the current “impact factor” computation.

8.
The paper articulates the problems of journal publication in a relatively small country such as Romania, where locally (i.e., nationally) published journals contain most of the national medical scientific output. The starting point was a study commissioned by the Cluj University of Medicine and Pharmacy Scientific Council for the purpose of obtaining an objectively ranked list of all current Romanian biomedical journals that could be used in evaluating the scientific activity of the university's academic staff. Sixty-five current biomedical journals were identified, of which more than half were new titles that had appeared over the past five years. None of these are included in the Science Citation Index or Journal Citation Reports (JCR). A set of criteria was used for ranking the journals: peer review, inclusion in international databases, publication time lag, language of articles and abstracts, journal-specific index, and domestic impact factor. The period covered, along with the tools and formulas used, is presented. The problems of Romanian biomedical journals, as well as ways of improving publishing standards, are discussed. Also emphasized are the necessity for increased awareness in the medical scholarly community and the role of the library in this respect.

9.
In this paper we attempt to assess the impact of journals in the field of forestry, in terms of bibliometric data, by providing an evaluation of forestry journals based on data envelopment analysis (DEA). In addition, based on the results of the conducted analysis, we provide suggestions for improving the impact of the journals in terms of widely accepted measures of journal citation impact, such as the journal impact factor (IF) and the journal h-index. More specifically, by modifying certain inputs associated with the productivity of forestry journals, we have illustrated how this method could be utilized to raise their efficiency, which in terms of research impact can then be translated into an increase in their bibliometric indices, such as the h-index, IF, or Eigenfactor score.
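Data envelopment analysis scores each journal (treated as a decision-making unit) against an efficiency frontier formed by its peers. The sketch below solves a generic input-oriented CCR model with SciPy's linear-programming routine; the choice of inputs and outputs (articles published as the input; citations and h-index as outputs) and the toy numbers are assumptions for illustration, not the specification used by the authors.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 journals, 1 input (articles published), 2 outputs (citations, h-index)
X = np.array([[100], [150], [80], [120]], dtype=float)                      # inputs,  shape (n, m)
Y = np.array([[900, 30], [1000, 28], [400, 15], [1100, 35]], dtype=float)   # outputs, shape (n, s)

def ccr_efficiency(o, X, Y):
    """Input-oriented CCR efficiency of unit o: minimize theta subject to
    sum_j lam_j * x_j <= theta * x_o,  sum_j lam_j * y_j >= y_o,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lam_1, ..., lam_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                           # input constraints
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):                           # output constraints (flipped to <=)
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    bounds = [(0, None)] + [(0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.fun

for o in range(X.shape[0]):
    print(f"Journal {o + 1}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```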

10.
A size-independent indicator of journals’ scientific prestige, the SCImago Journal Rank (SJR) indicator, is proposed that ranks scholarly journals based on citation weighting schemes and eigenvector centrality. It is designed for use with complex and heterogeneous citation networks such as Scopus. Its computation method is described, and the results of its implementation on the Scopus 2007 dataset are compared with those of an ad hoc Journal Impact Factor, JIF(3y), both in general and within specific scientific areas. Both the SJR indicator and the JIF distributions were found to fit a logarithmic law well. While the two metrics were strongly correlated, there were also major changes in rank. In addition, two general characteristics were observed. On the one hand, journals’ scientific influence or prestige as computed by the SJR indicator tended to be concentrated in fewer journals than the quantity of citations measured by JIF(3y). On the other, the distance between the top-ranked journals and the rest tended to be greater in the SJR ranking than in that of the JIF(3y), while the separation between the middle- and lower-ranked journals tended to be smaller.
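The eigenvector-centrality idea behind the SJR indicator, that a citation is worth more when the citing journal is itself prestigious, can be illustrated with a PageRank-style power iteration over a journal citation matrix. The sketch below is a generic prestige computation on toy data; it is not the actual SJR algorithm, which among other things normalizes by citable documents and controls for journal size.

```python
import numpy as np

# C[i, j] = citations from journal i to journal j (toy data; assumes every
# journal gives at least one citation, so row sums are non-zero)
C = np.array([
    [0, 30, 10],
    [20, 0, 5],
    [40, 15, 0],
], dtype=float)

def prestige(C, damping=0.85, iterations=100):
    """PageRank-style prestige scores: a citation transfers more weight when
    the citing journal itself is prestigious. Illustrative only, not SJR."""
    n = C.shape[0]
    T = C / C.sum(axis=1, keepdims=True)   # each journal spreads its prestige
    p = np.full(n, 1.0 / n)
    for _ in range(iterations):
        p = (1 - damping) / n + damping * (p @ T)
    return p / p.sum()

print(prestige(C))   # prestige scores of journals 0, 1, 2
```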

11.
The journal impact factor (JIF) has been questioned considerably over its half-century of development because of its inconsistency with reputation-based evaluations of scientific journals. This paper proposes a publication delay adjusted impact factor (PDAIF), which takes publication delay into account in order to reduce its negative effect on the impact factor calculation. Based on citation data collected from Journal Citation Reports and publication delay data extracted from the journals’ official websites, PDAIFs are calculated for journals from business-related disciplines. The results show that PDAIF values are, on average, more than 50% higher than JIF results. Furthermore, journal rankings based on the PDAIF show very high consistency with reputation-based journal rankings. Moreover, based on a case study of journals published by Elsevier and INFORMS, we find that the PDAIF yields a greater impact factor increase for journals with longer publication delays, since the adjustment removes that negative influence. Finally, insightful and practical suggestions for shortening publication delay are provided.

12.
Bibliometric data indexed by the Institute for Scientific Information were analyzed for 45 communication journals. Several measures were included to identify the most widely cited journals in the field, including (a) journal impact factor, (b) five-year journal impact, (c) article influence, and (d) journal relatedness. The results expand on the findings of Feeley (2008) [Feeley, T. H. (2008). A bibliometric analysis of communication journals from 2002 to 2005. Human Communication Research, 34, 505–520], whose analysis covered 19 journals from 2002 through 2005, with respect to the overall and within-field influence of communication journals. Results indicate stability in journal impact ratings over time, and several journals (e.g., Communication Research, Human Communication Research, Journal of Communication, Communication Monographs, and Communication Theory) are highly central in the communication journal citation network.

13.
Citation averages, and Impact Factors (IFs) in particular, are sensitive to sample size. Here, we apply the Central Limit Theorem to IFs to understand their scale-dependent behavior. For a journal of n randomly selected papers from a population of all papers, we expect from the Theorem that its IF fluctuates around the population average μ, and spans a range of values proportional to σ/√n, where σ² is the variance of the population's citation distribution. The 1/√n dependence has profound implications for IF rankings: The larger a journal, the narrower the range around μ where its IF lies. IF rankings therefore allocate an unfair advantage to smaller journals in the high IF ranks, and to larger journals in the low IF ranks. As a result, we expect a scale-dependent stratification of journals in IF rankings, whereby small journals occupy the top, middle, and bottom ranks; mid-sized journals occupy the middle ranks; and very large journals have IFs that asymptotically approach μ. We obtain qualitative and quantitative confirmation of these predictions by analyzing (i) the complete set of 166,498 IF & journal-size data pairs in the 1997–2016 Journal Citation Reports of Clarivate Analytics, (ii) the top-cited portion of 276,000 physics papers published in 2014–2015, and (iii) the citation distributions of an arbitrarily sampled list of physics journals. We conclude that the Central Limit Theorem is a good predictor of the IF range of actual journals, while sustained deviations from its predictions are a mark of true, non-random, citation impact. IF rankings are thus misleading unless one compares like-sized journals or adjusts for these effects. We propose the Φ index, a rescaled IF that accounts for size effects, and which can be readily generalized to account also for different citation practices across research fields. Our methodology applies to other citation averages that are used to compare research fields, university departments or countries in various types of rankings.
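The scale dependence described here can be reproduced numerically: draw "journals" of different sizes at random from a single skewed citation distribution and observe the spread of their impact factors shrink roughly as σ/√n. The simulation below is a minimal sketch using an assumed lognormal citation distribution, not the JCR or physics datasets analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed population of paper citation counts: one heavy-tailed (lognormal)
# distribution shared by all "journals"
population = rng.lognormal(mean=1.0, sigma=1.2, size=200_000)
mu, sigma = population.mean(), population.std()
print(f"population mean mu = {mu:.2f}")

for n in (20, 200, 2000):                       # journal sizes (papers per journal)
    # 2,000 random journals of n papers each; a journal's mean citation rate
    # plays the role of its impact factor
    ifs = rng.choice(population, size=(2_000, n)).mean(axis=1)
    print(f"n = {n:4d}: IF spread (std) = {ifs.std():.3f}, "
          f"CLT prediction sigma/sqrt(n) = {sigma / np.sqrt(n):.3f}")
```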

14.
This study analyzes the coverage of black studies journals as compared to women's studies journals in Web of Science, Academic Search Complete, and ArticleFirst. This study also examines black studies bibliographies and directories to identify journals that are suitable for inclusion in Journal Citation Reports and consequently Web of Science and evaluates seven black studies journals using the Thomson Reuters journal selection process.

15.
An Exploratory Analysis of the JCR Five-Year Impact Factor
Using Journal Citation Reports (JCR) data for 6015 journals, this study applies statistical methods to explore the characteristics of the five-year impact factor (IF5). The results show that IF5, as a representative average-based journal evaluation indicator, better captures the citation peak of most journals and on the whole conforms to a Bradford distribution. IF5 and the two-year impact factor (IF) are rank-correlated, yet also differ in a statistically significant way; the two measures give relatively consistent results for the better and poorer journals but diverge for the majority of mid-level journals. Finally, an Ifa index is proposed to measure the difference between the two impact factors, and an Ifb index to combine the evaluation information from both.

16.
This study determined how useful Google Scholar (GS) is for the evaluation of non-English journals, based on a sample of 150 Chinese journals listed in the Report on Chinese Academic Journals Evaluation by the Research Center for Chinese Science Evaluation (2013–2014). The study investigated two disciplines: Library, Information & Documentation Science and Metallurgical Engineering & Technology. We collected data from GS and the Chongqing VIP database to evaluate GS as a citation database for Chinese journals in terms of resource coverage, journal ranking, and citation data. We found that GS covered 100% of the sample journals but indexed 22% more article records than the number of articles actually published. The journal ranking produced by GS Metrics did not provide a dependable ranking of Chinese journals. GS did appear suitable as an alternative source of Chinese citation data, despite coverage problems including article duplication, citation omission, and potential citation duplication. The GS Metrics average citation counts correlated highly with traditional citation results, suggesting that they would be suitable for evaluating Chinese journals.

17.
This study presents a ranking of 182 academic journals in the field of artificial intelligence. For this purpose, the revealed preference approach, also referred to as a citation impact method, was utilized to collect data from Google Scholar. The list was developed based on three relatively novel indices: the h-index, the g-index, and the hc-index. These indices correlated almost perfectly with one another (ranging from 0.97 to 0.99), and they correlated strongly with Thomson's Journal Impact Factors (ranging from 0.64 to 0.69). It was concluded that journal longevity (years in print) is an important but not the only factor affecting an outlet's ranking position. Inclusion in Thomson's Journal Citation Reports is a must for a journal to be identified as a leading A+ or A level outlet; however, coverage by Thomson does not guarantee a high citation impact for an outlet. The presented list may be utilized by scholars who want to demonstrate their research output, by various academic committees, and by librarians and administrators who are not familiar with the AI research domain.

18.
The arbitrariness of the h-index becomes evident when one requires q × h instead of h citations as the threshold for the definition of the index, thus changing the size of the core of the most influential publications of a dataset. I analyze the citation records of 26 physicists in order to determine how much the prefactor q influences the ranking. Likewise, the arbitrariness of the highly-cited-publications indicator is due to the threshold value, given either as an absolute number of citations or as a percentage of highly cited papers. The analysis of the 26 citation records shows that the changes in the rankings in dependence on these thresholds are rather large and comparable with the respective changes for the h-index.
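The generalized index discussed here simply replaces the usual threshold of h citations with q × h: the index is the largest h such that at least h publications each have at least q × h citations. A minimal sketch on toy data (q = 1 recovers the ordinary h-index):

```python
def h_q_index(citations, q=1.0):
    """Largest h such that at least h papers have >= q * h citations each.
    q = 1 gives the ordinary h-index."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):   # the top i papers all have >= c citations
        if c >= q * i:
            h = i
    return h

# Toy citation record (hypothetical numbers); the core shrinks as q grows
record = [50, 30, 22, 14, 9, 6, 4, 2, 1]
for q in (1, 2, 3):
    print(f"q = {q}: index = {h_q_index(record, q)}")
```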

19.
Since its introduction, the Journal Impact Factor has probably been the most extensively adopted bibliometric indicator. Notwithstanding its well-known strengths and limits, it is still widely misused as a tool for evaluation, well beyond the purposes it was intended for. In order to shed further light on its nature, the present work studies how the correlation between the Journal Impact Factor and the (time-weighted) article Mean Received Citations (intended as a measure of journal performance) has evolved over time. It focuses on a sample of hard-science and social-science journals from the 1999 to 2010 time period. Correlation coefficients (Pearson's coefficient as well as Spearman's coefficient and Kendall's τa) are calculated and then tested against several null hypotheses. The results show that in most cases Journal Impact Factors and their yearly variations do not display a strong correlation with citedness. Differences also exist among scientific areas.

20.
Using the lists of journals indexed in the Current Index to Legal Periodicals from the last fifty years, this article analyzes the increase in the number of general law reviews, specialized law journals, student-edited journals, and peer-edited and refereed law journals during the last half-century. Data from the Current Index to Legal Periodicals were combined with further data collected from HeinOnline, American Bar Association (ABA) statistics, and U.S. News & World Report statistics. Comparison of these data not only shows a massive increase in the number of law journal titles being published but also suggests a correlation between the number of law journals published by a law school and its student population, length of time that it has been accredited by the ABA, and its U.S. News & World Report ranking. This article also contains a list of all law journals currently indexed by the Current Index to Legal Periodicals including each journal's date of initial publication, in addition to a list of all print student-edited law journals published by ABA-accredited and provisionally accredited U.S. law schools.
