Similar Documents
20 similar documents found.
1.
2.
Citation-based approaches, such as the impact factor and h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature across disciplines shows that the level of correlation between these citation-based approaches is domain-dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in a different journal ranking, we further combine the individual results and re-rank the journals using a combination method. These new ranking results can serve as a reference for researchers choosing their publication outlets.
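The combination method itself is not specified in the abstract; a minimal sketch of one standard rank-aggregation technique, a Borda count over hypothetical per-metric rankings (the journal names and orderings are illustrative, not from the paper):

```python
# Borda-count rank aggregation: combine journal rankings produced by
# different citation metrics (e.g., impact factor vs. h-index) into a
# single ranking. Journal names and input rankings are hypothetical.

def borda_aggregate(rankings: list[list[str]]) -> list[str]:
    """Each ranking lists journals best-first; a journal at position p
    in a ranking of n journals earns n - p points, and the point totals
    decide the combined order (ties keep first-seen order)."""
    n = max(len(r) for r in rankings)
    scores: dict[str, int] = {}
    for ranking in rankings:
        for position, journal in enumerate(ranking):
            scores[journal] = scores.get(journal, 0) + (n - position)
    return sorted(scores, key=scores.get, reverse=True)

by_impact_factor = ["J. ACM", "IEEE TPAMI", "TODS"]
by_h_index = ["IEEE TPAMI", "J. ACM", "TODS"]
print(borda_aggregate([by_impact_factor, by_h_index]))
# ['J. ACM', 'IEEE TPAMI', 'TODS'] -- the two leaders tie on points
```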

3.
The journal impact factor (JIF) has been questioned repeatedly over its half-century of development because of its inconsistency with reputation-based evaluations of scientific journals. This paper proposes a publication delay adjusted impact factor (PDAIF), which takes publication delay into account to reduce its distorting effect on the impact factor. Based on citation data collected from Journal Citation Reports and publication delay data extracted from the journals' official websites, PDAIFs are calculated for journals from business-related disciplines. The results show that PDAIF values are, on average, more than 50% higher than JIF values. Furthermore, journal rankings based on PDAIF show very high consistency with reputation-based journal rankings. Moreover, a case study of journals published by Elsevier and INFORMS shows that PDAIF yields a larger impact factor increase for journals with longer publication delays, precisely because it removes that negative influence. Finally, insightful and practical suggestions for shortening publication delay are provided.
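The abstract does not give the PDAIF formula; a minimal sketch of one plausible reading, shifting the standard two-year citation window back by the journal's average publication delay, on invented counts (the helper, the age-indexed counts, and the one-year delay are all assumptions):

```python
# Hypothetical publication-delay adjustment of a 2-year impact factor.
# cites_by_age[k] = citations received in the census year by items
# published k years earlier. All numbers are illustrative, not from
# the paper.

def two_year_if(cites_by_age: dict[int, int], items_by_age: dict[int, int],
                offset: int = 0) -> float:
    """Impact factor over a two-year window starting `offset` years
    further back than the standard window (item ages 1 and 2)."""
    ages = (1 + offset, 2 + offset)
    cites = sum(cites_by_age.get(a, 0) for a in ages)
    items = sum(items_by_age.get(a, 0) for a in ages)
    return cites / items if items else 0.0

cites = {1: 120, 2: 260, 3: 310, 4: 280}    # citations by item age
items = {1: 100, 2: 100, 3: 100, 4: 100}    # citable items by age
jif = two_year_if(cites, items)              # standard window: ages 1-2
pdaif = two_year_if(cites, items, offset=1)  # window shifted by a 1-year delay
print(f"JIF = {jif:.2f}, delay-adjusted = {pdaif:.2f}")
# JIF = 1.90, delay-adjusted = 2.85 -- higher, as the abstract reports
```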

4.
Although there are at least six dimensions of journal quality, Beall's List identifies predatory Open Access journals based almost entirely on their adherence to procedural norms; journals identified as predatory by one standard may be regarded as legitimate by others. This study examines the scholarly impact of the 58 accounting journals on Beall's List, calculating citations per article and estimating CiteScore percentiles using Google Scholar data for more than 13,000 articles published from 2015 through 2018. Most Beall's List accounting journals have only modest citation impact, with an average estimated CiteScore in the 11th percentile among Scopus accounting journals. Some have a substantially greater impact, however: six journals have estimated CiteScores at or above the 25th percentile, and two have scores at or above the 30th percentile. Moreover, there is considerable variation in citation impact among the articles within each journal, and high-impact articles (cited up to several hundred times) have appeared even in some of the Beall's List accounting journals with low citation rates. Further research is needed to determine how well the citing journals are integrated into the disciplinary citation network, that is, whether the citing journals are themselves reputable.

5.
6.
OBJECTIVE: To quantify the impact of Pakistani medical journals using the principles of citation analysis. METHODS: References of articles published in 2006 in three selected Pakistani medical journals were collected and examined. The number of citations for each Pakistani medical journal was totalled. The first ranking of journals was based on the total number of citations; the second on the 2006 impact factor; and the third on the 5-year impact factor. Self-citations were excluded from all three rankings. RESULTS: A total of 9,079 citations in 567 articles were examined. Forty-nine separate Pakistani medical journals were cited. The Journal of the Pakistan Medical Association remains at the top in all three rankings, while the Journal of the College of Physicians and Surgeons-Pakistan attains second position in the ranking based on the total number of citations. The Pakistan Journal of Medical Sciences moves to second position in the ranking based on the 2006 impact factor, and the Journal of Ayub Medical College, Abbottabad moves to second position in the ranking based on the 5-year impact factor. CONCLUSION: This study examined the citation pattern of Pakistani medical journals. The impact factor, despite its limitations, is a valid indicator of journal quality.

7.
8.
俞立平 (Yu Liping), 张矿伟 (Zhang Kuangwei). Library Journal (《图书馆杂志》), 2021(1): 93-103, 106
Academic journals' influence is currently evaluated mostly from a static perspective; research from a dynamic perspective is comparatively scarce. Drawing on the principle of Newton's second law, this article explores indicators and methods for evaluating the dynamic influence of academic journals from three angles, journal impact velocity, impact acceleration, and impact strength, and proposes the concept of journal impact strength. Taking CSSCI economics journals as the research object and using the CNKI citation database, the study combines correlation analysis, regression analysis, and Kappa consistency tests. The results show that journal impact strength can serve as a new indicator for journal evaluation; that it discriminates well among journals; and that it is positively correlated with the h-index and with average citations per paper. The article recommends evaluating journals jointly by impact velocity, impact acceleration, and impact strength.
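The paper's indicator definitions are not spelled out in the abstract; a minimal sketch of one natural reading of the Newton's-second-law analogy, with impact velocity and acceleration as first and second differences of yearly citations and impact strength as a mass-times-acceleration analogue (publication volume standing in for mass; all numbers illustrative):

```python
# Dynamic journal-impact indicators in the spirit of F = m * a.
# yearly_cites[t] = citations received in year t; papers = publication
# volume used as the journal's "mass". These definitions are an assumed
# reading of the abstract, and all numbers are invented.

def impact_dynamics(yearly_cites: list[float], papers: float):
    velocity = [c2 - c1 for c1, c2 in zip(yearly_cites, yearly_cites[1:])]
    acceleration = [v2 - v1 for v1, v2 in zip(velocity, velocity[1:])]
    strength = [papers * a for a in acceleration]  # F = m * a analogue
    return velocity, acceleration, strength

v, a, f = impact_dynamics([400, 460, 540, 650], papers=120)
print(v, a, f)  # [60, 80, 110] [20, 30] [2400, 3600]
```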

9.
One of the flaws of the journal impact factor (IF) is that it cannot be used to compare journals from different fields or multidisciplinary journals, because the IF differs significantly across research fields. This study proposes a new measure of journal performance that captures field-dependent citation characteristics. We view journal performance from the perspective of the efficiency of a journal's citation generation process. Alongside the conventional variables used in calculating the IF (the number of articles as an input and the total number of citations as an output), we additionally consider two field-dependent factors, citation density and citation dynamics, as inputs. We also separately capture the contributions of external citations and self-citations and incorporate their relative importance in measuring journal performance. To accommodate multiple inputs and outputs whose relationships are unknown, this study employs data envelopment analysis (DEA), a multi-factor productivity model for measuring the relative efficiency of decision-making units without any assumption of a production function. The resulting efficiency score, called DEA-IF, can then be used for the comparative evaluation of multidisciplinary journals' performance. A case study of industrial engineering journals illustrates how DEA-IF is measured and demonstrates its usefulness.
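The paper's DEA-IF has its own input/output specification; a minimal sketch of the underlying input-oriented CCR multiplier model, solved as a linear program with SciPy on hypothetical journal data (the choice of inputs and outputs below merely mimics the abstract's description):

```python
# Input-oriented CCR DEA in multiplier form, solved as a linear program.
# For each journal o: maximize u.y_o subject to v.x_o = 1 and
# u.y_j - v.x_j <= 0 for all journals j, with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs); returns CCR scores."""
    n, m = X.shape
    _, s = Y.shape
    scores = np.empty(n)
    for o in range(n):
        c = np.concatenate([-Y[o], np.zeros(m)])  # maximize u . y_o
        A_ub = np.hstack([Y, -X])                 # u.y_j - v.x_j <= 0
        A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m), method="highs")
        scores[o] = -res.fun
    return scores

# three journals; inputs = (articles, field citation density),
# outputs = (external citations, self-citations); data are invented
X = np.array([[100, 2.0], [150, 3.5], [80, 1.5]], dtype=float)
Y = np.array([[300, 40], [420, 90], [260, 20]], dtype=float)
print(dea_efficiency(X, Y).round(3))  # one efficiency score in (0, 1] each
```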

10.
Citation averages, and Impact Factors (IFs) in particular, are sensitive to sample size. Here, we apply the Central Limit Theorem to IFs to understand their scale-dependent behavior. For a journal of n randomly selected papers from a population of all papers, the Theorem predicts that its IF fluctuates around the population average μ and spans a range of values proportional to σ/√n, where σ² is the variance of the population's citation distribution. The 1/√n dependence has profound implications for IF rankings: the larger a journal, the narrower the range around μ where its IF lies. IF rankings therefore allocate an unfair advantage to smaller journals in the high IF ranks, and to larger journals in the low IF ranks. As a result, we expect a scale-dependent stratification of journals in IF rankings, whereby small journals occupy the top, middle, and bottom ranks; mid-sized journals occupy the middle ranks; and very large journals have IFs that asymptotically approach μ. We obtain qualitative and quantitative confirmation of these predictions by analyzing (i) the complete set of 166,498 IF & journal-size data pairs in the 1997–2016 Journal Citation Reports of Clarivate Analytics, (ii) the top-cited portion of 276,000 physics papers published in 2014–2015, and (iii) the citation distributions of an arbitrarily sampled list of physics journals. We conclude that the Central Limit Theorem is a good predictor of the IF range of actual journals, while sustained deviations from its predictions are a mark of true, non-random citation impact. IF rankings are thus misleading unless one compares like-sized journals or adjusts for these effects. We propose the Φ index, a rescaled IF that accounts for size effects and can readily be generalized to account for different citation practices across research fields. Our methodology applies to other citation averages used to compare research fields, university departments, or countries in various types of rankings.
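A quick Monte-Carlo check of the theorem's prediction is easy to run; this sketch uses a lognormal citation distribution as an illustrative stand-in for real citation data (the distribution parameters and journal sizes are arbitrary):

```python
# Monte-Carlo check of the Central-Limit-Theorem prediction that a
# journal's IF fluctuates around the population mean mu with spread
# proportional to sigma / sqrt(n).
import numpy as np

rng = np.random.default_rng(42)
population = rng.lognormal(mean=1.0, sigma=1.2, size=1_000_000)
mu, sigma = population.mean(), population.std()

for n in (50, 500, 5000):   # journal sizes (papers per citation window)
    # simulate 2000 journals of size n drawn at random from the population
    ifs = rng.choice(population, size=(2000, n)).mean(axis=1)
    print(f"n={n:5d}  IF spread={ifs.std():.3f}  "
          f"CLT predicts {sigma / np.sqrt(n):.3f}")
# the two columns agree, and the spread shrinks as 1/sqrt(n):
# small journals can land far above (or below) mu purely by chance
```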

11.
This study evaluates the quality of articles published by Saudi and expatriate authors in foreign Library and Information Science (LIS) journals using three popular journal-ranking metrics: Journal Impact Factor (JIF), SCImago Journal Rank (SJR), and Google Scholar Metrics (GSM). Multiple metrics are used to see how closely or differently journals are ranked by the three methods of citation analysis. However, the 2012 JIF list of journals is small, almost half the size of the SJR and GSM lists, which inhibited one-to-one comparison among the impact factors of the thirty-six journals in which Saudi authors published. Only seventeen journals were common to all three lists, limiting the usefulness of the data. A basic problem is that Saudi LIS authors generally lack the English-language competency required to publish in the most prominent LIS journals. The study has implications for authors, directors, and deans of all types of academic libraries; chairs and deans of library schools; and the Saudi Library Association. It is hoped that these entities will take the necessary steps to prepare and motivate both academics and practicing librarians to improve the quality of their research and publications and thus publish in higher-ranked journals.

12.
The use of field-normalized citation scores is a bibliometric standard. Different methods of field normalization are in use, but the choice of field-classification system also determines the resulting field-normalized citation scores. Using Web of Science data, we calculated field-normalized citation scores using the same formula but different field-classification systems to answer the question of whether the resulting scores are similar or different. Six field-classification systems were used: three based on citation relations, one on semantic similarity scores (i.e., a topical relatedness measure), one on journal sets, and one on intellectual classifications. Systems based on journal sets and intellectual classifications agree at at least the moderate level, as do two of the three systems based on citation relations. Larger differences were observed for the third citation-relation system and for semantic similarity scores. The main policy implication is that normalized citation impact scores, or rankings based on them, should not be compared without deeper knowledge of the classification systems used to derive them.
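A minimal sketch of the shared normalization formula, citations divided by the field's mean citations, shows why the choice of field-classification system matters: reassigning a paper to a different field changes its denominator (the field labels and counts below are hypothetical):

```python
# Field-normalized citation score: each paper's citations divided by the
# average citations of papers assigned to the same field. Swapping the
# classification system changes the assignments, hence the scores.
from collections import defaultdict

papers = [  # (paper id, field label from some classification system, cites)
    ("p1", "physics", 12), ("p2", "physics", 4),
    ("p3", "sociology", 3), ("p4", "sociology", 1),
]

field_cites: dict[str, list[int]] = defaultdict(list)
for _, field, cites in papers:
    field_cites[field].append(cites)
field_mean = {f: sum(v) / len(v) for f, v in field_cites.items()}

for pid, field, cites in papers:
    print(pid, field, round(cites / field_mean[field], 2))
# p1 physics 1.5, p2 physics 0.5, p3 sociology 1.5, p4 sociology 0.5:
# very different raw counts can yield identical normalized scores
```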

13.
A journal's academic influence, its manuscript acceptance standards, and the academic influence of the articles it publishes reinforce one another, so citations from higher-impact journals carry greater evaluative weight. Authors' selective citing and selective publishing mean that journals of lower academic influence are cited less often by higher-impact journals. The academic influence of a cited journal can therefore be evaluated by jointly examining the academic influence of the citing journals that make up its citation image and the frequencies with which they cite it. Taking the 2010 citation images of the comprehensive journals Nature and Science as examples, and using the journal impact factor as the initial measure of academic influence, a calculation method is proposed that weights citing journals' impact factors by their citing frequencies, so as to evaluate journals through a quantified citation image.

14.
The journal impact factor is not comparable among fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential of the journal's topic, and we employ this citation potential to normalize the journal impact factor and make it comparable between scientific fields. An empirical application comparing several impact indicators with our topic-normalized impact factor on a set of 224 journals from four different fields shows that our normalization, using the citation potential of the journal's topic, reduces the between-group variance relative to the within-group variance in a higher proportion than the other indicators analyzed. The effect of journal self-citations on the normalization process is also studied.

15.
This paper explores a new indicator of journal citation impact, denoted source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field: the frequency at which authors cite other papers in their reference lists, how rapidly citation impact matures, and the extent to which the database used for the assessment covers the field's literature. It further develops Eugene Garfield's notion of a field's 'citation potential', defined as the average length of reference lists in a field and determining the probability of being cited, and his point that fair performance assessments must correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper to the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories (groupings of journals sharing a research field) or disciplines (e.g., journals in mathematics, engineering, and the social sciences tend to have lower values than titles in the life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus.
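A minimal sketch of the SNIP ratio as defined in the abstract, citations per paper divided by the citation potential (mean reference-list length) of the citing papers; Scopus-specific details such as citation windows and database coverage rules are omitted, and all numbers are invented:

```python
# Simplified SNIP: a journal's citations per paper divided by the
# citation potential of its subject field, where the field is the set
# of papers citing the journal and citation potential is their mean
# reference-list length.

def snip(cites_to_journal: int, papers_in_journal: int,
         citing_papers_ref_lengths: list[int]) -> float:
    raw_impact = cites_to_journal / papers_in_journal
    citation_potential = (sum(citing_papers_ref_lengths)
                          / len(citing_papers_ref_lengths))
    return raw_impact / citation_potential

# a mathematics journal: short reference lists in its citing field
print(snip(400, 200, [12, 15, 9, 14, 10]))   # 2.0 / 12.0 -> ~0.17
# a life-sciences journal: long reference lists in its citing field
print(snip(1200, 200, [48, 55, 40, 52, 45])) # 6.0 / 48.0 -> 0.125
# the raw impact differs by 3x, but SNIP puts the two on one scale
```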

16.
17.
Journal metrics are employed for the assessment of scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, no single fixed impact maturity time is optimal for all fields: in some, two years performs well, whereas others require three or more. A problem therefore arises when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the fixed 2-year window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance relative to the within-group variance in a random sample of about six hundred journals from eight different fields.
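A minimal sketch of the three indicators on invented, slow-maturing citation counts; the helper and data are illustrative, but the 2M-JIF logic (take the best two-year rolling window among item ages one to five) follows the abstract:

```python
# 2-JIF, 5-JIF, and the proposed 2-year maximum JIF (2M-JIF): the 2M-JIF
# scans all two-year rolling windows instead of fixing item ages 1-2.

def window_if(cites_by_age: dict[int, int], items_by_age: dict[int, int],
              ages: range) -> float:
    cites = sum(cites_by_age.get(a, 0) for a in ages)
    items = sum(items_by_age.get(a, 0) for a in ages)
    return cites / items if items else 0.0

# a slow-maturing field: citations peak at item ages 3-4
cites = {1: 80, 2: 180, 3: 260, 4: 240, 5: 150}
items = {a: 100 for a in range(1, 6)}

jif2 = window_if(cites, items, range(1, 3))   # fixed window, ages 1-2
jif5 = window_if(cites, items, range(1, 6))   # fixed window, ages 1-5
jif2m = max(window_if(cites, items, range(s, s + 2)) for s in range(1, 5))
print(jif2, jif5, jif2m)  # 1.3  1.82  2.5 (best window is ages 3-4)
```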

18.
We performed a citation analysis of Web of Science publications comprising more than 63 million articles and over a billion citations across 254 subjects from 1981 to 2020. We proposed the Article's Scientific Prestige (ASP) metric and compared it to the number of citations (#Cit) and to journal grade in measuring the scientific impact of individual articles in this large-scale, hierarchical, multi-disciplined citation network. In contrast to #Cit, ASP, which is computed from eigenvector centrality, considers both direct and indirect citations and provides a steady-state evaluation across disciplines. We found that ASP and #Cit are not aligned for most articles, with a growing mismatch among less-cited articles. While both metrics are reliable for evaluating the prestige of articles such as Nobel Prize winning articles, ASP tends to provide more persuasive rankings than #Cit when articles are not highly cited. The journal grade, which is ultimately determined by a few highly cited articles, is unable to properly reflect the scientific impact of individual articles. The numbers of references and coauthors are less relevant to scientific impact, but subjects do make a difference.
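ASP's exact computation is not given in the abstract beyond its basis in eigenvector centrality; a minimal sketch of a damped power iteration (PageRank-style) on a tiny invented citation network shows how indirect citations separate papers with equal citation counts:

```python
# Eigenvector-centrality scoring of articles in a citation network: an
# article is prestigious if cited by prestigious articles, so indirect
# citations count too. Tiny hypothetical network; damped power iteration.
import numpy as np

papers = ["a", "b", "c", "d"]
# cites[i][j] = 1 if paper i cites paper j
cites = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=float)

# column-stochastic transition matrix: credit flows from citer to cited
outdeg = cites.sum(axis=1, keepdims=True)
M = np.divide(cites, outdeg, out=np.zeros_like(cites), where=outdeg > 0).T

n, d = len(papers), 0.85
score = np.full(n, 1 / n)
for _ in range(100):                       # power iteration
    score = (1 - d) / n + d * (M @ score)
print(dict(zip(papers, score.round(3))))
# b and d each receive one direct citation, but d scores higher because
# its citation comes from the well-cited c -- the direct/indirect
# distinction that makes ASP diverge from #Cit
```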

19.
This guide describes several information sources that can be used to assist faculty interested in quantitative and qualitative assessments of journal reputation and scholarly impact: Journal Citation Reports, Eigenfactor, Google Scholar Metrics, Elsevier Journal Metrics, Excellence in Research for Australia, Cabell's International, Web of Science, Scopus, Google Scholar, and Beall's List. It also introduces the indicators most often used to represent citation impact: impact factor, article influence score, eigenfactor, h5-index, source normalized impact per paper, impact per publication, and SCImago journal rank. Methods of assessing the influence of individual articles are also presented, along with strategies for the identification of predatory or low-quality journals.

20.
The Serials Librarian (《期刊图书馆员》), 2013, 64(3-4): 51-73
ABSTRACT

This overview of the core concept applied to journals defines the relevant terminology and cites specific examples of core lists. Ten approaches for determining core journals (subjective judgment, use, indexing coverage, overlapping library holdings, citation data, citation network/co-citation analysis, production of articles, Bradford's Law, faculty publication data, and multiple-criteria methods) are reviewed, and the practical applications of core journal lists are explained. Theoretical and practical problems associated with the core concept and core journal lists are discussed, and a taxonomy for classifying core journal lists is outlined.
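Of the ten approaches, Bradford's Law lends itself to a compact illustration; a minimal sketch that splits journals, ranked by article yield, into three zones of roughly equal article output (the yields are invented):

```python
# Bradford's Law zone analysis: rank journals by article yield and split
# them into three zones each holding roughly a third of the articles;
# the journal counts per zone then grow roughly geometrically (1 : n : n^2).

def bradford_zones(yields: list[int], zones: int = 3) -> list[int]:
    ranked = sorted(yields, reverse=True)
    target = sum(ranked) / zones           # articles per zone
    counts: list[int] = []
    acc, journals = 0.0, 0
    for y in ranked:
        acc += y
        journals += 1
        if acc >= target * (len(counts) + 1) and len(counts) < zones - 1:
            counts.append(journals)        # close this zone
            journals = 0
    counts.append(journals)                # remaining journals: last zone
    return counts

# a small core of productive journals and a long tail
yields = [70, 50, 40, 32, 26, 22, 20, 18, 16, 15, 14, 13, 12, 12]
print(bradford_zones(yields))  # [2, 4, 8] -- journals per zone, ~1 : 2 : 2^2
```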
