Similar Documents
20 similar documents found.
1.
This study compares the two-year impact factor (JIF2), JIF2 without journal self-citation (JIF2_noJSC), the five-year impact factor (JIF5), the eigenfactor score, and the article influence score (AIS), and investigates their relative changes over time. JIF2 increased faster than JIF5 overall. The relative change between JIF2 and JIF2_noJSC shows that JCR's control over journal self-citation is effective to some extent. JIF5 is more discriminative than JIF2. The correlation between JIF5 and AIS is stronger than that between JIF5 and the eigenfactor score. The relative change in journal rank under different indicators varies with the ratio of the indicators and can reach 60% of the number of journals in a subject category. There are subject-category discrepancies in the average AIS and its change over time. By screening journals according to variation in the JIF2/JIF5 ratio within individual subject categories, we found that journals in the same subject category can have considerably different citation patterns. To provide a fair comparison of journals within individual subject categories, we argue that it is better to replace JIF2 with the ready-made JIF5 when ranking journals.

2.
Applicability of journal citation identity indicators in journal evaluation (Cited: 1; self-citations: 1; citations by others: 0)
Taking 18 library and information science journals from CSSCI as an example, this paper analyzes the citation identity of each journal based on the references of all papers they published in 2009, with data drawn from the CSSCI database. The results show that citation identity indicators (number of references, references per paper, proportion of English-language references, citation breadth, self-citation rate, citing half-life, journal concentration factor, influence of identified journals, etc.) are not clearly correlated with the quantitative and qualitative evaluation indicators of CSSCI source journals. However, these indicators can reflect the content characteristics and preferences of a journal's papers, the extent to which foreign scientific literature and literature from other disciplines are used, the journal's editorial positioning, and the development pattern of the discipline, and are therefore of some value in the comprehensive evaluation of journals.

3.
The journal impact factor (JIF) has been questioned considerably during its development over the past half-century because of its inconsistency with scholarly reputation evaluations of scientific journals. This paper proposes a publication delay adjusted impact factor (PDAIF), which takes publication delay into consideration to reduce its negative effect on the accuracy of the impact factor. Based on citation data collected from Journal Citation Reports and publication delay data extracted from the journals' official websites, PDAIFs are calculated for journals from business-related disciplines. The results show that PDAIF values are, on average, more than 50% higher than JIF results. Furthermore, journal rankings based on PDAIF show very high consistency with reputation-based journal rankings. Moreover, based on a case study of journals published by ELSEVIER and INFORMS, we find that PDAIF brings a greater impact factor increase for journals with longer publication delays, because it removes that negative influence. Finally, insightful and practical suggestions for shortening publication delay are provided.
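The abstract does not spell out the adjustment formula. A minimal sketch, assuming PDAIF shifts the usual two-year citable window back by the journal's average publication delay (the function and parameter names are ours, not the paper's):

```python
def pdaif(citations_by_pub_year, articles_by_pub_year, census_year, delay_years):
    """Hypothetical sketch of a delay-adjusted two-year impact factor.

    Like JIF2, but the citable window is shifted back by the journal's
    average publication delay (in whole years), so articles are credited
    for the years in which they were actually visible to readers.
    """
    window = [census_year - 1 - delay_years, census_year - 2 - delay_years]
    cites = sum(citations_by_pub_year.get(y, 0) for y in window)
    items = sum(articles_by_pub_year.get(y, 0) for y in window)
    return cites / items if items else 0.0
```

With a one-year delay, the window slides from {2020, 2019} to {2019, 2018}, which raises the value for journals whose older cohorts are better cited, consistent with the abstract's finding that longer delays produce larger PDAIF gains.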

4.
The paper introduces a new journal impact measure called The Reference Return Ratio (3R). Unlike the traditional Journal Impact Factor (JIF), which is based on calculations of publications and citations, the new measure is based on calculations of bibliographic investments (references) and returns (citations). A comparative study of the two measures shows a strong relationship between the 3R and the JIF. Yet, the 3R appears to correct for citation habits, citation dynamics, and composition of document types – problems that typically are raised against the JIF. In addition, contrary to traditional impact measures, the 3R cannot be manipulated ad infinitum through journal self-citations.

5.
This study establishes a technological impact factor (TIF), derived from the journal impact factor (JIF), to evaluate journals from the perspective of practical innovation. The TIF measures the influence of journal articles on patents: the number of patents citing a journal divided by the number of articles published in that journal. TIF values over five-year (TIF5) and ten-year (TIF10) periods at the journal level, and aggregated values (TIFAGG_5 and TIFAGG_10) at the category level, are computed and compared with the JIF. The results reveal that journals with higher TIF values show varied performance in the JCR, whereas the top ten journals by JIF5 perform consistently well on the TIFs. In three selected categories – Electrical & Electronic Engineering, Research & Experimental Medicine, and Organic Chemistry – TIF5 and TIF10 values are not strongly correlated with JIF5. Thus, TIFs can serve as a new indicator for evaluating journals from the perspective of practical innovation.
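The TIF formula stated in the abstract (patents citing a journal, divided by articles the journal published in the same window) is a plain ratio; a minimal sketch with hypothetical function and parameter names:

```python
def technological_impact_factor(patent_citations, articles_published):
    """TIF as described in the abstract: the number of patents citing a
    journal's articles divided by the number of articles the journal
    published in the chosen window (e.g. 5 or 10 years)."""
    return patent_citations / articles_published if articles_published else 0.0
```

TIF5 and TIF10 differ only in the window over which `patent_citations` and `articles_published` are counted.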

6.
A journal's academic influence, its acceptance standards for manuscripts, and the academic influence of the papers it publishes reinforce one another, so citations from journals of higher influence carry greater evaluative weight. Because authors are selective about which journals they cite and where they publish, journals of lower academic influence are cited less often by high-influence journals. The academic influence of a cited journal can therefore be evaluated by jointly examining the academic influence of the citing journals that make up its citation image and their citing frequencies. Taking the 2010 citation images of the multidisciplinary journals Nature and Science as examples, and using the journal impact factor as an initial estimate of academic influence, this paper proposes a method that weights the impact factors of citing journals by their citing frequencies, so that journals can be evaluated through a quantified citation image.

7.
[Purpose/Significance] To address the incomplete coverage of influencing factors in existing evaluation methods, this paper proposes a comprehensive journal evaluation method that combines citation counts, the heterogeneity of citation timing, and the evenness of the citation distribution. [Method/Process] First, a weighted citations-per-paper measure is computed based on the heterogeneity of citation timing; second, an improved Theil index is used to measure the evenness of the citation distribution; finally, the entropy weight method and grey relational analysis are used to combine the two into a composite journal evaluation index, the Relevance Index (RI). [Result/Conclusion] An empirical analysis of 40 foreign library and information science journals shows that, compared with the JIF and the h-index, the RI index accounts for the heterogeneity of citation timing, is more timely, and allocates weights more reasonably; it also takes the evenness of the citation distribution into account, can identify journals with strong average influence, and yields more objective and comprehensive evaluation results.

8.
Although there are at least six dimensions of journal quality, Beall's List identifies predatory Open Access journals based almost entirely on their adherence to procedural norms. The journals identified as predatory by one standard may be regarded as legitimate by other standards. This study examines the scholarly impact of the 58 accounting journals on Beall's List, calculating citations per article and estimating CiteScore percentile using Google Scholar data for more than 13,000 articles published from 2015 through 2018. Most Beall's List accounting journals have only modest citation impact, with an average estimated CiteScore in the 11th percentile among Scopus accounting journals. Some have a substantially greater impact, however. Six journals have estimated CiteScores at or above the 25th percentile, and two have scores at or above the 30th percentile. Moreover, there is considerable variation in citation impact among the articles within each journal, and high-impact articles (cited up to several hundred times) have appeared even in some of the Beall's List accounting journals with low citation rates. Further research is needed to determine how well the citing journals are integrated into the disciplinary citation network—whether the citing journals are themselves reputable or not.

9.
This study uses citation data and survey data for 55 library and information science journals to identify three factors underlying a set of 11 journal ranking metrics (six citation metrics and five stated preference metrics). The three factors—three composite rankings—represent (1) the citation impact of a typical article, (2) subjective reputation, and (3) the citation impact of the journal as a whole (all articles combined). Together, they account for 77% of the common variance within the set of 11 metrics. Older journals (those founded before 1953) and nonprofit journals tend to have high reputation scores relative to their citation impact. Unlike previous research, this investigation shows no clear evidence of a distinction between the journals of greatest importance to scholars and those of greatest importance to practitioners. Neither group's subjective journal rankings are closely related to citation impact.

10.
The journal impact factor (JIF) is the average of the number of citations of the papers published in a journal, calculated according to a specific formula; it is extensively used for the evaluation of research and researchers. The method assumes that all papers in a journal have the same scientific merit, which is measured by the JIF of the publishing journal. This implies that the number of citations measures scientific merits but the JIF does not evaluate each individual paper by its own number of citations. Therefore, in the comparative evaluation of two papers, the use of the JIF implies a risk of failure, which occurs when a paper in the journal with the lower JIF is compared to another with fewer citations in the journal with the higher JIF. To quantify this risk of failure, this study calculates the failure probabilities, taking advantage of the lognormal distribution of citations. In two journals whose JIFs are ten-fold different, the failure probability is low. However, in most cases when two papers are compared, the JIFs of the journals are not so different. Then, the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.
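Under the lognormal citation model the abstract relies on, the probability that a random paper from the lower-JIF journal out-cites a random paper from the higher-JIF journal has a simple closed form, Phi((mu_low - mu_high) / sqrt(sigma_low^2 + sigma_high^2)). A hedged sketch (the parameter names and this reconstruction are ours, not the paper's code):

```python
from math import erf, sqrt

def failure_probability(mu_low, sigma_low, mu_high, sigma_high):
    """P(a paper from the lower-JIF journal gets more citations than one
    from the higher-JIF journal), assuming log-citations in each journal
    are normal with the given mean and standard deviation. The difference
    of two normals is normal, so the probability is a normal CDF value."""
    z = (mu_low - mu_high) / sqrt(sigma_low**2 + sigma_high**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

When the two log-citation means coincide, the value is exactly 0.5, the "coin flipping" case the abstract describes; it shrinks only as the gap between the journals grows relative to the spread of citations.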

11.
A size-independent indicator of journals' scientific prestige, the SCImago Journal Rank (SJR) indicator, is proposed that ranks scholarly journals based on citation weighting schemes and eigenvector centrality. It is designed for use with complex and heterogeneous citation networks such as Scopus. Its computation method is described, and the results of its implementation on the Scopus 2007 dataset are compared with those of an ad hoc Journal Impact Factor, JIF(3y), both generally and within specific scientific areas. Both the SJR indicator and the JIF distributions were found to fit a logarithmic law well. While the two metrics were strongly correlated, there were also major changes in rank. In addition, two general characteristics were observed. On the one hand, journals' scientific influence or prestige as computed by the SJR indicator tended to be concentrated in fewer journals than the quantity of citation measured by JIF(3y). On the other, the distance between the top-ranked journals and the rest tended to be greater in the SJR ranking than in that of the JIF(3y), while the separation between the middle and lower ranked journals tended to be smaller.
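The abstract does not reproduce the SJR computation itself; the following is a hedged, pure-Python sketch of a generic eigenvector-centrality prestige score in the same spirit (damped, citation-weighted score transfer over a journal citation matrix), not the published SJR formula:

```python
def prestige_scores(citations, d=0.85, iters=200):
    """Power-iteration sketch of eigenvector-style journal prestige.

    citations[i][j] is the number of citations journal i gives to journal j.
    Each journal's current score is redistributed along its outgoing
    citations, with damping factor d; scores are normalized to sum to 1.
    """
    n = len(citations)
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n
        for i, row in enumerate(citations):
            out = sum(row)
            if out == 0:  # dangling journal: spread its score evenly
                for j in range(n):
                    new[j] += d * scores[i] / n
            else:
                for j, c in enumerate(row):
                    new[j] += d * scores[i] * c / out
        scores = new
    total = sum(scores)
    return [s / total for s in scores]
```

Because scores flow along citations, a journal cited heavily by already-prestigious journals ends up with a higher score than one receiving the same number of citations from low-scoring journals, which is the concentration effect the abstract reports for SJR versus JIF(3y).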

12.
This research study evaluates the quality of articles published by Saudi and expatriate authors in foreign Library and Information Science (LIS) journals using three popular metrics for ranking journals—Journal Impact Factor (JIF), SCImago Journal Rank (SJR), and Google Scholar Metrics (GSM). The reason for using multiple metrics is to see how closely or differently journals are ranked by the three different methods of citation analysis. However, the 2012 JIF list of journals is too small, almost half the size of the SJR and GSM lists, which inhibited one-to-one comparison among the impact factors of the thirty-six journals selected by Saudi authors for publishing articles. Only seventeen journals were found common to all the three lists, limiting the usefulness of the data. A basic problem is that Saudi LIS authors generally lack the level of competency in the English language required to achieve publication in the most prominent LIS journals. The study will have implications for authors, directors, and deans of all types of academic libraries; chairmen and deans of library schools; and the Saudi Library Association. Hopefully these entities will take necessary steps to prepare and motivate both academics and practicing librarians to improve the quality of their research and publications and thus get published in higher ranked journals.

13.
Research evaluation based on bibliometrics is prevalent in modern science. However, the usefulness of citation counts for measuring research impact has been questioned for many years; empirical studies have demonstrated that the probability of being cited may depend on many factors unrelated to the accepted conventions of scholarly publishing. The current study investigates the relationship between the performance of universities in terms of field-normalized citation impact (NCS) and four factors with possible influences on the citation impact of single papers (FICs): journal impact factor (JIF), number of pages, number of authors, and number of cited references. The study is based on articles and reviews published by 49 German universities in 2000, 2005 and 2010. Multilevel regression models were estimated, since the data are structured on two levels: the single paper and the university. The results point to weak relationships between NCSs and the number of authors, number of cited references, number of pages, and JIF, and the effects of all FICs on NCSs are similar in universities with high and low NCSs. Thus, although other studies have revealed that FICs may matter at the single-paper level, the results of this study demonstrate that they are not effective at the aggregated level (i.e., the institutional NCS level).

14.
Publication delay and priority digital publishing (Cited: 1; self-citations: 1; citations by others: 0)
Li Jiang, Wu Junhong. Acta Editologica (《编辑学报》), 2011, 23(4): 357-358.
This paper divides publication delay into reviewing delay and waiting-for-print delay, and explains the delays that arise at each stage from submission to publication, along with their negative effects. It analyzes the function and significance of priority digital publishing in greatly shortening publication delay; statistics show that priority digital publishing can raise a journal's impact factor by about 15%. Several open questions concerning priority digital publishing are raised.

15.
Fu Zhongjing. Publishing Science (《出版科学》), 2016, 24(4): 77-82.
Data on highly cited retracted papers were collected from the Web of Science (WoS) database to analyze their distribution patterns and citation characteristics. The top 20% most highly cited retracted papers comprised 430 papers distributed across 31 countries; the multidisciplinary field had the most, and 35 journals had more than 3 such papers. The retraction delay of highly cited retracted papers was weakly correlated with their total citations (P = 0.014) and strongly correlated with their pre-retraction citations (P = 0.000). Journal IF was positively correlated with the number of retracted papers, their total citations, and their citations per paper (P = 0.017, P = 0.000, P = 0.005). Mean annual citations after retraction were lower than before (P = 0.000). The study indicates that retracted papers published in high-IF journals have a greater negative impact on the academic community, that longer retraction delays increase pre-retraction citations, and that retraction has some cleansing effect, though not yet a satisfactory one; scholars at home and abroad are advised to pay closer attention to retracted papers and their adverse effects.

16.
The purpose of the study was to investigate and compare the social media (SM) impact of 273 South African Post-Secondary Education accredited journals, which are recognised by the Department of Higher Education and Training of South Africa for purposes of financial support. We used multiple sources to extract data for the study, namely Altmetric.com, Google Scholar (GS), Scopus (through SCImago) and the Thomson Reuters (TR) Journal Citation Reports (JCR). Data were analysed to determine South African journals' presence in and impact on SM, and to contrast SM visibility and impact with citation impact in GS, JCR and Scopus. The Spearman correlation test was performed to compare the impact of the journals on SM and other sources. The results reveal that 2923 articles published in 122 of the 273 South African (SA) journals received at least one mention in SM; the most commonly used SM platforms were Twitter and Facebook; the journals indexed in TR's citation indexes and Scopus performed much better, in terms of their average altmetrics, than non-TR and non-Scopus indexed journals; and there were weak to moderate relationships among the different types of altmetrics and citation-based measures, implying that journals' impact on SM differs in kind from the scholarly impact reflected in citation databases. In conclusion, South African journals' impact on SM, as in countries with similar economies, is minimal but has shown signs of growth.

17.
OBJECTIVE: To quantify the impact of Pakistani medical journals using the principles of citation analysis. METHODS: References of articles published in 2006 in three selected Pakistani medical journals were collected and examined. The number of citations for each Pakistani medical journal was totalled. The first ranking of journals was based on the total number of citations; the second on the 2006 impact factor; and the third on the 5-year impact factor. Self-citations were excluded from all three rankings. RESULTS: A total of 9079 citations in 567 articles were examined. Forty-nine separate Pakistani medical journals were cited. The Journal of the Pakistan Medical Association remains at the top in all three rankings, while the Journal of College of Physicians and Surgeons-Pakistan attains second position in the ranking based on the total number of citations. The Pakistan Journal of Medical Sciences moves to second position in the ranking based on the 2006 impact factor, and the Journal of Ayub Medical College, Abbottabad moves to second position in the ranking based on the 5-year impact factor. CONCLUSION: This study examined the citation pattern of Pakistani medical journals. The impact factor, despite its limitations, is a valid indicator of journal quality.

18.
19.
Journal weighted impact factor: A proposal (Cited: 3; self-citations: 0; citations by others: 3)
The impact factor of a journal reflects the frequency with which the journal's articles are cited, and it is the best available measure of journal quality. In calculating the impact factor, we simply count citations, no matter how prestigious the citing journal is. We think that the impact factor, as a measure of journal quality, may be improved if its calculation takes into account not only the number of citations but also a factor reflecting the prestige of the citing journals relative to the cited journal. In the proposed "weighted impact factor," each citation carries a coefficient (weight) whose value is 1 if the citing journal is as prestigious as the cited journal, greater than 1 if the citing journal is more prestigious, and less than 1 if the citing journal has a lower standing. In this way, journals receiving many citations from prestigious journals are considered prestigious themselves, while those cited mainly by low-status journals receive little credit. By considering both the number of citations and the prestige of the citing journals, we expect the weighted impact factor to be a better scientometric measure of journal quality.
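The abstract specifies how the weight should behave (1 for equal prestige, greater than 1 or less than 1 otherwise) but not its exact form. One illustrative choice satisfying those constraints is the ratio of the citing journal's JIF to the cited journal's JIF, sketched below with hypothetical names:

```python
def weighted_impact_factor(cited_jif, citing, citable_items):
    """Sketch of the proposed weighted impact factor.

    citing: list of (citing_journal_jif, n_citations) pairs.
    Each citation is weighted by citing_jif / cited_jif, which is 1 when
    the journals are equally prestigious, > 1 when the citing journal is
    more prestigious, and < 1 when it is less prestigious, as the
    abstract requires. The ratio itself is our illustrative choice.
    """
    weighted = sum((jif / cited_jif) * n for jif, n in citing)
    return weighted / citable_items if citable_items else 0.0
```

If all citations come from journals with the same JIF as the cited journal, the result reduces to the ordinary impact factor; citations from higher-JIF journals inflate it and citations from lower-JIF journals deflate it.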

20.
One of the flaws of the journal impact factor (IF) is that it cannot be used to compare journals from different fields or multidisciplinary journals because the IF differs significantly across research fields. This study proposes a new measure of journal performance that captures field-different citation characteristics. We view journal performance from the perspective of the efficiency of a journal's citation generation process. Together with the conventional variables used in calculating the IF, the number of articles as an input and the number of total citations as an output, we additionally consider the two field-different factors, citation density and citation dynamics, as inputs. We also separately capture the contribution of external citations and self-citations and incorporate their relative importance in measuring journal performance. To accommodate multiple inputs and outputs whose relationships are unknown, this study employs data envelopment analysis (DEA), a multi-factor productivity model for measuring the relative efficiency of decision-making units without any assumption of a production function. The resulting efficiency score, called DEA-IF, can then be used for the comparative evaluation of multidisciplinary journals’ performance. A case study example of industrial engineering journals is provided to illustrate how to measure DEA-IF and its usefulness.

