20 similar documents found.
1.
《The Journal of Academic Librarianship》2017,43(5):434-442
This study uses citation data and survey data for 55 library and information science journals to identify three factors underlying a set of 11 journal ranking metrics (six citation metrics and five stated preference metrics). The three factors—three composite rankings—represent (1) the citation impact of a typical article, (2) subjective reputation, and (3) the citation impact of the journal as a whole (all articles combined). Together, they account for 77% of the common variance within the set of 11 metrics. Older journals (those founded before 1953) and nonprofit journals tend to have high reputation scores relative to their citation impact. Unlike previous research, this investigation shows no clear evidence of a distinction between the journals of greatest importance to scholars and those of greatest importance to practitioners. Neither group's subjective journal rankings are closely related to citation impact.
2.
《Journal of Informetrics》2014,8(2):318-328
Citation-based approaches, such as the impact factor and h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature across disciplines shows that the level of correlation between these citation-based approaches is domain dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and re-rank the journals using a combination method. These new ranking results can serve as a reference for researchers choosing their publication outlets.
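As an illustration of the kind of correlation analysis this abstract describes, the following sketch computes a Spearman rank correlation between two journal metrics. All metric values are made up; the point is the method, not the numbers.

```python
# Hypothetical metric values for five journals.
impact_factor = [4.1, 2.3, 1.8, 3.0, 0.9]
h_index = [45, 60, 22, 30, 12]

def ranks(values):
    """Rank values descending (1 = largest), averaging tied ranks."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    rank = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a group of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group (1-based)
        for k in range(i, j + 1):
            rank[order[k]] = avg
        i = j + 1
    return rank

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

print(round(spearman(impact_factor, h_index), 3))  # → 0.7
```

A value near 1 would mean the two metrics induce nearly the same journal ranking; the paper reports that in practice the correlation is very low for computer science journals.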
3.
The "regulation" of the impact factor through self-citation in natural science journals (Total citations: 14; self-citations: 0; citations by others: 14)
Using the Chinese S&T Journal Citation Reports, this paper recalculates the impact factors of journals in several disciplines after removing self-citations, and compares the impact factors and journal rankings before and after removal to examine the effect of journal self-citation on the impact factor and on journal rankings. The investigation finds that excessive self-citation by certain journals has already distorted journal rankings. Finally, some suggestions are offered on how to curb this phenomenon.
4.
OBJECTIVE: To quantify the impact of Pakistani medical journals using the principles of citation analysis. METHODS: References of articles published in 2006 in three selected Pakistani medical journals were collected and examined. The number of citations for each Pakistani medical journal was totalled. The first ranking of journals was based on the total number of citations; the second was based on the 2006 impact factor; and the third was based on the 5-year impact factor. Self-citations were excluded from all three rankings. RESULTS: A total of 9079 citations in 567 articles were examined. Forty-nine separate Pakistani medical journals were cited. The Journal of the Pakistan Medical Association remains on top in all three rankings, while the Journal of College of Physicians and Surgeons-Pakistan attains second position in the ranking based on the total number of citations. The Pakistan Journal of Medical Sciences moves to second position in the ranking based on the 2006 impact factor. The Journal of Ayub Medical College, Abbottabad moves to second position in the ranking based on the 5-year impact factor. CONCLUSION: This study examined the citation pattern of Pakistani medical journals. The impact factor, despite its limitations, is a valid indicator of journal quality.
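A minimal sketch of an impact-factor-style ranking with self-citations excluded, in the spirit of the study above. The journal abbreviations reuse the titles mentioned in the abstract, but all citation counts and article numbers are invented.

```python
# Hypothetical data: who cited each journal, and its number of citable items.
journals = {
    "JPMA":  {"cites": {"JPMA": 30, "JCPSP": 25, "PJMS": 20}, "items": 150},
    "JCPSP": {"cites": {"JCPSP": 40, "JPMA": 10}, "items": 100},
    "PJMS":  {"cites": {"PJMS": 5, "JPMA": 18}, "items": 80},
}

def impact_factor(name, data, exclude_self=True):
    """Citations received per citable item, optionally dropping self-citations."""
    cites = data[name]["cites"]
    total = sum(n for citing, n in cites.items()
                if not (exclude_self and citing == name))
    return total / data[name]["items"]

ranking = sorted(journals, key=lambda j: impact_factor(j, journals), reverse=True)
print(ranking)  # → ['JPMA', 'PJMS', 'JCPSP']
```

Toggling `exclude_self` shows how heavily self-citing journals (here JCPSP) can drop in the ranking once self-citations are removed.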
5.
In economics, the Research Papers in Economics (RePEc) network has become an essential source for gathering and disseminating both existing and new economic research. It is currently the largest bibliometric database in the economic sciences, containing 33 different indicators for more than 30,000 economists. Based on this bibliographic information, RePEc calculates well-known rankings for authors and academic institutions. We provide some cautionary remarks concerning the interpretation of some of the bibliometric measures provided in RePEc. Moreover, we show how individual and aggregated rankings can be biased by the employed ranking methodology. To select key indicators describing and assessing the research performance of scientists, we propose applying principal component analysis in this data-rich environment. This approach allows us to assign weights to each indicator prior to aggregation. We illustrate the approach by providing a new overall ranking of economists based on RePEc data.
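A minimal sketch of the kind of PCA-based weighting described above, in pure Python: standardize each indicator, extract the dominant eigenvector of the correlation matrix by power iteration, and use its normalized loadings as aggregation weights. The author-by-indicator matrix below is made up for illustration.

```python
# Hypothetical author-by-indicator matrix: papers, h-index, total citations.
rows = [
    [120, 15, 900],
    [80, 12, 600],
    [200, 20, 1500],
    [50, 8, 300],
    [150, 18, 1100],
]
n, k = len(rows), len(rows[0])

def standardize(col):
    m = sum(col) / len(col)
    s = (sum((x - m) ** 2 for x in col) / len(col)) ** 0.5
    return [(x - m) / s for x in col]

cols = [standardize([r[j] for r in rows]) for j in range(k)]

# Correlation matrix of the standardized indicators.
corr = [[sum(a * b for a, b in zip(cols[i], cols[j])) / n for j in range(k)]
        for i in range(k)]

# First principal component (dominant eigenvector) via power iteration.
v = [1.0] * k
for _ in range(200):
    w = [sum(corr[i][j] * v[j] for j in range(k)) for i in range(k)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

weights = [abs(x) / sum(abs(y) for y in v) for x in v]  # loadings as weights
scores = [sum(cols[j][i] * v[j] for j in range(k)) for i in range(n)]
ranking = sorted(range(n), key=lambda i: -scores[i])
print(weights, ranking)
```

Because the three toy indicators are strongly correlated, the first component captures most of the variance and yields nearly equal weights; with real RePEc data the loadings would differentiate the 33 indicators.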
6.
We investigate temporal factors in assessing the authoritativeness of web pages. We present three metrics related to time: age, event, and trend, which measure recentness, the occurrence of special events, and the trend in revisions, respectively. An experimental dataset was created by crawling selected web pages over a period of several months. These data are used to compare page rankings by human users with rankings computed by the standard PageRank algorithm (which does not include temporal factors) and by three algorithms that incorporate temporal factors, including the Time-Weighted PageRank (TWPR) algorithm introduced here. Analysis of the rankings shows that all three temporally aware algorithms produce rankings closer to those of human users than the PageRank algorithm does. Of these, the TWPR algorithm produces rankings most similar to the human users', indicating that all three temporal factors are relevant in page ranking. In addition, analysis of the parameter values used to weight the three temporal factors reveals that the age factor has the greatest impact on page rankings, followed by the trend factor, with the event factor having the least. Proper weighting of the three factors in the TWPR algorithm provides the best ranking results.
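The TWPR algorithm itself is not specified in this abstract, so the following is only a hedged illustration of one way a temporal factor could enter PageRank: the random-jump (teleport) distribution is biased toward recently revised pages via an exponential age decay. The link graph, revision ages, and decay rate are all assumed for the example, and the event and trend factors are omitted.

```python
import math

# Hypothetical link graph and days since each page's last revision.
# Every page here has at least one out-link, so no dangling-node handling.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
age_days = {"A": 300, "B": 30, "C": 10, "D": 400}

DECAY = 0.01    # assumed freshness decay per day
DAMPING = 0.85  # standard PageRank damping factor

# Recency-biased teleport distribution: fresher pages get more jump mass.
fresh = {p: math.exp(-DECAY * d) for p, d in age_days.items()}
total = sum(fresh.values())
jump = {p: f / total for p, f in fresh.items()}

rank = {p: 1 / len(links) for p in links}
for _ in range(100):
    new = {p: (1 - DAMPING) * jump[p] for p in links}
    for p, outs in links.items():
        share = DAMPING * rank[p] / len(outs)
        for q in outs:
            new[q] += share
    rank = new

print(sorted(rank, key=rank.get, reverse=True))
```

In this toy graph the fresh, well-linked page C comes out on top, while the stale page D, which no one links to, ends last; with `jump` uniform the iteration reduces to standard PageRank.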
7.
Assessing the scholarly impact of academic institutions has become increasingly important. The achievements of editorial board members can create benchmarks for research excellence and can be used to evaluate both individual and institutional performance. This paper proposes a new method, based on journal editor data, for assessing an institution's scholarly impact. A journal editorship index (JEI) is constructed that simultaneously accounts for the journal rating (JR), editor title (ET), and board size (BS). We assess the scholarly impact of economics institutions based on the editorial boards of 211 economics journals (comprising 8640 editorial board members) in the ABS Academic Journal Guide. Three further indices (JEI/ET, JEI/JR, and JEI/BS) are also used to rank the institutions. There was only a slight change in the relative institutional rankings under JEI/ET and JEI/BS compared to the JEI: the BS and ET weight factors did not substantially influence the ranking of institutions. The journal-rating weight factor, in contrast, had a large effect on the ranking. This paper presents an alternative approach that uses editorial board memberships as the basis for assessing the scholarly impact of economics institutions.
8.
Carlos Garcia-Zorita, Ronald Rousseau, Sergio Marugan-Lazaro, Elias Sanz-Casado 《Journal of Informetrics》2018,12(3):567-578
Scientific journals are ordered by their impact factor, while countries, institutions, or researchers can be ranked by their scientific production, impact, or other simple or composite indicators, as in the case of university rankings. In this paper, the theoretical framework proposed for football competitions in Criado, R., Garcia, E., Pedroche, F., & Romance, M. (2013). A new method for comparing rankings through complex networks: Model and analysis of competitiveness of major European soccer leagues. Chaos, 23, 043114, is used as a starting point to define a general index describing the dynamics of rankings or, conversely, their stability. Characteristics for studying rankings, measures of ranking dynamics, and axioms for such indices are presented. Furthermore, the notion of the volatility of elements in rankings is introduced. Our study includes rankings with ties, entrants, and leavers. Finally, some worked-out examples are shown.
9.
《Journal of Informetrics》2014,8(4):904-911
It is well known that the distribution of citations to articles in a journal is skewed. We ask whether journal rankings based on the impact factor are robust with respect to this fact. For 100 economics journals, we exclude the most-cited paper, then the top 5 and top 10 most-cited papers, and recalculate the impact factor. We then compare the resulting rankings with the original 2012 rankings. Our results show that the rankings are relatively robust. This holds for both the 2-year and the 5-year impact factor.
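The robustness check described above can be sketched as follows. The per-paper citation counts are hypothetical, and dropping the removed papers from both the numerator and the denominator is one plausible reading of the procedure, not necessarily the authors' exact choice.

```python
def impact_factor_excluding_top(cites_per_paper, k):
    """Mean citations per remaining paper after dropping the k most-cited."""
    kept = sorted(cites_per_paper, reverse=True)[k:]
    return sum(kept) / len(kept) if kept else 0.0

# Hypothetical per-paper citation counts with one highly cited outlier.
papers = [120, 15, 9, 7, 5, 4, 3, 2, 1, 0]

print(impact_factor_excluding_top(papers, 0))  # → 16.6 (plain IF)
print(impact_factor_excluding_top(papers, 1))  # ≈ 5.11 without the top paper
```

The toy numbers make the opposite point from the paper on purpose: for a journal whose citations are dominated by one outlier, removing it changes the impact factor drastically, which is exactly why checking whether real journal rankings survive such removals is worthwhile.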
10.
《Library Acquisitions: Practice & Theory》1994,18(4):447-458
Following a brief introduction to citation-based journal rankings as potential serials management tools, the most frequently used citation measure, the impact factor, is explained. This paper then demonstrates a methodological bias inherent in averaging Social Sciences Citation Index Journal Citation Reports (SSCI JCR) impact factor data from two or more consecutive years. A possible method for correcting the bias, termed the adjusted impact factor, is proposed. For illustration, a set of political science journals is ranked according to three different methods (crude averaging, weighted averaging, and the adjusted impact factor) for combining SSCI JCR impact factor data from successive years. Although the correlations among the three methods are quite high, there are noteworthy differences in the rankings that could affect collection development decisions.
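The bias from crude averaging can be illustrated with a toy example: a plain mean of two yearly impact factors weights a small year as heavily as a large one, whereas pooling citations and citable items before dividing does not. All values are hypothetical, and the paper's adjusted impact factor is a further refinement not reproduced here.

```python
def crude_average(yearly_ifs):
    """Unweighted mean of per-year impact factors."""
    return sum(yearly_ifs) / len(yearly_ifs)

def weighted_average(cites_by_year, items_by_year):
    """Pool citations and citable items across years, then divide."""
    return sum(cites_by_year) / sum(items_by_year)

cites = [50, 300]   # citations counted in each JCR year (hypothetical)
items = [25, 200]   # citable items in each year's denominator
yearly_ifs = [c / n for c, n in zip(cites, items)]  # 2.0 and 1.5

print(crude_average(yearly_ifs))       # → 1.75 (small year counts equally)
print(weighted_average(cites, items))  # ≈ 1.556 (pooled calculation)
```

The gap between the two results is the averaging bias: the crude mean lets the 25-item year pull the combined figure up as much as the 200-item year.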
11.
The journal impact factor (JIF) has been questioned considerably over its half-century of development because of its inconsistency with reputation-based evaluations of scientific journals. This paper proposes a publication-delay-adjusted impact factor (PDAIF), which takes publication delay into account to reduce its negative effect on the impact factor. Based on citation data collected from Journal Citation Reports and publication delay data extracted from the journals' official websites, PDAIFs are calculated for journals from business-related disciplines. The results show that PDAIF values are, on average, more than 50% higher than JIF values. Furthermore, journal rankings based on the PDAIF show very high consistency with reputation-based journal rankings. Moreover, a case study of journals published by Elsevier and INFORMS shows that the PDAIF yields a greater impact-factor increase for journals with longer publication delays, because it removes that negative influence. Finally, insightful and practical suggestions for shortening publication delay are provided.
12.
A scholar can be identified with his or her citation list, and a scholar ranking is a complete order on the set of scholars. We characterize those scholar rankings that admit a measure representation.
13.
Michael Halperin, Robert Hebert, Edward J. Lusk 《Journal of Business & Finance Librarianship》2013,18(1):47-62
We created a matrix of rankings of MBA curricula by six publishers and used a standard SAS program to impute missing data. We then examined the resulting construct to assess the similarity of the publishers' rankings and their change over a four-year period.
14.
Rankings of journals and rankings of scientists are usually discussed separately. We argue that a consistent approach to both rankings is desirable, because the quality of a journal and the quality of a scientist both depend on the papers they publish. We present a pair of consistent rankings (the impact factor for journals and the total number of citations for authors) and provide an axiomatic characterization of them.
15.
16.
The effect of disciplinary rankings on faculty employment at U.S. iSchools (Total citations: 1; self-citations: 0; citations by others: 1)
[Purpose/Significance] This study explores the effect of disciplinary rankings on faculty employment at U.S. iSchools, in order to reveal the current state of disciplinary development and provide a reference for building library, information, and archival science faculties in China. [Method/Process] From the perspective of faculty mobility, we analyze the change in disciplinary ranking between the current institution and the degree-granting institution for 880 faculty members at 27 U.S. iSchools. [Result/Conclusion] The study finds that graduates of higher-ranked programs have more employment choices and opportunities; most iSchool graduates work at institutions ranked lower than the one from which they graduated; and there is a clear gender difference in iSchool faculty employment, with men more likely than women to be hired by higher-ranked institutions. In building China's "Double First-Class" initiative, we should harness the positive effects that evaluation and rankings bring to faculty development while monitoring and mitigating the negative ones.
17.
The fractional average impact factor: a new indicator for evaluating the quality of academic papers (Total citations: 3; self-citations: 0; citations by others: 3)
This paper introduces the concept of the fractional average impact factor and analyzes it using 2004 SCI data on scientific papers from Chinese universities, research institutes, and medical institutions. The fractional average impact factor is well suited to comparing research work across different disciplines and is a promising new quantitative indicator of the quality of academic papers.
18.
19.
20.
The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of university rankings have been proposed to quantify it, yet the debate over what exactly the rankings measure endures. To address the issue, we measured a quantitative and reliable proxy of the academic reputation of a given institution and compared our findings with well-established impact indicators and academic rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and apply the PageRank algorithm to the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so the PageRank algorithm is expected to yield a ranking that reflects the reputation of an academic institution in a specific field. Given the volume of the data analysed, our findings are statistically sound and less prone to bias than, for instance, the ad hoc surveys often employed by ranking bodies to attain similar outcomes. The approach proposed in our paper may contribute to enhancing ranking methodologies by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.