Similar documents
20 similar documents found
1.
The purpose of scientific research is to create knowledge and to apply theoretical results to practical problems in China's social, economic, and cultural development. Publishing papers in international journals lets more international peers learn about China's latest research results and earns the country greater international influence, which is why SCI papers have been an important indicator in China's research evaluation over the past twenty-odd years. Under this evaluation orientation, Chinese scholars now publish more international papers than any other country, and heavy citation by domestic peers has lifted the citation count of China's international papers to second place worldwide. This paper extracts Web of Science publication and citation data for 1990–2015, analyses country-level self-citation by country and by discipline, and compares the results across countries and disciplines. The study finds that, once self-citations from domestic peers are excluded, the real international impact of China's international papers remains limited: apart from a few disciplines such as clinical medicine and physics, most disciplines are still below the global average.
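The country-level adjustment described here, dropping citations whose citing paper comes from the same country as the cited paper, can be illustrated with a minimal sketch; the record structure and field names below are hypothetical, not those of the Web of Science extraction used in the study.

```python
# Minimal sketch: country-level citation counts with and without
# domestic self-citations. Record structure is hypothetical.

def citation_counts(cited_papers):
    """cited_papers: list of dicts with keys
    'country' (country of the cited paper) and
    'citing_countries' (one entry per citing paper)."""
    total, international = {}, {}
    for paper in cited_papers:
        country = paper["country"]
        for citing_country in paper["citing_countries"]:
            total[country] = total.get(country, 0) + 1
            if citing_country != country:  # drop domestic self-citations
                international[country] = international.get(country, 0) + 1
    return total, international

papers = [
    {"country": "CN", "citing_countries": ["CN", "CN", "US"]},
    {"country": "US", "citing_countries": ["CN", "DE"]},
]
total, intl = citation_counts(papers)
print(total)  # {'CN': 3, 'US': 2}
print(intl)   # {'CN': 1, 'US': 2}
```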

2.
The aim of the study is to explore the effects on several impact indicators of increasing the number of publications or citations by a single journal paper or citation. The possible changes of the h-index, A-index, R-index, π-index, π-rate, Journal Paper Citedness (JPC), and Citation Distribution Score (CDS) are traced with models. Particular attention is given to the increase of the indices caused by a single additional citation. The results obtained with the “successively built-up indicator” model show that, with an increasing number of citations or self-citations, the indices may increase substantially.
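Since the abstract works with the h-, A- and R-indices, a short sketch of their standard definitions makes the "single plus citation" effect easy to reproduce; the citation lists are invented for illustration.

```python
import math

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def a_index(citations):
    """Mean number of citations of the papers in the h-core."""
    h = h_index(citations)
    core = sorted(citations, reverse=True)[:h]
    return sum(core) / h if h else 0.0

def r_index(citations):
    """Square root of the total citations received by the h-core."""
    h = h_index(citations)
    core = sorted(citations, reverse=True)[:h]
    return math.sqrt(sum(core))

cites = [25, 17, 12, 9, 6, 3, 1]
print(h_index(cites), a_index(cites), round(r_index(cites), 2))  # 5 13.8 8.31

# A single extra citation can change the picture: [4, 4, 4, 3] has h = 3,
# while [4, 4, 4, 4] has h = 4.
print(h_index([4, 4, 4, 3]), h_index([4, 4, 4, 4]))  # 3 4
```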

3.
The Journal Impact Factor (JIF) is linearly sensitive to self-citations because each self-citation adds to the numerator, whereas the denominator is not affected. Pinski and Narin's (1976) Influence Weights (IW) are not, or only marginally, sensitive to these outliers on the main diagonal of a citation matrix and thus provide an alternative to JIFs. Whereas JIFs are based on raw citation counts normalized by the number of publications in the previous two years, IWs are based on the eigenvectors of the matrix of aggregated journal-journal citations without reference to size: the cited and citing sides are normalized and combined by a matrix approach. Upon normalization, IWs emerge as a vector; after recursive multiplication of the normalized matrix, IWs can be considered a network measure of prestige among the journals in the (sub)graph under study. As a consequence, the self-citations are integrated at the field level and no longer disturb the analysis as outliers. In our opinion, this independence of the diagonal values is a very desirable property of a measure of quality or impact. As an example, we elaborate Price's (1981b) matrix of aggregated citations among eight biochemistry journals in 1977. Routines for the computation of IWs are made available at http://www.leydesdorff.net/iw.
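A minimal sketch of the recursive scheme described above, normalizing a journal-journal citation matrix and multiplying until the weight vector stabilizes, is given below; the matrix is hypothetical and the normalization is simplified relative to Pinski and Narin's full method (see the routines at the URL above for the exact computation).

```python
import numpy as np

# Hypothetical aggregated journal-journal citation matrix:
# C[i, j] = citations from journal i to journal j (diagonal = self-citations).
C = np.array([
    [10.0, 4.0, 1.0],
    [ 3.0, 8.0, 2.0],
    [ 2.0, 5.0, 6.0],
])

# Normalize the citing side: each row sums to 1, so journal size
# (length of reference lists) no longer drives the result.
M = C / C.sum(axis=1, keepdims=True)

# Recursive multiplication (power iteration) until the weight vector
# stabilizes; the result is a size-independent prestige measure in which
# self-citations are absorbed at the level of the whole (sub)graph.
w = np.ones(C.shape[0]) / C.shape[0]
for _ in range(1000):
    w_next = w @ M
    w_next /= w_next.sum()
    if np.allclose(w, w_next, atol=1e-12):
        break
    w = w_next

print(np.round(w, 4))
```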

4.
This paper explores a new indicator of journal citation impact, denoted as source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the rapidity of maturing of citation impact, and the extent to which a database used for the assessment covers the field's literature. It further develops Eugene Garfield's notions of a field's ‘citation potential’ defined as the average length of reference lists in a field and determining the probability of being cited, and the need in fair performance assessments to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper and the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in mathematics, engineering and social sciences tend to have lower values than titles in life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus.
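In formula form, the definition quoted above can be paraphrased as follows (this is a reading of the abstract, not Moed's full operationalization, which adds further restrictions such as counting only database-covered references):

```latex
\[
\mathrm{SNIP}_j \;=\; \frac{\mathrm{IPP}_j}{\mathrm{CP}_j},
\qquad
\mathrm{IPP}_j \;=\; \frac{C_j}{P_j},
\qquad
\mathrm{CP}_j \;=\; \frac{1}{\lvert S_j\rvert}\sum_{p\in S_j} r_p ,
\]
```

where $C_j$ and $P_j$ are the citations received and papers published by journal $j$ in the counting window, $S_j$ is the set of papers citing journal $j$ (the journal's subject field as defined in the abstract), and $r_p$ is the length of the reference list of citing paper $p$.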

5.
Citation-based approaches, such as the impact factor and h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature for different disciplines shows that the level of correlation between these citation-based approaches is domain dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and then re-rank the journals using a combination method. These new ranking results can be used as a reference for researchers to choose their publication outlets.
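A small sketch shows how such a correlation and a combined ranking can be computed; the journal values are invented, and the Borda-style average of rank positions is just one simple combination method, not necessarily the one used in the paper.

```python
from scipy.stats import spearmanr

# Hypothetical impact factors and h-indices for five journals.
journals = ["J1", "J2", "J3", "J4", "J5"]
impact_factor = [3.2, 1.1, 2.5, 0.9, 4.0]
h_index = [40, 35, 20, 38, 25]

rho, p_value = spearmanr(impact_factor, h_index)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")

def ranks(values):
    """Rank positions, 1 = highest value."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    r = [0] * len(values)
    for position, i in enumerate(order, start=1):
        r[i] = position
    return r

# Borda-style combination: average the two rank positions and re-rank.
combined = [(a + b) / 2 for a, b in zip(ranks(impact_factor), ranks(h_index))]
for journal, score in sorted(zip(journals, combined), key=lambda t: t[1]):
    print(journal, score)
```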

6.
Analysis of 131 publications during 2006–2007 by staff of the School of Environmental Science and Management at Southern Cross University reveals that the journal impact factor, article length and type (i.e., article or review), and journal self-citations affect the citations accrued to 2012. Authors seeking to be well cited should aim to write comprehensive and substantial review articles, and submit them to journals with a high impact factor that have previously carried articles on the topic. Nonetheless, strategic placement of articles is complementary to, and no substitute for, careful crafting of good-quality research. Evidence remains equivocal regarding the contribution of an author's prior publication success (h-index) and of open-access journals.
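The kind of citation model implied here, citations regressed on journal impact factor, article length, article type and journal self-citations, can be sketched as follows; the data frame and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical publication-level data mirroring the variables named above.
df = pd.DataFrame({
    "citations":        [12, 3, 45, 7, 20, 2, 33, 9],
    "journal_if":       [2.1, 0.8, 4.5, 1.2, 3.0, 0.5, 3.8, 1.5],
    "pages":            [10, 6, 28, 8, 15, 5, 22, 9],
    "is_review":        [0, 0, 1, 0, 0, 0, 1, 0],
    "journal_self_cit": [1, 0, 4, 1, 2, 0, 3, 1],
})

# Ordinary least squares: citations as a function of the four predictors.
model = smf.ols(
    "citations ~ journal_if + pages + is_review + journal_self_cit", data=df
).fit()
print(model.params)
```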

7.
The journal impact factor is not comparable among fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential in the journal topic, and we employ this citation potential in the normalization of the journal impact factor to make it comparable between scientific fields. An empirical application comparing some impact indicators with our topic-normalized impact factor in a set of 224 journals from four different fields shows that our normalization, using the citation potential in the journal topic, reduces the between-group variance with respect to the within-group variance in a higher proportion than the other indicators analyzed. The effect of journal self-citations on the normalization process is also studied.
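One plausible reading of the normalization described above is the following (the paper's exact weighting and scaling may differ):

```latex
\[
\mathrm{NIF}_j \;=\; \frac{\mathrm{IF}_j}{\mathrm{CP}_j},
\qquad
\mathrm{CP}_j \;=\; \frac{\sum_{k:\,c_{kj}>0} C_k}{\sum_{k:\,c_{kj}>0} P_k},
\]
```

where $c_{kj}$ is the number of citations from journal $k$ to journal $j$, and $C_k$ and $P_k$ are the citations and publications entering journal $k$'s own impact factor, so that $\mathrm{CP}_j$ is the aggregate impact factor of the journals citing $j$.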

8.
Much of the recent library literature related to scholarly communication and predatory publishers has focused on faculty concerns regarding publishing in questionable journals for tenure or promotion purposes. However, little attention has been paid to predatory publishers in the context of student research and library instruction. The presence of predatory journals in library databases may put students at risk of including questionable content in their academic output. While the results of this study reveal that the number of predatory publishers and associated journals is fairly small in the three article database packages and one directory that were examined, predatory journal content was more prevalent in one particular resource and in certain subject areas.

9.
One of the flaws of the journal impact factor (IF) is that it cannot be used to compare journals from different fields or multidisciplinary journals because the IF differs significantly across research fields. This study proposes a new measure of journal performance that captures field-different citation characteristics. We view journal performance from the perspective of the efficiency of a journal's citation generation process. Together with the conventional variables used in calculating the IF, the number of articles as an input and the number of total citations as an output, we additionally consider the two field-different factors, citation density and citation dynamics, as inputs. We also separately capture the contribution of external citations and self-citations and incorporate their relative importance in measuring journal performance. To accommodate multiple inputs and outputs whose relationships are unknown, this study employs data envelopment analysis (DEA), a multi-factor productivity model for measuring the relative efficiency of decision-making units without any assumption of a production function. The resulting efficiency score, called DEA-IF, can then be used for the comparative evaluation of multidisciplinary journals’ performance. A case study example of industrial engineering journals is provided to illustrate how to measure DEA-IF and its usefulness.
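A minimal input-oriented CCR DEA sketch conveys the idea behind DEA-IF; the inputs and outputs follow the abstract only loosely, the journal data are invented, and the paper's actual model specification may differ.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical journal data (rows = journals).
# Inputs loosely follow the abstract: number of articles, field citation density.
# Outputs: citations from others, self-citations.
X = np.array([[100, 2.0], [80, 1.5], [120, 3.0], [60, 1.0]], dtype=float)  # inputs
Y = np.array([[400, 50], [350, 30], [380, 90], [300, 20]], dtype=float)    # outputs

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o (envelopment form):
    minimize theta subject to X^T lambda <= theta * x_o, Y^T lambda >= y_o."""
    n, m = X.shape            # n units, m inputs
    s = Y.shape[1]            # s outputs
    c = np.r_[1.0, np.zeros(n)]                  # objective: minimize theta
    A_in = np.c_[-X[o], X.T]                     # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.c_[np.zeros(s), -Y.T]             # -sum_j lam_j y_rj <= -y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n    # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(X.shape[0]):
    print(f"journal {o}: efficiency = {dea_efficiency(X, Y, o):.3f}")
```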

10.
A standard procedure in citation analysis is that all papers published in one year are assessed at the same later point in time, implicitly treating all publications as if they were published at the exact same date. This leads to systematic bias in favor of early-month publications and against late-month publications. This contribution analyses the size of this distortion on a large body of publications from all disciplines over citation windows of up to 15 years. It is found that early-month publications enjoy a substantial citation advantage, which arises from citations received in the first three years after publication. While the advantage is stronger for author self-citations as opposed to citations from others, it cannot be eliminated by excluding self-citations. The bias decreases only slowly over longer citation windows due to the continuing influence of the earlier years’ citations. Because of the substantial extent and long persistence of the distortions, it would be useful to remove or control for this bias in research and evaluation studies which use citation data. It is demonstrated that this can be achieved by using the newly introduced concept of month-based citation windows.
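The month-based citation window introduced here can be sketched in a few lines: each paper's citations are counted over a fixed number of months from its own publication month, so a January paper and a December paper get windows of equal length. The dates below are hypothetical.

```python
from datetime import date

def months_between(start, end):
    """Whole months elapsed from start to end (both datetime.date)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def citations_in_window(pub_date, citation_dates, window_months):
    """Count citations received within a fixed month-based window measured
    from the paper's own publication month."""
    return sum(
        0 <= months_between(pub_date, c) < window_months
        for c in citation_dates
    )

cites = [date(2010, 5, 1), date(2011, 2, 1), date(2013, 7, 1)]
print(citations_in_window(date(2010, 3, 1), cites, window_months=36))  # 2
```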

11.
Journal self-citations strongly affect journal evaluation indicators (such as impact factors) at the meso- and micro-levels, and therefore they are often increased artificially to inflate the evaluation indicators in journal evaluation systems. This coercive self-citation is a form of scientific misconduct that severely undermines the objective authenticity of these indicators. In this study, we developed a feature space for describing journal citation behavior and conducted feature selection by combining GA-Wrapper with ReliefF. We also constructed a journal classification model using the logistic regression method to identify normal and abnormal journals. We evaluated the performance of the classification model using journals in three subject areas (BIOLOGY; MATHEMATICS; and CHEMISTRY, APPLIED) during 2002–2011 as the test samples, and good results were achieved in our experiments. Thus, we developed an effective method for the accurate identification of coercive self-citations.
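A stripped-down version of the classification step, a logistic regression over journal citation-behaviour features, might look like the sketch below; the features and labels are invented, and the GA-Wrapper/ReliefF feature selection of the paper is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical citation-behaviour features per journal, e.g.
# [self-citation rate, year-on-year growth of self-citations,
#  share of self-citations falling in the impact-factor window].
X = np.array([
    [0.05, 0.01, 0.10], [0.40, 0.30, 0.80], [0.08, 0.02, 0.15],
    [0.35, 0.25, 0.70], [0.06, 0.00, 0.12], [0.45, 0.35, 0.85],
    [0.07, 0.03, 0.11], [0.38, 0.28, 0.75],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 0 = normal, 1 = abnormal self-citation

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
print("predicted class for a new journal:", clf.predict([[0.42, 0.33, 0.82]]))
```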

12.
Statistical analysis of the references and citations of selected Chinese scientific and technical journals   Total citations: 52; self-citations: 7; citations by others: 45
A sampling-based statistical analysis of the references and citations of selected Chinese-language scientific and technical journals shows that the average citation rate of these journals is generally low, while author self-citations contribute a disproportionately large share of the journals' citation frequencies and impact factors. The main causes are judged to be improper citation behaviour by authors, the arbitrary trimming of reference lists, and the limited influence of China's scientific and technical journals.

13.
This paper examines the characteristics of 462 open access (OA) journals being published in India under the green, gold and hybrid models. The sample of journals was selected from DOAJ, IndianJournal.com and Open J‐Gate. Journal characteristics were measured in terms of growth, subjects, publishers, and citations under each model. While characteristics such as growth, subject, and publisher have been identified by exploring the journal's website only, the citation count of these journals has been calculated by using Google Scholar and the Indian Citation Index. The gold road is now the most popular form of OA publishing in the subcontinent. There is a great variation in the size of OA journals and in their publishers. One publisher has more than 77 journals, but 264 publishers publish a single journal only. Overall, the OA journal landscape is greatly influenced by a few key publishers and journals. While 43% of journals charge publication fees and the fees vary from as low as US$10 to as high as US$400, the highest impact factor of the gold OA journals has been noted as 0.58. The data presented here suggest that publication fees are not a major barrier to authorship within the fields of computer science, pharmacy, and medicine.

14.
Metrics based on percentile ranks (PRs) for measuring scholarly impact involve complex treatment because of various defects such as overvaluing or devaluing an object caused by percentile ranking schemes, ignoring precise citation variation among those ranked next to each other, and inconsistency caused by additional papers or citations. These defects are especially obvious in a small-sized dataset. To avoid the complicated treatment of PR-based metrics, we propose two new indicators: the citation-based indicator (CBI) and the combined impact indicator (CII). Document types of publications are taken into account. With the two indicators, one would no longer be bothered by the complex issues encountered by PR-based indicators. For a small-sized dataset with fewer than 100 papers, special calculation is no longer needed. The CBI is based solely on citation counts and the CII measures the integrated contributions of publications and citations. Both virtual and empirical data are used so as to compare the effect of related indicators. The CII and the PR-based indicator I3 are highly correlated, but the former reflects citation impact more and the latter relates more to publications.

15.
刘明寿, 戴国俊. 《编辑学报》 2013, 25(3): 279-282
By analysing the differences between the journals of China's agricultural universities and the academic journals run by research institutes and learned societies, this paper argues that university journals are by no means 'junk' publications. Agricultural academic journals are divided into three types (provincial university journals, provincial-society journals, and national-society journals), and the three types are compared on five indicators, including the impact factor and the proportion of fund-supported papers. The statistics show that the total citation frequency, impact factor, impact factor excluding self-citations, and proportion of fund-supported papers of national-society journals are very significantly higher than those of provincial-society journals (P<0.01); the impact factor, impact factor excluding self-citations, and proportion of fund-supported papers of provincial university journals are likewise very significantly higher than those of provincial-society journals (P<0.01), while their proportion of fund-supported papers and their ratio of external citations to total citations are slightly, though not significantly, higher than those of national-society journals (P>0.05). Overall, most indicators of national-society journals exceed those of provincial university journals, and provincial university journals in turn exceed provincial-society journals, with some indicators approaching those of national-society journals. Over the past three years the evaluation indicators of agricultural academic journals as a whole have risen year by year, and the overall influence of agricultural university journals is comparatively high.

16.
方红. 《编辑学报》 2014, 26(5): 462-463
At present, Chinese journal publishers have no unified standard for paying authors of published papers: some pay no remuneration at all, some still follow the Provisions on Remuneration for Published Written Works issued in 1999, and others pay premium rates for premium manuscripts, which has drawn considerable complaints from authors. To pay journal-article authors more fairly and reasonably, it is suggested that Chinese journal publishers adopt a scheme of a basic fee plus a supplement based on the number of citations received. The article analyses and summarizes the rationale for the scheme, how it could be implemented, the time window for counting citations, the differential treatment of citations from different journals, and the feasibility of applying it.
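The proposed "basic fee plus citation-based supplement" can be sketched as a simple calculation; all rates, the citation cap and the currency are hypothetical placeholders, not figures from the article.

```python
def author_payment(pages, citations, base_rate_per_page=100.0,
                   rate_per_citation=50.0, citation_cap=100):
    """Basic fee by article length plus a supplement per citation received
    within the agreed counting window. All rates are hypothetical (CNY)."""
    base_fee = pages * base_rate_per_page
    supplement = min(citations, citation_cap) * rate_per_citation
    return base_fee + supplement

print(author_payment(pages=8, citations=12))  # 800 + 600 = 1400
```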

17.
In this paper we attempt to assess the impact of journals in the field of forestry, in terms of bibliometric data, by providing an evaluation of forestry journals based on data envelopment analysis (DEA). In addition, based on the results of the conducted analysis, we provide suggestions for improving the impact of the journals in terms of widely accepted measures of journal citation impact, such as the journal impact factor (IF) and the journal h-index. More specifically, by modifying certain inputs associated with the productivity of forestry journals, we have illustrated how this method could be utilized to raise their efficiency, which in terms of research impact can then be translated into an increase of their bibliometric indices, such as the h-index, IF or eigenfactor score.

18.
This study examines aspects of scholarly journal publishing in the Nordic countries. On average, half of Nordic journals publish online. In most Nordic countries, commercial publishers predominate; however, in Finland the majority are society publishers. The number of open access journals is low, in line with international figures. There is concern about maintaining local languages in journal publishing. A majority of the journals publishing in local languages are within social science, humanities, and arts; the STM sector publishes in English. English‐language publications are favoured in research assessments, international recognition, and impact, while the visibility of local‐language scholarly journals in international databases is low. The Nordbib program supports Nordic scholarly journals and fosters co‐operation with publishing companies and learned societies over migration to e‐publishing; it also supports open access. The article discusses future challenges for journal publishing, pointing out the problems of small journal publishers and the need for co‐operation between stakeholders.

19.
In the present paper the Percentage Rank Position (PRP) index, which follows from the principle of Similar Distribution of Information Impact in different fields of science (Vinkler, 2013), is suggested for assessing journals in different research fields comparatively. The publications in the journals dedicated to a field are ranked by citation frequency, and the PRP-index of the papers in the elite set of the field is calculated. The PRP-index relates the citation rank number of a paper to the total number of papers in the corresponding set. The sum of the PRP-indices of the elite papers in a journal, PRP(j,F), may represent the eminence of the journal in the field. The non-parametric and non-dimensional PRP(j,F) index of journals is believed to be comparable across fields.
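A sketch of the calculation, under the assumption that a paper's PRP is its citation-rank position expressed as a percentage of the size of the field set (which may differ from Vinkler's exact formula), is given below; the field data and elite set are invented.

```python
def prp_scores(field_citations):
    """field_citations: {paper_id: citation count} for all papers in a field.
    Returns {paper_id: PRP}, assuming PRP = 100 * (N - rank + 1) / N,
    where rank 1 is the most-cited paper (a reading of the abstract,
    not necessarily Vinkler's exact formula)."""
    ranked = sorted(field_citations, key=field_citations.get, reverse=True)
    n = len(ranked)
    return {p: 100.0 * (n - r + 1) / n for r, p in enumerate(ranked, start=1)}

def journal_prp(field_citations, elite_papers, journal_papers):
    """Sum of PRP over the journal's papers that belong to the field's elite set."""
    prp = prp_scores(field_citations)
    return sum(prp[p] for p in elite_papers if p in journal_papers)

field = {"a": 50, "b": 30, "c": 20, "d": 10, "e": 5}   # citations per paper
elite = {"a", "b"}                                     # the field's elite set
print(journal_prp(field, elite, journal_papers={"a", "c"}))  # 100.0
```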

20.
Using 17 open-access journals published without interruption between 2000 and 2004 in the field of library and information science, this study compares the pattern of cited/citing hyperlinked references of Web-based scholarly electronic articles under various citation ranges in terms of language, file format, source and top-level domain. While the patterns of cited references were manually examined by counting the live hyperlinked cited references, the patterns of citing references were examined by using the 'cited by' link in Google Scholar. The analysis indicates that although language, top-level domain, and file format of citations did not differ significantly for articles under different citation ranges, sources of citation differed significantly for articles in different citation ranges. Articles with fewer citations mostly cite less-scholarly sources such as Web pages, whereas articles with a higher number of citations mostly cite scholarly sources such as journal articles. The findings suggest that 8 out of 17 OA journals in LIS have significant research impact in the scholarly communication process.
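The manual liveness count of hyperlinked cited references could be approximated with a simple HTTP check, as sketched below with hypothetical URLs; Google Scholar's 'cited by' counts have no public API, so that side of the analysis is not sketched.

```python
import requests

def count_live_links(urls, timeout=10):
    """Count hyperlinked references that still resolve (HTTP status < 400)."""
    live = 0
    for url in urls:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code < 400:
                live += 1
        except requests.RequestException:
            pass  # dead link, DNS failure, timeout, ...
    return live

refs = ["https://example.org/paper1.pdf", "https://example.org/gone.html"]
print(count_live_links(refs), "of", len(refs), "cited links are still live")
```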
