Similar Documents
18 similar documents found (search time: 296 ms)
1.
A Study of the Correlation between Peer Review and Bibliometrics Based on F1000 and WoS (total citations: 1; self-citations: 1; other citations: 0)
To compare the validity of and correlation between peer review and bibliometric methods in research evaluation, the F1000 and Web of Science databases were selected and SPSS 16.0 was used to correlate the F1000 factors of nearly 2,000 papers with indicators from the Web of Science database. The results show that the F1000 factor is significantly positively correlated with citation counts within the statistical window, yet some papers with very high F1000 factors are not highly cited, and vice versa. The conclusion is that, from a statistical perspective, bibliometric indicators and peer-review results are positively correlated, but neither peer review nor bibliometrics alone provides an unbiased standard for research evaluation; combining quantitative indicators, represented by citation analysis, with peer review will be the mainstream of future research evaluation.
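As a minimal sketch of the kind of correlation test described above (not the authors' actual SPSS workflow), the following Python snippet computes Pearson and Spearman correlations between F1000 scores and citation counts; the column names and sample values are hypothetical.

```python
# Minimal sketch of the correlation test described above (illustrative only;
# the study used SPSS 16.0). Data values and column names are hypothetical.
import pandas as pd
from scipy import stats

# Hypothetical paired observations: F1000 factor and WoS citation count per paper
papers = pd.DataFrame({
    "f1000_factor": [1, 3, 6, 2, 9, 4, 1, 8],
    "wos_citations": [5, 20, 85, 7, 40, 33, 150, 60],
})

pearson_r, pearson_p = stats.pearsonr(papers["f1000_factor"], papers["wos_citations"])
spearman_rho, spearman_p = stats.spearmanr(papers["f1000_factor"], papers["wos_citations"])

print(f"Pearson r = {pearson_r:.3f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_rho:.3f} (p = {spearman_p:.3f})")
```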

2.
This paper explores a new research area in information science, altmetrics (alternative metrics), and discusses its meaning, characteristics, and related research at home and abroad in detail. Using different types of altmetric measures drawn from three scholarly social-network tools, Mendeley, F1000, and Google Scholar, it tests the consistency of the results obtained when the same set of papers is evaluated. The results show that the Mendeley reader-count indicator and the Google Scholar citation-count indicator agree relatively closely in their evaluation of the papers.

3.
An Empirical Study of Journal Evaluation Indicators (total citations: 1; self-citations: 0; other citations: 1)
This paper selects nearly 2,000 immunology and bioinformatics papers from the F1000 database and, using correlation analysis, cluster analysis, and factor analysis, tests and classifies the impact factor, 5-year impact factor, Eigenfactor, Article Influence Score, immediacy index, SJR, SNIP, and journal h-index of the corresponding 260 journals, comparing each indicator with the peer-review result, i.e., the F1000 factor. The results show that although the indicators come from different databases (WoS and Scopus) and are computed in different ways, they agree well with one another, which provides evidence for the interchangeability and substitutability of WoS and Scopus in research evaluation. Drawing on the evolution of journal evaluation and an analysis of the strengths and weaknesses of each indicator, the paper also points out the likely direction of development of journal evaluation.

4.
Starting from journal citation counts and taking an empirical approach, this study selects the authoritative international citation database Web of Science and the well-known search engine Google Scholar, uses the Journal of the American Society for Information Science and Technology as the document source, and compares and discusses Web of Science and Google Scholar as citation-analysis tools.

5.
Using the citation databases Web of Science and Scopus as document sources and applying bibliometric methods, this study analyzes the literature on Google Scholar published since 2004 in terms of publication volume and its distribution over time, document type and language, authors, journals, citations, and topics, revealing the current state of research on Google Scholar.

6.
Using bibliometric methods and the Web of Science database developed by the Institute for Scientific Information as the data source, this article retrieves the literature on the impact factor and analyzes it statistically by publication volume, author, institution, core journal, and citations, using quantitative data to reflect how the impact factor, an indicator of journal quality, has developed in the more than 30 years since it was proposed.

7.
By comparing the F1000 factor with citation counts and with journal evaluation indicators, and by running correlation analyses on the main indicators, this study verifies the correlation between peer review and citation analysis. The results show that the F1000 factor is positively correlated with citation counts, i.e., expert scores and citation counts move in the same direction; nevertheless, some highly rated papers receive few citations, while some poorly rated papers are highly cited. The correlation analysis also shows that, among the main indicators such as the Eigenfactor and SNIP, it is SJR and the impact factor that correlate most strongly with expert ratings.

8.
An Empirical Study of Citation Analysis with Google Scholar and Web of Science (total citations: 2; self-citations: 0; other citations: 2)
Starting from journal citation counts and taking an empirical approach, this study selects the authoritative international citation database Web of Science and the well-known search engine Google Scholar and conducts a correlation analysis using the Journal of the American Society for Information Science and Technology as the document source.

9.
[Purpose/Significance] To analyze the feasibility of the Category Normalized Citation Impact (CNCI) in research evaluation and its correlation with peer review, providing a reference for responsible metrics and for peer review supported by them. [Method/Process] Using the F1000 and InCites platforms, the CNCI of 29,850 cell biology publications and 30,326 biotechnology publications was correlated with their citation counts, and a Spearman rank-correlation test was run between the CNCI and the F1000 score of 956 of the cell biology papers. [Result/Conclusion] The results show that, from a statistical perspective, CNCI is highly positively correlated with citation counts and significantly positively correlated with F1000 scores, although cases where the two disagree also exist. CNCI can therefore, to a certain extent, reflect peer-review results, serve as a proxy for attributing scholarly impact, and be used for cross-disciplinary comparison; however, neither peer review nor CNCI alone provides an unbiased standard for research evaluation, and peer review supported by a new generation of responsible metrics, represented by CNCI, will become the mainstream of future research evaluation.
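For context, and as a hedged sketch rather than a definition taken from the study above, the CNCI of a single paper is usually computed by normalizing its citation count against the expected citation rate of comparable papers:

$$\mathrm{CNCI}_i = \frac{c_i}{e_{f,y,d}}$$

where $c_i$ is the number of citations received by paper $i$ and $e_{f,y,d}$ is the mean citation count of all papers in the same subject field $f$, publication year $y$, and document type $d$. The CNCI of a set of papers is the average of the individual values, and values above 1 indicate above-expectation impact, which is what makes the indicator usable for cross-disciplinary comparison.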

10.
This paper first explains why, in the era of digital scholarship with diversified, web-based channels of scholarly communication, Google Scholar can serve as a data source for citation analysis in computer science. It then surveys the current state of automated collection of citation data from Google Scholar and, taking the annual citation counts of papers published by Turing Award winners as an example, describes how to design a program based on Google Scholar's citation-search function that automatically tallies the citations each paper receives in each year. Finally, the method is compared with Web of Science, and problems encountered during implementation are summarized.
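The abstract does not give the authors' program, but the tallying step it describes can be sketched as follows. Everything here is an assumption for illustration: the input format (one publication year per citing record, per target paper) and the sample data are hypothetical, and retrieving the citing records from Google Scholar itself would require a separate scraping or API layer that is not shown.

```python
# Illustrative sketch of the per-year tallying step only (not the authors' program).
# Assumes citing records have already been retrieved for each target paper,
# reduced to the publication year of each citing item; sample data are hypothetical.
from collections import Counter

citing_years_by_paper = {
    "Paper A": [2009, 2010, 2010, 2011, 2011, 2011],
    "Paper B": [2012, 2013, 2013, 2014],
}

for paper, years in citing_years_by_paper.items():
    annual_counts = Counter(years)        # citations received in each year
    cumulative = 0
    print(paper)
    for year in sorted(annual_counts):
        cumulative += annual_counts[year]
        print(f"  {year}: {annual_counts[year]} citations (cumulative {cumulative})")
```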

11.
Altmetrics from Altmetric.com are widely used by publishers and researchers to give earlier evidence of attention than citation counts. This article assesses whether Altmetric.com scores are reliable early indicators of likely future impact and whether they may also reflect non-scholarly impacts. A preliminary factor analysis suggests that the main altmetric indicator of scholarly impact is Mendeley reader counts, with weaker news, informational and social network discussion/promotion dimensions in some fields. Based on a regression analysis of Altmetric.com data from November 2015 and Scopus citation counts from October 2017 for articles in 30 narrow fields, only Mendeley reader counts are consistent predictors of future citation impact. Most other Altmetric.com scores can help predict future impact in some fields. Overall, the results confirm that early Altmetric.com scores can predict later citation counts, although less well than journal impact factors, and the optimal strategy is to consider both Altmetric.com scores and journal impact factors. Altmetric.com scores can also reflect dimensions of non-scholarly impact in some fields.
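A hedged sketch of the kind of prediction model described (not the paper's actual specification): regress log-transformed later citation counts on early altmetric indicators and inspect which coefficients are consistently positive. All variable names and data below are hypothetical.

```python
# Illustrative regression of later citation counts on early altmetric indicators
# (not the paper's actual model specification). Data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

articles = pd.DataFrame({
    "mendeley_readers": [12, 40, 3, 55, 8, 21, 70, 5],
    "tweets":           [2, 15, 0, 30, 1, 4, 25, 0],
    "news_mentions":    [0, 1, 0, 3, 0, 0, 2, 0],
    "citations_2y":     [4, 18, 1, 35, 2, 9, 41, 1],
})

# Log-transform the skewed count variables before fitting a simple OLS model
X = sm.add_constant(np.log1p(articles[["mendeley_readers", "tweets", "news_mentions"]]))
y = np.log1p(articles["citations_2y"])

model = sm.OLS(y, X).fit()
print(model.summary())
```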

12.
[Purpose/Significance] By analyzing the characteristics of altmetric indicators within a single discipline, this study aims to provide a more scientific and reasonable indicator system for evaluating the impact of the literature in that discipline. [Method/Process] Focusing on library and information science, citation counts and altmetric scores were collected from Scopus and Altmetric.com, and the data were subjected to statistical analysis, cluster analysis, and content analysis. [Result/Conclusion] Among the many altmetric indicators, Mendeley and Twitter are the most suitable for evaluating the impact of library and information science literature; the user groups, document topics, content, and journal distributions of the literature differ markedly between Mendeley and Twitter; Twitter is suited to judging the societal impact of the literature, whereas Mendeley is better suited to evaluating its scholarly impact; and the popularity of the different tools varies by region, so when using altmetric indicators one should consider whether their assessment of impact suffers from regional gaps.

13.
Dissertations can be the single most important scholarly outputs of junior researchers. Whilst sets of journal articles are often evaluated with the help of citation counts from the Web of Science or Scopus, these do not index dissertations and so their impact is hard to assess. In response, this article introduces a new multistage method to extract Google Scholar citation counts for large collections of dissertations from repositories indexed by Google. The method was used to extract Google Scholar citation counts for 77,884 American doctoral dissertations from 2013 to 2017 via ProQuest, with a precision of over 95%. Some ProQuest dissertations that were dual indexed with other repositories could not be retrieved with ProQuest-specific searches but could be found with Google Scholar searches of the other repositories. The Google Scholar citation counts were then compared with Mendeley reader counts, a known source of scholarly-like impact data. A fifth of the dissertations had at least one citation recorded in Google Scholar and slightly fewer had at least one Mendeley reader. Based on numerical comparisons, the Mendeley reader counts seem to be more useful for impact assessment purposes for dissertations that are less than two years old, whilst Google Scholar citations are more useful for older dissertations, especially in social sciences, arts and humanities. Google Scholar citation counts may reflect a more scholarly type of impact than that of Mendeley reader counts because dissertations attract a substantial minority of their citations from other dissertations. In summary, the new method now makes it possible for research funders, institutions and others to systematically evaluate the impact of dissertations, although additional Google Scholar queries for other online repositories are needed to ensure comprehensive coverage.

14.
In this paper we present a first large-scale analysis of the relationship between Mendeley readership and citation counts with particular documents’ bibliographic characteristics. A data set of 1.3 million publications from different fields published in journals covered by the Web of Science (WoS) has been analyzed. This work reveals that document types that are often excluded from citation analysis due to their lower citation values, like editorial materials, letters, news items, or meeting abstracts, are strongly covered and saved in Mendeley, suggesting that Mendeley readership can reliably inform the analysis of these document types. Findings show that collaborative papers are frequently saved in Mendeley, which is similar to what is observed for citations. The relationship between readership and the length of titles and number of pages, however, is weaker than for the same relationship observed for citations. The analysis of different disciplines also points to different patterns in the relationship between several document characteristics, readership, and citation counts. Overall, results highlight that although disciplinary differences exist, readership counts are related to similar bibliographic characteristics as those related to citation counts, reinforcing the idea that Mendeley readership and citations capture a similar concept of impact, although they cannot be considered as equivalent indicators.

15.
The h-Index and g-Index: New Indicators for Evaluating the Scholarly Impact of Journals (total citations: 30; self-citations: 0; other citations: 30)
First, using retrieval data from the Chinese Social Sciences Citation Index and taking a sample of library and information science journals as an example, the paper compares the h-index and relative h-index of each journal and their characteristics. Second, using CNKI's family of citation databases, it carries out a comparative study of the g-index of selected library and information science and management journals, finding substantial correlations between the h-index and the relative h-index and between the relative h-index and the impact factor. Finally, it points out that the h-index, relative h-index, and g-index should only be compared among journals of the same kind and should be used with caution in journal evaluation.
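For reference, both indices can be computed directly from a journal's (or author's) list of per-paper citation counts using their standard definitions: h is the largest number such that h papers each have at least h citations, and g is the largest number such that the top g papers together have at least g² citations. The sketch below uses hypothetical data.

```python
# Compute the h-index and g-index from a list of per-paper citation counts,
# following their standard definitions; the sample data are hypothetical.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:   # top g papers hold at least g^2 citations
            g = rank
    return g

journal_citations = [42, 18, 10, 9, 7, 6, 5, 3, 2, 1, 0]
print("h-index:", h_index(journal_citations))  # -> 6
print("g-index:", g_index(journal_citations))  # -> 10
```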

16.
Journal metrics are employed for the assessment of scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIFs) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable across fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behavior across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no optimal fixed impact maturity time valid for all fields: in some, a two-year window performs well, whereas in others three or more years are necessary. This creates a problem when comparing a journal from a field in which impact matures slowly with one from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the previous 2-year window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance relative to the within-group variance in a random sample of about six hundred journals from eight different fields.
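As a hedged reconstruction from the description above (the paper's own notation and the exact range of the rolling window are assumptions here), the two indicators can be written as:

$$\mathrm{2\text{-}JIF}(j,y)=\frac{C_y(j,y-1)+C_y(j,y-2)}{P(j,y-1)+P(j,y-2)}, \qquad \mathrm{2M\text{-}JIF}(j,y)=\max_{k\geq 1}\;\frac{C_y(j,y-k)+C_y(j,y-k-1)}{P(j,y-k)+P(j,y-k-1)}$$

where $C_y(j,t)$ is the number of citations received in year $y$ by the items that journal $j$ published in year $t$, and $P(j,t)$ is the number of citable items $j$ published in year $t$. Taking the maximum over rolling two-year publication windows lets each field be evaluated at its own impact maturity time rather than at a fixed two-year lag.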

17.
The Web impact of open access social science research (total citations: 1; self-citations: 0; other citations: 1)
For a long time, Institute for Scientific Information (ISI) journal citations have been widely used for research performance monitoring of the sciences. For the social sciences, however, the Social Sciences Citation Index® (SSCI®) can sometimes be insufficient. Broader types of publications (e.g., books and non-ISI journals) and informal scholarly indicators may also be needed. This article investigates whether the Web can help to fill this gap. The authors analyzed 1530 citations from Google™ to 492 research articles from 44 open access social science journals. The articles were published in 2001 in the fields of education, psychology, sociology, and economics. About 19% of the Web citations represented formal impact equivalent to journal citations, and 11% were more informal indicators of impact. The average was about 3 formal and 2 informal impact citations per article. Although the proportions of formal and informal online impact were similar in sociology, psychology, and education, economics showed six times more formal impact than informal impact. The results suggest that new types of citation information and informal scholarly indicators could be extracted from the Web for the social sciences. Since these form only a small proportion of the Web citations, however, Web citation counts should first be processed to remove irrelevant citations. This can be a time-consuming process unless automated.

18.
The journal impact factor is not comparable among fields of science and social science because of systematic differences in publication and citation behavior across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential in the journal topic, and we employ this citation potential in the normalization of the journal impact factor to make it comparable between scientific fields. An empirical application comparing some impact indicators with our topic normalized impact factor in a set of 224 journals from four different fields shows that our normalization, using the citation potential in the journal topic, reduces the between-group variance with respect to the within-group variance in a higher proportion than the rest of indicators analyzed. The effect of journal self-citations over the normalization process is also studied.
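A hedged sketch of the normalization idea (the paper's exact weighting scheme may differ): the journal impact factor is divided by the citation potential of the journal's topic, measured as the aggregate impact factor of the journals that cite it:

$$\mathrm{JIF}_{\mathrm{norm}}(j)=\frac{\mathrm{JIF}(j)}{\mathrm{CP}(j)}, \qquad \mathrm{CP}(j)=\sum_{i\in S(j)} w_{ij}\,\mathrm{JIF}(i)$$

where $S(j)$ is the set of journals citing $j$ and $w_{ij}$ is the share of $j$'s incoming citations contributed by journal $i$ (the specific weights are an assumption here). A topic whose citing journals have high impact factors, i.e., a high citation potential, thus has its raw JIF scaled down more strongly, which is what makes the normalized values comparable across fields.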
