Similar Documents
 20 similar documents found (search time: 531 ms)
1.
Previous research has shown that citation data from different types of Web sources can potentially be used for research evaluation. Here we introduce a new combined Integrated Online Impact (IOI) indicator. For a case study, we selected research articles published in the Journal of the American Society for Information Science & Technology (JASIST) and Scientometrics in 2003. We compared the citation counts from Web of Science (WoS) and Scopus with those from five online sources of citation data: Google Scholar, Google Books, Google Blogs, PowerPoint presentations and course reading lists. The mean and median IOI were nearly twice as high as those of both WoS and Scopus, confirming that online citations are sufficiently numerous to be useful for the impact assessment of research. We also found significant correlations between conventional and online impact indicators, confirming that both assess something similar in scholarly communication. Further analysis showed that the overall percentages of unique Google Scholar citations outside WoS were 73% and 60% for the articles published in JASIST and Scientometrics, respectively. An important conclusion is that in subject areas where wider types of intellectual impact indicators beyond the WoS and Scopus databases are needed for research evaluation, IOI can be used to help monitor research performance.
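As a hedged illustration of a combined online-impact score of this kind, the sketch below simply sums citation-like mentions from several online sources for one article. The summation rule and the source names are assumptions for illustration only; the abstract does not give the exact IOI formula.

```python
# A minimal sketch of a combined online-impact score in the spirit of IOI.
# The simple summation below is an illustrative assumption, not the published
# definition of the indicator.

def combined_online_impact(counts: dict) -> int:
    """Sum citation-like mentions from several online sources for one article."""
    return sum(counts.values())

# Hypothetical counts for one article.
article = {"google_scholar": 12, "google_books": 3, "google_blogs": 1,
           "powerpoint": 2, "reading_lists": 4}
print(combined_online_impact(article))  # 22
```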

2.
This paper studies the correlations between peer review and citation indicators when evaluating research quality in library and information science (LIS). Forty-two LIS experts provided judgments on a 5-point scale of the quality of research published by 101 scholars; the median rankings resulting from these judgments were then correlated with h-, g- and H-index values computed using three different sources of citation data: Web of Science (WoS), Scopus and Google Scholar (GS). The two variants of the basic h-index correlated more strongly with peer judgment than did the h-index itself; citation data from Scopus was more strongly correlated with the expert judgments than was data from GS, which in turn was more strongly correlated than data from WoS; correlations from a carefully cleaned version of GS data were little different from those obtained using swiftly gathered GS data; the indices from the citation databases resulted in broadly similar rankings of the LIS academics; GS disadvantaged researchers in bibliometrics compared to the other two citation databases, while WoS disadvantaged researchers in the more technical aspects of information retrieval; and experts from the UK and other European countries rated UK academics more highly than did experts from the USA.
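For readers unfamiliar with the indices compared here, the sketch below computes the h-index and g-index from one author's per-paper citation counts. It is a minimal illustration using the standard definitions, not code from the study.

```python
# h-index: largest h such that h papers have at least h citations each.
# g-index: largest g such that the top g papers together have at least g^2 citations.

def h_index(citations: list[int]) -> int:
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations: list[int]) -> int:
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

cites = [25, 8, 5, 3, 3, 1, 0]      # hypothetical per-paper citation counts
print(h_index(cites), g_index(cites))  # 3 6
```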

3.
Dimensions is a partly free scholarly database launched by Digital Science in January 2018. Dimensions includes journal articles and citation counts, making it a potential new source of impact data. This article explores the value of Dimensions from an impact assessment perspective with an examination of Food Science research 2008–2018 and a random sample of 10,000 Scopus articles from 2012. The results include high correlations between citation counts from Scopus and Dimensions (0.96 by narrow field in 2012) as well as similar average counts. Almost all Scopus articles with DOIs were found in Dimensions (97% in 2012). Thus, the scholarly database component of Dimensions seems to be a plausible alternative to Scopus and the Web of Science for general citation analyses and for citation data in support of some types of research evaluations.

4.
This paper examined the citation impact of Chinese- and English-language articles in Chinese-English bilingual journals indexed by Scopus and Web of Science (WoS). Two findings were obtained from the comparative analysis: (1) Chinese-language articles were not at a citation disadvantage compared with English-language articles, since they received a large number of citations from Chinese scientists; (2) a Chinese-language community was found in Scopus, in which Chinese-language articles mainly received citations from other Chinese-language articles, but no such community was found in WoS, whose coverage of Chinese-language articles is only one-tenth that of Scopus. The findings have implications for the academic evaluation of journals that include Chinese-language articles in Scopus and WoS.

5.
Despite recent evidence that Microsoft Academic is an extensive source of citation counts for journal articles, it is not known if the same is true for academic books. This paper fills this gap by comparing citations to 16,463 books from 2013 to 2016 in the Book Citation Index (BKCI) against automatically extracted citations from Microsoft Academic and Google Books in 17 fields. About 60% of the BKCI books had records in Microsoft Academic, varying by year and field. Citation counts from Microsoft Academic were 1.5 to 3.6 times higher than from BKCI in nine subject areas across all years for books indexed by both. Microsoft Academic found more citations than BKCI because it indexes more scholarly publications and combines citations to different editions and chapters. In contrast, BKCI only found more citations than Microsoft Academic for books in three fields in 2013–2014. Microsoft Academic also found more citations than Google Books in six fields for all years. Thus, Microsoft Academic may be a useful source for the impact assessment of books when comprehensive coverage is not essential.

6.
Many studies demonstrate differences in the coverage of citing publications in Google Scholar (GS) and Web of Science (WoS). Here, we examine to what extent citation data from the two databases reflect the scholarly impact of women and men differently. Our conjecture is that WoS carries an indirect gender bias in its selection criteria for citation sources that GS avoids due to its more inclusive criteria. Using a sample of 1250 U.S. researchers in Sociology, Political Science, Economics, Cardiology and Chemistry, we examine gender differences in the average citation coverage of the two databases. We also calculate database-specific h-indices for all authors in the sample. In repeated simulations of hiring scenarios, we use these indices to examine whether women's appointment rates increase if hiring decisions rely on data from GS in lieu of WoS. We find no systematic gender differences in the citation coverage of the two databases. Further, our results indicate marginal to non-existent effects of database selection on women's success rates in the simulations. In line with the existing literature, we find the citation coverage in WoS to be largest in Cardiology and Chemistry and smallest in Political Science and Sociology. The concordance between author-based h-indices measured by GS and WoS is largest for Chemistry, followed by Cardiology, Political Science, Sociology and Economics.

7.
This article presents the results of a comparative study of Web of Science (WoS), Scopus, and Google Scholar (GS) for a set of 15 business and economics journals. Citations from the three sources were analyzed to determine whether one source is better than another, or whether a new database such as Scopus, or a free database such as GS, could replace WoS. The authors concluded that scholars might want to use alternative tools collectively to get a more complete picture of the scholarly impact of an article.

8.
The findings of Bornmann, Leydesdorff, and Wang (2013b) revealed that the consideration of journal impact improves the prediction of long-term citation impact. This paper further explores the possibility of improving citation impact measurements on the basis of a short citation window by considering journal impact and other variables, such as the number of authors, the number of cited references, and the number of pages. The dataset contains 475,391 journal papers published in 1980 and indexed in Web of Science (WoS, Thomson Reuters), together with all annual citation counts (from 1980 to 2010) for these papers. As an indicator of citation impact, we used percentiles of citations calculated using the approach of Hazen (1914). Our results show that citation impact measurement can indeed be improved: if factors generally influencing citation impact are considered in the statistical analysis, the explained variance in long-term citation impact increases considerably. However, this increase is only visible when citation windows shortly after publication are used, not in later years.
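As a hedged illustration of the Hazen (1914) percentile approach mentioned above, the sketch below assigns the i-th smallest citation count in a reference set of n values the percentile 100 · (i − 0.5) / n. Averaging the percentiles of tied citation counts is a simplifying assumption, not necessarily the exact tie treatment used in the study.

```python
# Hazen (1914) percentiles for a reference set of citation counts.
from collections import defaultdict

def hazen_percentiles(citations: list[int]) -> list[float]:
    n = len(citations)
    positions = defaultdict(list)
    # Each value's Hazen percentile: 100 * (rank - 0.5) / n in ascending order.
    for i, value in enumerate(sorted(citations), start=1):
        positions[value].append(100.0 * (i - 0.5) / n)
    # Tied citation counts share the mean of their percentiles (assumption).
    percentile = {value: sum(p) / len(p) for value, p in positions.items()}
    return [percentile[c] for c in citations]

print(hazen_percentiles([0, 2, 2, 5, 30]))  # [10.0, 40.0, 40.0, 70.0, 90.0]
```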

9.
Bibliometricians have long relied on citation counts to measure the impact of publications on the advancement of science. However, since the earliest days of the field, some scholars have questioned whether all citations should be worth the same, and have gone on to weight them by a variety of factors. However sophisticated the operationalization of the measures, the methodologies used to weight citations are still limited by their underlying assumptions. This work takes an alternative approach to resolving the underlying problem: the proposal is to value citations by the impact of the citing articles, regardless of the length of their reference lists. As well as conceptualizing a new indicator of impact, the work illustrates its application to the 2004–2012 Italian scientific production indexed in the WoS. The proposed impact indicator is highly correlated with the traditional citation count; however, the shifts observed between the two measures are frequent and the number of outliers is not negligible. Moreover, the new indicator shows greater "sensitivity" when used to identify highly cited papers.
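The sketch below illustrates the general idea of valuing each citation by the impact of the citing article instead of counting all citations equally. Using the citing paper's own citation count as its "impact" is an illustrative assumption; the paper's exact operationalization may differ.

```python
# Weight each received citation by the citation count of the citing paper,
# with no division by the length of the citing paper's reference list.

def weighted_citation_impact(citing_paper_citations: list[int]) -> float:
    """Each citation contributes the citation count of the paper that made it."""
    return float(sum(citing_paper_citations))

# Two papers, each cited 3 times: plain counts are equal, weighted impact is not.
paper_a = [50, 10, 5]   # cited by three well-cited papers
paper_b = [1, 0, 0]     # cited by three barely-cited papers
print(weighted_citation_impact(paper_a), weighted_citation_impact(paper_b))  # 65.0 1.0
```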

10.
Identifying the future influential papers among the newly published ones is an important yet challenging issue in bibliometrics. As newly published papers have no or limited citation history, linear extrapolation of their citation counts—which is motivated by the well-known preferential attachment mechanism—is not applicable. We translate the recently introduced notion of discoverers to the citation network setting, and show that there are authors who frequently cite recent papers that become highly-cited in the future; these authors are referred to as discoverers. We develop a method for early identification of highly-cited papers based on the early citations from discoverers. The results show that the identified discoverers have a consistent citing pattern over time, and the early citations from them can be used as a valuable indicator to predict the future citation counts of a paper. The discoverers themselves are potential future outstanding researchers as they receive more citations than average.
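As a rough sketch of the discoverer idea described above, the code below flags authors whose early citations often go to papers that later become highly cited, and then scores a new paper by its early citations from those authors. The threshold and scoring rule are illustrative assumptions, not the paper's exact method.

```python
from collections import Counter

# Hypothetical early-citation records: (citing author, cited paper id).
early_citations = [("author_a", "p1"), ("author_a", "p2"),
                   ("author_b", "p1"), ("author_c", "p3")]
# Papers that eventually became highly cited (known in hindsight for calibration).
later_highly_cited = {"p1", "p2"}

# Score authors by how many of their early-cited papers became highly cited.
hits = Counter(a for a, p in early_citations if p in later_highly_cited)
discoverers = {a for a, n in hits.items() if n >= 2}   # illustrative threshold

def early_score(new_paper_citers: list[str]) -> int:
    """Early-warning score: number of early citations from discoverers."""
    return sum(1 for a in new_paper_citers if a in discoverers)

print(discoverers, early_score(["author_a", "author_d"]))  # {'author_a'} 1
```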

11.
Articles are cited for different purposes, and differentiating between reasons when counting citations may therefore give finer-grained citation count information. Although identifying and aggregating the individual reasons for each citation may be impractical, recording the number of citations that originate from different article sections might illuminate the general reasons behind a citation count (e.g., 110 citations = 10 Introduction citations + 100 Methods citations). To help investigate whether this could be a practical and universal solution, this article compares 19 million citations with DOIs from six different standard sections in 799,055 PubMed Central open access articles across 21 out of 22 fields. There are apparently non-systematic differences between fields in the sections that cite most and in the extent to which citations from one section overlap with citations from another, with some degree of overlap in most cases. Thus, at a science-wide level, section headings are partly unreliable indicators of citation context, even if they are more standard within individual fields. They may still be used within fields to help identify individual highly cited articles that have had one type of impact, especially methodological (Methods) or context setting (Introduction), but expert judgement is needed to validate the results.
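A minimal sketch of the section-level bookkeeping behind the example above (110 citations = 10 Introduction + 100 Methods): tally a cited article's citations by the section of the citing article in which they appear. The DOIs and section labels below are hypothetical.

```python
from collections import Counter

# Each record: (cited DOI, section of the citing article containing the citation).
citation_records = [
    ("10.1000/xyz", "Introduction"),
    ("10.1000/xyz", "Methods"),
    ("10.1000/xyz", "Methods"),
    ("10.1000/abc", "Discussion"),
]

# Section-level citation profile of one cited article.
by_section = Counter(section for doi, section in citation_records
                     if doi == "10.1000/xyz")
print(by_section)  # Counter({'Methods': 2, 'Introduction': 1})
```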

12.
Examining a comprehensive set of papers (n = 1837) that were accepted for publication by the journal Angewandte Chemie International Edition (one of the prime chemistry journals in the world) or rejected by the journal but then published elsewhere, this study tested the extent to which the use of the freely available database Google Scholar (GS) can be expected to yield valid citation counts in the field of chemistry. Analyses of citations for the set of papers returned by three fee-based databases – Science Citation Index, Scopus, and Chemical Abstracts – were compared to the analysis of citations found using GS data. Whereas the analyses using citations returned by the three fee-based databases show very similar results, the results of the analysis using GS citation data differed greatly from the findings using citations from the fee-based databases. Our study therefore supports, on the one hand, the convergent validity of citation analyses based on data from the fee-based databases and, on the other hand, the lack of convergent validity of the citation analysis based on the GS data.

13.
In citation network analysis, complex behavior is reduced to a simple edge, namely, node A cites node B. The implicit assumption is that A is giving credit to, or acknowledging, B. It is also the case that the contributions of all citations are treated equally, even though some citations appear multiple times in a text and others appear only once. In this study, we apply text-mining algorithms to a relatively large dataset (866 information science articles containing 32,496 bibliographic references) to demonstrate the differential contributions made by references. We (1) look at the placement of citations across the different sections of a journal article, and (2) identify highly cited works using two different counting methods (CountOne and CountX). We find that (1) the most highly cited works appear in the Introduction and Literature Review sections of citing papers, and (2) the citation rankings produced by CountOne and CountX differ. That is to say, counting the number of times a bibliographic reference is cited in a paper, rather than treating all references the same no matter how many times they are invoked in the citing article, reveals the differential contributions made by the cited works to the citing paper.
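The sketch below illustrates the two counting methods contrasted above: CountOne credits a cited work once per citing paper, while CountX credits it once per in-text mention. The reference identifiers and mention counts are hypothetical.

```python
from collections import Counter

# Hypothetical in-text citation mentions per citing paper: reference id -> mentions.
paper1 = {"Garfield1955": 3, "Small1973": 1}
paper2 = {"Garfield1955": 1, "Merton1968": 2}

def count_one(papers):
    """Each reference counted once per citing paper, regardless of mentions."""
    totals = Counter()
    for mentions in papers:
        totals.update({ref: 1 for ref in mentions})
    return totals

def count_x(papers):
    """Each reference counted once per in-text mention."""
    totals = Counter()
    for mentions in papers:
        totals.update(mentions)
    return totals

print(count_one([paper1, paper2]))  # Garfield1955: 2, Small1973: 1, Merton1968: 1
print(count_x([paper1, paper2]))    # Garfield1955: 4, Small1973: 1, Merton1968: 2
```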

14.
For comparisons of citation impacts across fields and over time, bibliometricians normalize the observed citation counts with reference to an expected citation value. Percentile-based approaches have been proposed as a non-parametric alternative to parametric central-tendency statistics. Percentiles are based on an ordered set of citation counts in a reference set, whereby the fraction of papers at or below the citation count of a focal paper is used as an indicator of its relative citation impact in the set. In this study, we pursue two related objectives: (1) although different percentile-based approaches have been developed, no approach has hitherto satisfied a number of criteria, such as scaling of the percentile ranks from zero (all other papers perform better) to 100 (all other papers perform worse) and unambiguous handling of tied citation ranks; we introduce a new citation-rank approach with these properties, named P100. (2) We compare the reliability of P100 empirically with other percentile-based approaches, such as those developed by the SCImago group, the Centre for Science and Technology Studies (CWTS), and Thomson Reuters (InCites), using all papers published in 1980 in Thomson Reuters Web of Science (WoS). How accurately can the different approaches predict the long-term citation impact in 2010 (in year 31) using citation impact measured in earlier time windows (years 1–30)? The comparison shows that the method used by InCites overestimates citation impact (because it uses the highest percentile rank when papers are assigned to more than one subject category), whereas the SCImago indicator shows higher power in predicting long-term citation impact on the basis of citation rates in early years. Since the results show a disadvantage in this predictive ability for P100 compared with the other approaches, there is still room for further improvement.
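The sketch below shows the generic percentile-rank idea described above: the fraction of papers in the reference set with citation counts at or below the focal paper's count, scaled to 0–100. P100's specific treatment of tied citation ranks is not reproduced here; this is only the basic principle.

```python
# Generic percentile rank of a focal paper within a reference set.

def percentile_rank(focal_citations: int, reference_set: list[int]) -> float:
    at_or_below = sum(1 for c in reference_set if c <= focal_citations)
    return 100.0 * at_or_below / len(reference_set)

reference = [0, 1, 1, 3, 8, 20]        # hypothetical reference-set citation counts
print(percentile_rank(3, reference))   # ~66.7 (4 of 6 papers at or below)
print(percentile_rank(20, reference))  # 100.0
```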

15.
苏林伟  于霜  许鑫  赵星 《图书情报工作》2015,59(19):100-107
[Purpose/Significance] Given the limitations of the three single-source databases Scopus, Web of Science (WoS) and Google Scholar (GS) in bibliometric research and applications, developing composite citation analysis methods that integrate multi-source data can provide a complementary approach for bibliometric analysis. [Method/Process] Taking the total citation count of a journal as the underlying parameter, and using 2009-2014 data on international library and information science journals as the empirical basis, this paper compares composite multi-source citation methods based on the arithmetic mean, the geometric mean and the harmonic mean. [Results/Conclusions] The results show that although the arithmetic, geometric and harmonic means differ, they are linearly related, so any one of them can be chosen in practice; Bradford's law was not confirmed between the cumulative arithmetic/geometric/harmonic means and the number of journals, but within three zones containing equal numbers of journals the cumulative mean citations follow an empirical distribution of the form n²:n:1; the differences between journal rankings based on composite multi-source citations and those based on a single database are small at both ends and large in the middle; journal rankings by arithmetic mean are closest to GS rankings, rankings by geometric mean are most similar to Scopus, and rankings by harmonic mean differ least from WoS.
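The sketch below shows the three composite multi-source citation indicators compared above: the arithmetic, geometric and harmonic means of one journal's total citation counts in Scopus, WoS and GS. The counts are hypothetical.

```python
from statistics import fmean, geometric_mean, harmonic_mean

# Hypothetical total citation counts for one journal in three databases.
journal_counts = {"Scopus": 1200, "WoS": 900, "GS": 2600}
values = list(journal_counts.values())

print(fmean(values))           # arithmetic mean: ~1566.7
print(geometric_mean(values))  # geometric mean:  ~1411
print(harmonic_mean(values))   # harmonic mean:   ~1288
```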

16.
Dissertations can be the single most important scholarly outputs of junior researchers. Whilst sets of journal articles are often evaluated with the help of citation counts from the Web of Science or Scopus, these do not index dissertations and so their impact is hard to assess. In response, this article introduces a new multistage method to extract Google Scholar citation counts for large collections of dissertations from repositories indexed by Google. The method was used to extract Google Scholar citation counts for 77,884 American doctoral dissertations from 2013 to 2017 via ProQuest, with a precision of over 95%. Some ProQuest dissertations that were dual indexed with other repositories could not be retrieved with ProQuest-specific searches but could be found with Google Scholar searches of the other repositories. The Google Scholar citation counts were then compared with Mendeley reader counts, a known source of scholarly-like impact data. A fifth of the dissertations had at least one citation recorded in Google Scholar and slightly fewer had at least one Mendeley reader. Based on numerical comparisons, the Mendeley reader counts seem to be more useful for impact assessment purposes for dissertations that are less than two years old, whilst Google Scholar citations are more useful for older dissertations, especially in social sciences, arts and humanities. Google Scholar citation counts may reflect a more scholarly type of impact than that of Mendeley reader counts because dissertations attract a substantial minority of their citations from other dissertations. In summary, the new method now makes it possible for research funders, institutions and others to systematically evaluate the impact of dissertations, although additional Google Scholar queries for other online repositories are needed to ensure comprehensive coverage.

17.
In this paper we present a first large-scale analysis of the relationship of Mendeley readership and citation counts with documents' bibliographic characteristics. A data set of 1.3 million publications from different fields published in journals covered by the Web of Science (WoS) has been analyzed. This work reveals that document types that are often excluded from citation analysis due to their lower citation values, such as editorial materials, letters, news items, or meeting abstracts, are strongly covered and saved in Mendeley, suggesting that Mendeley readership can reliably inform the analysis of these document types. Findings show that collaborative papers are frequently saved in Mendeley, which is similar to what is observed for citations. The relationship between readership and the length of titles and number of pages, however, is weaker than the corresponding relationship observed for citations. The analysis of different disciplines also points to different patterns in the relationship between several document characteristics, readership, and citation counts. Overall, the results highlight that although disciplinary differences exist, readership counts are related to similar bibliographic characteristics as citation counts, reinforcing the idea that Mendeley readership and citations capture a similar concept of impact, although they cannot be considered equivalent indicators.

18.
Many journals post accepted articles online before they are formally published in an issue. Early citation impact evidence for these articles could be helpful for timely research evaluation and to identify potentially important articles that quickly attract many citations. This article investigates whether Microsoft Academic can help with this task. For over 65,000 Scopus in-press articles from 2016 and 2017 across 26 fields, Microsoft Academic found 2–5 times as many citations as Scopus, depending on year and field. From manual checks of 1122 Microsoft Academic citations not found in Scopus, Microsoft Academic's citation indexing was faster but not much wider than Scopus for journals. It achieved this by associating citations to preprints with their subsequent in-press versions and by extracting citations from in-press articles. In some fields its coverage of scholarly digital libraries, such as arXiv.org, was also an advantage. Thus, Microsoft Academic seems to be a more comprehensive automatic source of citation counts for in-press articles than Scopus.

19.
The purpose of this research is to investigate the current state and trend of government website information cited by social science and humanities (SS&H) journal articles in China. The Chinese Social Science Citation Index (CSSCI) was used as the benchmark and the Social Science Citation Index (SSCI) journals as the reference sample. The study analyzed 204,019 web citations found among 5,063,237 citations in 925,506 articles published in CSSCI journals during the 1998–2009 period. The findings reveal that web citations accounted for only 4.03% of all citations (N = 5,063,237), and that citations of Chinese government websites constituted 6.6% of all web citations (N = 204,019). The study disclosed detailed information regarding citations derived from the websites of ministries and commissions directly under the State Council (N = 69), government online media (N = 7), government website citation subjects (N = 21), and various types of government website information (N = 5). Although government website information has limited influence on SS&H, its impact is currently growing rapidly. In comparison with the international research community, the influence of government web information on Chinese social science is higher, while its influence on the humanities is lower. Essentially, Chinese scholars emphasize citing information from authoritative central government websites or highly visible state-owned media as supporting evidence in their articles. In general, citations of Chinese government website information tend to concern hot social issues. Finally, it is necessary to promote the visibility of local government websites and to develop policies and guidelines encouraging the disclosure and diversity of data, so that citations are better balanced between social and technological topics.

20.
This study provides a conceptual overview of the literature dealing with the process of citing documents (focusing on the literature from the recent decade). It presents theories that have been proposed to explain the citation process, and studies that have empirically analyzed this process. The overview is referred to as conceptual because it is structured around core elements of the citation process: the context of the cited document, processes from selection to citation of documents, and the context of the citing document. These core elements are presented in a schematic representation. The overview can be used to find answers to basic questions about the practice of citing documents. Besides aiding understanding of the citing process, it provides basic information for the proper application of citations in research evaluation.

