Similar Documents
20 similar documents retrieved; search time: 515 ms
1.
苏芳荔 《图书情报工作》2011,55(10):144-148
Taking the papers published between 2000 and 2009 in the four most influential library and information science journals, together with their citation counts, as the sample, this study applies the sign test and correlation analysis to examine how research collaboration affects the citation counts of journal papers, from the two aspects of collaboration mode and collaboration frequency. The findings are: (1) collaboratively authored papers have a markedly higher impact than single-authored (non-collaborative) papers; (2) in terms of citations received, international collaboration is not superior to domestic collaboration, and universities are not superior to research institutes; (3) an institution's number of collaborations is positively and linearly correlated with its citation counts, but its collaboration frequency shows no significant correlation with citations per paper.

2.
[Purpose/Significance] In research evaluation, field-normalized indicators computed under a short citation window are often unreliable, because recently published papers have had little time to accumulate citations. Normalization methods themselves cannot solve this problem. This study aims to address this difficulty in research evaluation. [Method/Process] A weighting factor is introduced to represent the reliability of each paper's normalized score; the weight is derived from the paper's citation count in the given short window and the long...

3.
Main path analysis is a popular method for extracting the backbone of scientific evolution from a (paper) citation network. The first and core step of main path analysis, called search path counting, is to weight citation arcs by the number of scientific influence paths from old to new papers. Search path counting shows high potential for scientific impact evaluation because of its semantic similarity to the meaning of a scientific impact indicator, i.e., how many papers are influenced, and to what extent. In addition, the algorithmic idea of search path counting resembles many known indirect citation impact indicators. Inspired by these observations, this paper presents the FSPC (Forward Search Path Count) framework as an alternative scientific impact indicator based on indirect citations. Two critical assumptions are made to ensure the effectiveness of FSPC. First, knowledge decay is introduced so that the weight of a scientific influence path decreases with its length. Second, path capping is introduced to mimic human literature search and citing behavior. Through experiments on two well-studied datasets against two carefully created gold-standard sets of papers, we demonstrate that FSPC achieves surprisingly good performance not only in recognizing high-impact papers but also in identifying undercited papers.
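The search-path-counting idea at the heart of FSPC can be illustrated in a few lines. The sketch below only demonstrates the two assumptions named in the abstract (knowledge decay and path capping) on a toy citation network; the function name, the decay factor of 0.5, and the length cap of 3 are hypothetical, and the exact weighting used by FSPC may differ.

```python
def forward_search_path_count(citers, decay=0.5, max_len=3):
    """Weighted count of forward influence paths starting at each paper.

    `citers` maps a paper to the papers that cite it (arc: old -> new).
    A path of length L contributes decay**(L - 1); paths longer than
    `max_len` are ignored (path capping). This is a sketch only.
    """
    def count_from(paper, remaining):
        if remaining == 0:
            return 0.0
        total = 0.0
        for c in citers.get(paper, ()):
            # the direct arc is a length-1 path (weight 1); longer paths
            # continuing through c are discounted by the decay factor
            total += 1.0 + decay * count_from(c, remaining - 1)
        return total

    papers = set(citers) | {c for cs in citers.values() for c in cs}
    return {p: count_from(p, max_len) for p in papers}

# toy citation network: p1 is cited by p2 and p3, p2 by p3, p3 by p4
network = {"p1": ["p2", "p3"], "p2": ["p3"], "p3": ["p4"]}
print(forward_search_path_count(network))
```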

4.
[目的/意义]对科学计量研究中计数方法的相关概念进行界定,构建计数方法分类体系,梳理比较计数方法的特征和差异,分析现存问题并提出未来改进的方向和选择计数方法的建议。[方法/过程] 首先概括计数方法的组成要素和使用流程,从信誉值分配的角度提出计数方法分类的两个要素,将计数方法分为全计数法与分数计数法两大类,并对各方法进行概述;以全计数与分数计数法的等权算法--full counts与fractional counts为例,从论文指标、引文指标、网络指标3个视角,比较计数方法的差异。[结果/结论] 文章对于全计数与分数计数方法的优劣势、计数单元与计数对象的一致性、信誉值分配规则合理性、网络影响力测度4个方面的问题进行了思考,指出在未来上述4个方面进一步研究的方向。  相似文献   

5.
The normalized citation indicator may not be sufficiently reliable when a short citation time window is used, because the citation counts of recently published papers are not as reliable as those of papers published many years ago. Within a limited time period, recent publications usually have had insufficient time to accumulate citations, so their citation counts are not reliable enough to be used in citation impact indicators. Normalization methods themselves cannot solve this problem. To address it, we introduce a weighting factor into the commonly used normalization indicator, the Category Normalized Citation Impact (CNCI), at the paper level. The weighting factor, calculated as the correlation coefficient between citation counts of papers in the given short citation window and those in a fixed long citation window, reflects the degree of reliability of a paper's CNCI value. To verify the effect of the proposed weighted CNCI indicator, we compared the CNCI scores and CNCI rankings of 500 universities before and after introducing the weighting factor. The results show that although the scores before and after introducing the weighting factor are strongly positively correlated, the performance and rankings of some universities change dramatically.
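A minimal sketch of the weighting idea, under the assumption that the reliability weight is the Pearson correlation between short-window and long-window citation counts of a set of comparable papers and that it is applied multiplicatively to each paper's CNCI; the exact combination used in the study may differ, and the function and variable names are hypothetical.

```python
import numpy as np

def weighted_cnci(cnci, short_window_cites, long_window_cites):
    """Scale CNCI values by a reliability weight for the short citation window."""
    short = np.asarray(short_window_cites, dtype=float)
    long_ = np.asarray(long_window_cites, dtype=float)
    # Pearson correlation across the paper set: the closer short-window
    # counts track long-window counts, the more reliable the short window.
    weight = np.corrcoef(short, long_)[0, 1]
    return weight * np.asarray(cnci, dtype=float)

# toy example: three papers from the same field and year
print(weighted_cnci(cnci=[1.2, 0.4, 2.0],
                    short_window_cites=[3, 1, 6],
                    long_window_cites=[12, 5, 30]))
```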

6.
The number of clinical citations received from clinical guidelines or clinical trials is considered one of the most appropriate indicators for quantifying the clinical impact of biomedical papers. Early prediction of the clinical citation count of biomedical papers is therefore critical to scientific activities in biomedicine, such as research evaluation, resource allocation, and clinical translation. In this study, we designed a four-layer multilayer perceptron neural network (MPNN) model to predict the future clinical citation count of biomedical papers, using 9,822,620 biomedical papers published from 1985 to 2005. We extracted ninety-one paper features from three dimensions as the input of the model: twenty-one features in the paper dimension, thirty-five in the reference dimension, and thirty-five in the citing-paper dimension. In each dimension, the features fall into three categories: citation-related, clinical translation-related, and topic-related features. In the paper dimension, we also included features previously shown to be related to the citation counts of research papers. The results show that the proposed MPNN model outperforms the five baseline models, and that the features in the reference dimension are the most important. In all three dimensions, the citation-related and topic-related features are more important for the prediction than the clinical translation-related features. It also turns out that features helpful in predicting the citation counts of papers are not important for predicting the clinical citation counts of biomedical papers. Furthermore, we explored the MPNN model on different categories of biomedical papers. The results show that the clinical translation-related features are more important for predicting the clinical citation counts of basic papers than for papers closer to clinical science. This study provides a novel dimension (the reference dimension) for the research community and could be applied to related research tasks, such as research assessment for translational programs. In addition, the findings could help biomedical authors (especially those in basic science) attract more attention from clinical research.
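As a rough sketch of the modelling setup, the snippet below fits a four-layer multilayer perceptron (input, two hidden layers, output) to a 91-dimensional feature vector, matching the feature dimensionality described above but using synthetic data and illustrative layer sizes; it does not reproduce the features or hyperparameters of the cited study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_papers = 1000
# 21 paper-dimension + 35 reference-dimension + 35 citing-paper-dimension
# features = 91 inputs per paper (synthetic stand-ins here)
X = rng.normal(size=(n_papers, 21 + 35 + 35))
y = rng.poisson(lam=3, size=n_papers).astype(float)  # stand-in clinical citation counts

# input + two hidden layers + output = a four-layer perceptron
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32),
                                   max_iter=500, random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))
```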

7.
8.
With the advancement of science and technology, the number of academic papers published each year has increased almost exponentially. While the large number of research papers highlights the prosperity of science and technology, it also gives rise to some problems. Academic papers are the most direct embodiment of scholars' research results: they reflect the level of researchers and serve as a standard for evaluation and decisions about them, such as promotion and the allocation of funds. How to measure the quality of an academic paper is therefore critical. The most common standard for measuring the quality of academic papers is their citation count, as this indicator is widely used in the evaluation of scientific publications and also serves as the basis for many other indicators (such as the h-index). It is therefore very important to be able to accurately predict the citation counts of academic papers. To improve the effectiveness of citation count prediction, we approach the problem from the perspective of information cascade prediction and take advantage of deep learning techniques. We propose an end-to-end deep learning framework (DeepCCP) consisting of a graph structure representation module and a recurrent neural network module. DeepCCP directly uses the citation network formed in the early stage of a paper as input and outputs the citation count of the corresponding paper after a period of time. It exploits only the structural and temporal information of the citation network and requires no additional information. Experiments on two real academic citation datasets show that DeepCCP is superior to state-of-the-art methods in terms of the accuracy of citation count prediction.

9.
Based on the ESI database, which indexes high-level papers, this paper analyzes and compares the basic disciplines selected at the first group of "985 Project" universities, using the worldwide citation rankings of the selected disciplines and of each university's disciplines as a whole, together with quantitative data such as the number of indexed papers, citations per paper, and highly cited papers, and compares these with the corresponding indicators of the eight early Ivy League universities in the United States, in order to objectively evaluate their disciplinary research characteristics and academic impact and to offer suggestions for building "world-class" universities.

10.
Research evaluation based on bibliometrics is prevalent in modern science. However, the usefulness of citation counts for measuring research impact has been questioned for many years. Empirical studies have demonstrated that the probability of being cited may depend on many factors that are unrelated to the accepted conventions of scholarly publishing. The current study investigates the relationship between the performance of universities in terms of field-normalized citation impact (NCS) and four factors with possible influences on the citation impact of single papers (FICs): journal impact factor (JIF), number of pages, number of authors, and number of cited references. The study is based on articles and reviews published by 49 German universities in 2000, 2005 and 2010. Multilevel regression models were estimated, since the data are structured on two levels: the single paper and the university. The results point to weak relationships between NCS and the number of authors, number of cited references, number of pages, and JIF, and the effects of all FICs on NCS are similar in universities with high and low NCS. Although other studies have shown that FICs can be effective at the single-paper level, the results of this study demonstrate that they are not effective at the aggregated level (i.e., at the level of institutional NCS).
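A minimal sketch of such a two-level model using statsmodels, with papers (level 1) nested in universities (level 2) via a random intercept. The data are simulated and the column names are hypothetical, so the snippet only illustrates the model structure, not the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
papers = pd.DataFrame({
    "university": rng.integers(0, 49, n),   # 49 universities
    "jif":        rng.gamma(2.0, 2.0, n),   # journal impact factor
    "pages":      rng.integers(4, 30, n),
    "authors":    rng.integers(1, 12, n),
    "refs":       rng.integers(10, 80, n),
})
# simulated field-normalized citation scores, weakly related to the FICs
papers["ncs"] = np.exp(0.02 * papers["jif"] + 0.01 * papers["authors"]
                       + rng.normal(0, 0.5, n))

# random intercept per university captures the level-2 grouping
model = smf.mixedlm("ncs ~ jif + pages + authors + refs",
                    papers, groups=papers["university"])
print(model.fit().summary())
```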

11.
Wenli Gao 《期刊图书馆员》2016,70(1-4):121-127
This article outlines a methodology for generating a list of local core journal titles through citation analysis and details the process of retrieving and downloading data from Scopus. It analyzes correlations among citation count, journal rankings, and journal usage. The results reveal significant correlations between journal rankings and journal usage; no correlation with citation count was found. Limitations and implications for collection development and outreach are also discussed.

12.
《Journal of Informetrics》2019,13(2):485-499
With the growing number of scientific papers published world-wide, the need for evaluation and quality-assessment methods for research papers is increasing. Scientific fields such as scientometrics, informetrics, and bibliometrics establish quantitative analysis methods and measurements for evaluating scientific papers. In this area, an important problem is predicting the future influence of a published paper; in particular, early discrimination between influential and insignificant papers may find important applications. One of the most important metrics in this regard is the number of citations to the paper, since this metric is widely used in the evaluation of scientific publications and serves as the basis for many other metrics such as the h-index. In this paper, we propose a novel method for predicting the long-term citations of a paper based on the number of citations it receives in the first few years after publication. To train the citation count prediction model, we employed an artificial neural network, a powerful machine learning tool with growing applications in many domains including image and text processing. Empirical experiments show that the proposed method outperforms state-of-the-art methods in prediction accuracy, for both yearly and total citation counts.
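For illustration only, the snippet below trains a small feed-forward network to map citation counts from the first three years after publication to a longer-term total, and reports held-out R². The data are synthetic and the architecture is not the one proposed in the cited paper.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
early = rng.poisson(lam=2, size=(800, 3)).astype(float)       # citations in years 1-3
total = early.sum(axis=1) * rng.uniform(2.0, 4.0, size=800)   # stand-in long-term counts

X_train, X_test, y_train, y_test = train_test_split(early, total, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("held-out R^2:", r2_score(y_test, net.predict(X_test)))
```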

13.
[Purpose/Significance] Citation count has long been an important indicator for quantitatively evaluating the academic impact of a paper. However, papers published in different disciplines and different years show large differences in citation counts due to factors such as the number of research papers in the field and citation lag, so when comparing two papers it is difficult to judge their impact simply from the absolute value of their citation counts. This paper therefore designs a new computable mathematical model that gives every paper a standardized indicator, enabling direct comparison of the academic impact of papers published in different disciplines and years. [Method/Process] By analyzing the distribution of citation counts of papers in each discipline in Chinese scientific and technical journals in 2006 and 2017, and under the premise that the citation distribution within a discipline is closest to a log-normal distribution, a standardized citation index, the Paper Citation Standardized Index (PCSI), is proposed. Finally, taking the papers selected as excellent scientific journal papers by the China Association for Science and Technology as an example, an empirical comparison is made between these papers and all papers in their disciplines. [Result/Conclusion] The results show that PCSI standardizes citation counts across years and disciplines, reflects linear differences in citation counts, and is a relatively ideal tool for comparing and evaluating the academic impact of individual papers.
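A hedged sketch of the standardization idea: given the abstract's premise that citation counts within a discipline are closest to a log-normal distribution, one natural standardized score is a z-score of log-transformed citation counts against all papers of the same discipline and publication year. The actual PCSI formula may differ; the function name and the log1p transform are assumptions made here for illustration.

```python
import numpy as np

def pcsi_like_score(paper_cites, field_year_cites):
    """z-score of log citation counts against same-discipline, same-year papers."""
    logs = np.log1p(np.asarray(field_year_cites, dtype=float))
    mu, sigma = logs.mean(), logs.std(ddof=1)
    return (np.log1p(float(paper_cites)) - mu) / sigma

# toy example: a paper with 40 citations in a field-year with mostly low counts
field_year = [0, 1, 1, 2, 3, 5, 8, 13, 20, 40]
print(pcsi_like_score(40, field_year))
```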


15.
[Purpose/Significance] By analyzing the evolution of authors' citing behavior over 60 years, this study seeks to understand the limitations of citation-based evaluation and to promote the development and improvement of methods for evaluating academic papers. [Method/Process] The 3,314 references and 5,222 in-text citations of 280 papers published in representative physics and philosophy journals between 1957 and 2017 were identified, judged, and counted to characterize citing behavior in different periods, and the possible effects of the evolving citing behavior on citation-based evaluation are discussed. [Result/Conclusion] The findings are as follows. First, journal papers show no obvious change in the document types or age distribution of their references, but clear trends exist in the number of references per paper, the average number of in-text mentions per reference, citation identity, and citation depth. Second, these changes in citing behavior call into question the use of citation analysis as a basis for evaluating academic papers: the growth in references per paper and the declining shares of deep and negative citations weaken the reference value of citation-based evaluation.

16.
The numerical-algorithmic procedures of fractional counting and field normalization are often mentioned as indispensable requirements for bibliometric analyses. Against the background of the increasing importance of statistics in bibliometrics, a multilevel Poisson regression model (level 1: publication, level 2: author) shows possible ways to consider fractional counting and field normalization within a statistical model (fractional counting I). However, because this approach assumes duplicate publications in the data set, it is not quite optimal. Therefore, a more advanced approach, a multilevel multiple membership model, is proposed that no longer requires duplicates (fractional counting II). The assumption is that citation impact can essentially be attributed to time-stable dispositions of researchers as authors, who contribute with different fractions to a publication's citation success. The two approaches are applied to bibliometric data for 254 scientists working in social science methodology. A major advantage of fractional counting II is that the results no longer depend on the type of fractional counting (e.g., equal weighting). Differences between authors in rankings are reproduced more clearly than on the basis of percentiles. In addition, the strong importance of field normalization is demonstrated: 60% of the citation variance is explained by field normalization.
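The two numerical procedures named above can be illustrated outside any regression model: field normalization divides a paper's citation count by the mean of its field and year, and equal-weight fractional counting splits that normalized score among the paper's authors. The sketch below shows only this bookkeeping on toy data with hypothetical column names; the multilevel Poisson and multiple-membership models of the study go well beyond it.

```python
import pandas as pd

papers = pd.DataFrame({
    "paper":   ["p1", "p2", "p3", "p4"],
    "field":   ["soc", "soc", "soc", "phys"],
    "year":    [2015, 2015, 2015, 2015],
    "cites":   [10, 2, 6, 30],
    "authors": [["A", "B"], ["B"], ["A", "C"], ["C", "D", "E"]],
})

# field normalization: citations relative to the field-year mean
papers["norm"] = papers["cites"] / papers.groupby(["field", "year"])["cites"].transform("mean")

# equal-weight fractional counting: split each paper's normalized score among its authors
credit = (papers.explode("authors")
                .assign(frac=lambda d: 1.0 / d.groupby("paper")["authors"].transform("count"))
                .assign(score=lambda d: d["norm"] * d["frac"])
                .groupby("authors")["score"].sum())
print(credit)
```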

17.
Greater collaboration generally produces higher category normalised citation impact (CNCI) and more influential science. Citation differences between domestic and international collaborative articles are known, but they are obscured in analyses of countries' CNCIs, compromising evaluation insights. Here, we address this problem by deconstructing and distinguishing domestic and international collaboration types to explore differences in article citation rates between collaboration types and countries. Using Web of Science article data covering 2009–2018, we find that individual countries' citation and CNCI profiles vary significantly between collaboration types (e.g., domestic single-institution and international bilateral) and credit counting methods (full and fractional). The 'boosting' effect of international collaboration is greatest where total research capacity is smallest, which could mislead the interpretation of performance for policy and management purposes. By incorporating collaboration type into the CNCI calculation, we define a new metric labelled Collab-CNCI, which accounts for collaboration effects without presuming credit (as fractional counting does). We recommend that analysts: (1) partition all article datasets so that citation counts can be normalised by collaboration type (Collab-CNCI), enabling improved interpretation for research policy and management; and (2) consider filtering smaller entities out of multinational and multi-institutional analyses where their inclusion is likely to obscure interpretation.
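A minimal sketch of the Collab-CNCI idea under the simplifying assumption that only field, year, and collaboration type define the reference set (the real CNCI also conditions on document type): each article's citation count is divided by the mean of articles in the same field, year, and collaboration type. Column names and values are illustrative only.

```python
import pandas as pd

articles = pd.DataFrame({
    "field":       ["chem"] * 6,
    "year":        [2015] * 6,
    "collab_type": ["domestic_single", "domestic_single", "domestic_multi",
                    "international_bilateral", "international_bilateral",
                    "international_multilateral"],
    "cites":       [4, 6, 7, 20, 12, 25],
})

# expected citations per (field, year, collaboration type) reference set
baseline = articles.groupby(["field", "year", "collab_type"])["cites"].transform("mean")
articles["collab_cnci"] = articles["cites"] / baseline
print(articles)
```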

18.
In citation network analysis, complex behavior is reduced to a simple edge: node A cites node B. The implicit assumption is that A is giving credit to, or acknowledging, B. The contributions of all citations are also treated equally, even though some citations appear multiple times in a text and others appear only once. In this study, we apply text-mining algorithms to a relatively large dataset (866 information science articles containing 32,496 bibliographic references) to demonstrate the differential contributions made by references. We (1) examine the placement of citations across the different sections of a journal article, and (2) identify highly cited works using two different counting methods (CountOne and CountX). We find that (1) the most highly cited works appear in the Introduction and Literature Review sections of citing papers, and (2) the citation rankings produced by CountOne and CountX differ. That is to say, counting the number of times a bibliographic reference is cited in a paper, rather than treating all references the same no matter how many times they are invoked in the citing article, reveals the differential contributions made by the cited works to the citing paper.
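The difference between the two counting methods can be shown on a toy citing text. In this sketch, in-text citations are assumed to be bracketed reference numbers: CountX credits a reference once per mention, CountOne once per citing paper. The regular expression and citation format are illustrative assumptions, not the text-mining pipeline used in the study.

```python
import re
from collections import Counter

citing_text = ("Prior work [1] introduced the approach. [2] refined it, and [1] "
               "was extended in the Discussion, where [1] and [3] are compared.")

mentions = re.findall(r"\[(\d+)\]", citing_text)
count_x = Counter(mentions)         # one credit per in-text mention
count_one = Counter(set(mentions))  # one credit per cited work, however often mentioned

print(count_x)    # reference 1 is mentioned three times, 2 and 3 once each
print(count_one)  # every cited reference is counted once
```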

19.
20.
This paper presents an empirical analysis of two different methodologies for calculating national citation indicators: whole counts and fractionalised counts. The aim of our study is to investigate the effect on relative citation indicators when citations to documents are fractionalised among the authoring countries. We performed two analyses: a time-series analysis of one country and a cross-sectional analysis of 23 countries. The results show that all countries' relative citation indicators are lower when fractionalised counting is used. Further, the difference between whole and fractionalised counts is generally greatest for the countries with the highest proportion of internationally co-authored articles. In our view there are strong arguments in favour of using fractionalised counts to calculate relative citation indexes at the national level, rather than whole counts, which is the most common practice today.
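The contrast between the two methodologies reduces to a few lines of bookkeeping: under whole counting every authoring country receives the article's full citation count, while under fractionalised counting the count is divided equally among the authoring countries. The toy data and column names below are illustrative only.

```python
import pandas as pd

articles = pd.DataFrame({
    "article":   ["a1", "a2", "a3"],
    "countries": [["NO"], ["NO", "SE"], ["NO", "SE", "DK"]],
    "cites":     [10, 6, 9],
})

per_country = articles.explode("countries")
n_countries = per_country.groupby("article")["countries"].transform("count")
per_country["whole"] = per_country["cites"]                     # full credit to every country
per_country["fractional"] = per_country["cites"] / n_countries  # credit split equally

print(per_country.groupby("countries")[["whole", "fractional"]].sum())
```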
