Similar Documents
 20 similar documents retrieved (search time: 125 ms)
1.
This paper analyzes several well-known bibliometric indices using an axiomatic approach. We concentrate on indices aiming to capture the global impact of a scientific output and do not investigate indices aiming to capture an average impact. Hence, the indices that we study are designed to evaluate authors or groups of authors, not journals. The bibliometric indices studied include classic ones, such as the number of highly cited papers, as well as more recent ones, such as the h-index and the g-index. We give conditions that characterize these indices, up to multiplication by a positive constant. We also study the bibliometric rankings induced by these indices, thereby providing a general framework for the comparison of bibliometric rankings and indices.
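As a hedged illustration (my own sketch, not from the paper), the two recent indices mentioned above can be computed from a citation record as follows:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return max((i + 1 for i, c in enumerate(cites) if c >= i + 1), default=0)

def g_index(citations):
    """Largest g such that the top g papers together have at least g**2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites):
        total += c
        if total >= (i + 1) ** 2:
            g = i + 1
    return g

record = [10, 8, 5, 4, 3]
print(h_index(record))  # 4
print(g_index(record))  # 5: the g-index gives more weight to a few highly cited papers
```

Both functions are invariant under reordering of the record, the kind of property an axiomatic characterization makes explicit.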

2.
This paper shows how bibliometric models can be used to assist peers in selecting candidates for academic openings. Several studies have demonstrated that a relationship exists between results from peer-review evaluations and results obtained with certain bibliometric indicators. However, very little has been done to analyse the predictive power of models based on bibliometric indicators. Indicators with high predictive power would be good instruments to support peer evaluations. The goal of this study is to assess the predictive power of a model based on bibliometric indicators for the results of academic openings at the level of Associado and Catedrático at Portuguese universities. Our results suggest that the model can predict the results of peer review at this level with a reasonable degree of accuracy. This predictive power improves when peers assess scientific performance alone.

3.
Based on the Chinese Academic Journal Comprehensive Citation Report (CAJCCR), this study takes 21 core Chinese forestry journals as its subject and uses bibliometric methods to compare and analyze the quantitative indicators of their academic influence, in order to understand the disciplinary standing, academic level, and journal quality reflected in their published articles. Tracking indicators such as total citation counts, impact factors, and h-indices, the study offers an objective and comprehensive evaluation of the overall state of these 21 core forestry journals.

4.
In this paper we attempt to assess the impact of journals in the field of forestry, in terms of bibliometric data, by providing an evaluation of forestry journals based on data envelopment analysis (DEA). In addition, based on the results of the conducted analysis, we provide suggestions for improving the impact of the journals in terms of widely accepted measures of journal citation impact, such as the journal impact factor (IF) and the journal h-index. More specifically, by modifying certain inputs associated with the productivity of forestry journals, we have illustrated how this method could be utilized to raise their efficiency, which in terms of research impact can then be translated into an increase of their bibliometric indices, such as the h-index, IF or eigenfactor score.

5.
In this paper we deal with the problem of aggregating numeric sequences of arbitrary length that represent, e.g., the citation records of scientists. Impact functions are aggregation operators that express as a single number not only the quality of individual publications but also their author's productivity. We examine some fundamental properties of these aggregation tools. It turns out that each impact function that always gives indisputable valuations must necessarily be trivial. Moreover, it is shown that for any set of citation records in which none is dominated by another, we may construct an impact function that yields any a priori established ordering of the authors. Theoretically, then, there is considerable room for manipulation in the hands of decision makers. We also discuss the differences between the impact-function-based and the multicriteria-decision-making-based approaches to scientific quality management, and study how introducing new properties of impact functions affects the assessment process. We argue that simple mathematical tools like the h- or g-index (as well as other bibliometric impact indices) may not necessarily be a good choice for assessing scientific achievements.
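The manipulation result can be made concrete with a small sketch (my own illustration, not the authors' construction): two citation records where neither dominates the other, and two reasonable-looking impact functions that rank them in opposite orders:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return max((i + 1 for i, c in enumerate(cites) if c >= i + 1), default=0)

def max_citations(citations):
    """Another valid impact function: the citation count of the best paper."""
    return max(citations, default=0)

x = [5, 1]  # one strong paper, one weak
y = [3, 3]  # two medium papers; neither record dominates the other

print(h_index(x), h_index(y))              # 1 2  -> y ranked first
print(max_citations(x), max_citations(y))  # 5 3  -> x ranked first
```

A decision maker free to choose the impact function after seeing the records can thus justify either ordering.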

6.
Is more always better? We address this question in the context of bibliometric indices that aim to assess the scientific impact of individual researchers by counting their number of highly cited publications. We propose a simple model in which the number of citations of a publication depends not only on the scientific impact of the publication but also on other ‘random’ factors. Our model indicates that more need not always be better. It turns out that the most influential researchers may have a systematically lower performance, in terms of highly cited publications, than some of their less influential colleagues. The model also suggests an improved way of counting highly cited publications.
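A minimal simulation sketch of the paper's premise (the parameter values and the Gaussian noise form are my own assumptions, not the authors' model): each paper's citation count mixes the author's underlying impact with a random factor, so counting papers above a citation threshold becomes noisy:

```python
import random

def highly_cited_count(impact, n_papers, threshold=100, noise_sd=80, rng=None):
    """Count papers whose citations (underlying impact plus noise) reach the threshold."""
    rng = rng or random.Random(0)
    count = 0
    for _ in range(n_papers):
        citations = max(0, int(impact + rng.gauss(0, noise_sd)))
        if citations >= threshold:
            count += 1
    return count

# A prolific author of lower-impact papers versus a higher-impact author with few papers:
rng = random.Random(42)
prolific = highly_cited_count(80, 60, rng=rng)   # many papers, impact below threshold
selective = highly_cited_count(120, 10, rng=rng)  # few papers, impact above threshold
```

With noise the comparison can go either way across random seeds; with `noise_sd=0` the count is deterministic and only impact matters.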

7.
指出文献计量作为一种有效的评价手段,在生物医药领域,主要应用于学术期刊评价和科研绩效评价;传统的文献计量评价方法存在一些固有局限性,为此人们已作出许多创新和改进。分析讨论评价学术期刊的新模型和指标--渐进曲线模型和特征因子以及评价科研绩效的两种方法创新--多指标综合分析和基于社会网络的分析,并论述文献计量与经济社会因素的结合使用。从这些新型方法和指标的出现和应用可以看出,文献计量评价的发展呈现出借助数学模型和计算机手段,由单指标向多指标转换,结合复杂的社会网络特征和经济社会因素进行分析的大趋势。  相似文献   

8.
Machine learning (ML) methods have recently been applied in diverse fields of study. ML methods provide new toolkits and opportunities for the social sciences, but they have also raised concerns over their black-box nature, irreproducibility, and emphasis on prediction rather than explanation. Against this backdrop, we study the bibliometric impact of leveraging ML methods in economics using publications indexed in Microsoft Academic Graph. We use our four-dimensional bibliometric framework, by which we gauge citation intensity, speed, breadth, and disruption, to compare two groups of economics publications (2001–2020): those using ML methods and those not. We find that economics papers applying ML methods started to have advantages in citation counts and speed after 2010. Our analysis also shows that they received attention from more diverse research communities and had more disruptive citations over the past two decades. We then demonstrate that economics papers using ML methods obtained more disruptive citations within economics than outside it. These findings suggest bibliometric advantages to applying ML methods in economics, especially in the recent decade, though we also discuss cautions and potential opportunities missed.

9.
Equalizing bias (EqB) is a systematic inaccuracy which arises when authorship credit is divided equally among coauthors who have not contributed equally. As the number of coauthors increases, the diminishing amount of credit allocated to each additional coauthor is increasingly composed of equalizing bias such that when the total number of coauthors exceeds 12, the credit score of most coauthors is composed mostly of EqB. In general, EqB reverses the byline hierarchy and skews bibliometric assessments by underestimating the contribution of primary authors, i.e. those adversely affected by negative EqB, and overestimating the contribution of secondary authors, those benefitting from positive EqB. The positive and negative effects of EqB are balanced and sum to zero, but are not symmetrical. The lack of symmetry exacerbates the relative effects of EqB, and explains why primary authors are increasingly outnumbered by secondary authors as the number of coauthors increases. Specifically, for a paper with 50 coauthors, the benefit of positive EqB goes to 39 secondary authors while the burden of negative EqB befalls 11 primary authors. Relative to harmonic estimates of their actual contribution, the EqB of the 50 coauthors ranged from <−90% to >350%. Senior authorship, when it occurs, is conventionally indicated by a corresponding last author and recognized as being on a par with a first author. If senior authorship is not recognized, then the credit lost by an unrecognized senior author is distributed among the other coauthors as part of their EqB. The powerful distortional effect of EqB is compounded in bibliometric indices and performance rankings derived from biased equal credit. Equalizing bias must therefore be corrected at the source by ensuring accurate accreditation of all coauthors prior to the calculation of aggregate publication metrics.
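The contrast between equal credit and a harmonic estimate of contribution can be sketched as follows (a simplified illustration; harmonic weighting by byline position is one common credit scheme, used here without senior-author corrections):

```python
def equal_credit(n):
    """Equal fractional credit: each of n coauthors gets 1/n."""
    return [1.0 / n] * n

def harmonic_credit(n):
    """Harmonic credit: the i-th author in the byline gets (1/i) / H_n."""
    weights = [1.0 / i for i in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def equalizing_bias(n):
    """EqB per author: equal credit minus the harmonic estimate (sums to zero)."""
    return [e - h for e, h in zip(equal_credit(n), harmonic_credit(n))]

bias = equalizing_bias(5)
# Early (primary) authors carry negative EqB; later (secondary) authors positive EqB.
```

The sign pattern reproduces the asymmetry described above: a few primary authors absorb the negative bias that many secondary authors collectively gain.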

10.
11.
Newly introduced bibliometric indices may be biased by the preference of scientists for bibliometric indices in which their own research receives a high score. To test this hypothesis, the publication and citation records of nine scientists who recently proposed new bibliometric indices were analyzed in terms of standard indicators, their own indicators, and indicators recently proposed by other scientists. The result of the test was negative; that is, newly introduced bibliometric indices did not favor their authors.

12.

Objective:

The objective of this study was to analyze bibliometric data from ISI, National Institutes of Health (NIH)–funding data, and faculty size information for Association of American Medical Colleges (AAMC) member schools during 1997 to 2007 to assess research productivity and impact.

Methods:

This study gathered and synthesized 10 metrics for almost all AAMC medical schools (n = 123): (1) total number of published articles per medical school, (2) total number of citations to published articles per medical school, (3) average number of citations per article, (4) institutional impact indices, (5) institutional percentages of articles with zero citations, (6) annual average number of faculty per medical school, (7) total amount of NIH funding per medical school, (8) average amount of NIH grant money awarded per faculty member, (9) average number of articles per faculty member, and (10) average number of citations per faculty member. Using principal components analysis, the author examined the relationships, if any, among these measures.

Results:

Principal components analysis revealed 3 major clusters of variables that accounted for 91% of the total variance: (1) institutional research productivity, (2) research influence or impact, and (3) individual faculty research productivity. Depending on the variables in each cluster, medical school research may be appropriately evaluated in a more nuanced way. Significant correlations exist between extracted factors, indicating an interrelatedness of all variables. Total NIH funding may relate more strongly to the quality of the research than the quantity of the research. The elimination of medical schools with outliers in 1 or more indicators (n = 20) altered the analysis considerably.

Conclusions:

Though popular, ordinal rankings cannot adequately describe the multidimensional nature of a medical school's research productivity and impact. This study provides statistics that can be used in conjunction with other sound methodologies to provide a more authentic view of a medical school's research. The large variance of the collected data suggests that refining bibliometric data by discipline, peer groups, or journal information may provide a more precise assessment.

Highlights

  • Principal components analysis discovered three clusters of variables: (1) institutional research productivity, (2) research influence or impact, and (3) individual faculty research productivity.
  • The associations among size-independent measures (e.g., average number of citations per article) were stronger than the associations between size-independent and size-dependent measures (e.g., number of faculty), except in the case of total National Institutes of Health (NIH) funding.
  • The factor coefficients, or loadings, for total NIH funding may be associated more with the quality of research than with its quantity.
  • The removal of twenty outliers, fourteen highly productive or influential medical schools and six medical schools with relatively low research profiles, changed the results of the analysis significantly.
  • This study's broad institutional bibliometric data sets cannot be extrapolated to specific departments at the studied medical schools.

Implications

  • Librarians, administrators, and faculty should use several methodologies in tandem with bibliometric data when evaluating institutions' research impact and productivity.
  • Health sciences librarians should not make use of university rankings materials lacking strong methodological foundations.
  • This study's bibliometric data may provide a starting point or point of comparison for future assessments.

13.
For the purposes of classification it is common to represent a document as a bag of words. Such a representation consists of the individual terms making up the document together with the number of times each term appears in the document. All classification methods make use of the terms. It is common to also make use of the local term frequencies, at the price of some added complication in the model. Examples are the naïve Bayes multinomial model (MM), the Dirichlet compound multinomial model (DCM) and the exponential-family approximation of the DCM (EDCM), as well as support vector machines (SVM). Although it is usually claimed that incorporating local word frequency in a document improves text classification performance, we here test whether such claims are true or not. In this paper we show experimentally that simplified forms of the MM, EDCM, and SVM models which ignore the frequency of each word in a document perform at about the same level as MM, DCM, EDCM and SVM models which incorporate local term frequency. We also present a new form of the naïve Bayes multivariate Bernoulli model (MBM) which is able to make use of local term frequency, and show again that it offers no significant advantage over the plain MBM. We conclude that word burstiness is so strong that additional occurrences of a word add essentially no useful information to a classifier.
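The frequency-versus-presence distinction tested above can be sketched minimally (my own illustration of the two feature representations, not the paper's classifiers):

```python
def term_frequencies(doc, vocab):
    """Bag-of-words vector: how often each vocabulary term occurs in the document."""
    tokens = doc.split()
    return [tokens.count(term) for term in vocab]

def presence(counts):
    """Drop local term frequency: keep only presence/absence of each term."""
    return [1 if c > 0 else 0 for c in counts]

vocab = ["citation", "index", "journal"]
tf = term_frequencies("citation citation index", vocab)  # [2, 1, 0]
binary = presence(tf)                                    # [1, 1, 0]
```

The paper's finding is that classifiers built on `binary`-style inputs perform at about the same level as those built on `tf`-style inputs.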

14.
Evaluative bibliometrics is concerned with comparing research units by using statistical procedures. According to Williams (2012) an empirical study should be concerned with the substantive and practical significance of the findings as well as the sign and statistical significance of effects. In this study we will explain what adjusted predictions and marginal effects are and how useful they are for institutional evaluative bibliometrics. As an illustration, we will calculate a regression model using publications (and citation data) produced by four universities in German-speaking countries from 1980 to 2010. We will show how these predictions and effects can be estimated and plotted, and how this makes it far easier to get a practical feel for the substantive meaning of results in evaluative bibliometric studies. An added benefit of this approach is that it makes it far easier to explain results obtained via sophisticated statistical techniques to a broader and sometimes non-technical audience. We will focus particularly on Average Adjusted Predictions (AAPs), Average Marginal Effects (AMEs), Adjusted Predictions at Representative Values (APRVs) and Marginal Effects at Representative Values (MERVs).

15.
This paper introduces the concepts of citable documents and non-citable documents (NCDs). A statistical analysis of the citation characteristics of NCDs confirms that NCDs can in fact be cited, and may even be cited very heavily. A bibliometric analysis then explores how much NCDs contribute to journal impact factors.

16.
Journal of Informetrics, 2019, 13(2): 515–539
Counts of papers, counts of citations, and the h-index are the simplest bibliometric indices of the impact of research. We discuss some improvements. First, we replace citations with individual citations, fractionally shared among co-authors, to account for the widely different average numbers of co-authors and of references across papers and fields. Next, we improve on citation counting by applying the PageRank algorithm to citations among papers. Because citations are time-ordered, this reduces to a weighted counting of citation descendants that we call PaperRank. We compute a related AuthorRank by applying the PageRank algorithm to citations among authors. These metrics quantify the impact of an author or paper taking into account the impact of those who cite it. Finally, we show how self- and circular citations can be eliminated by defining a closed market of Citation-coins. We apply these metrics to the InSpire database, which covers fundamental physics, presenting results for papers, authors, journals, institutes, towns, and countries, both all-time and for recent time periods.
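As a rough sketch of the citation-ranking idea (this is textbook PageRank by power iteration, not the authors' exact time-ordered PaperRank variant):

```python
def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank; links[i] lists the papers that paper i cites."""
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for i, outs in enumerate(links):
            if outs:
                share = damping * rank[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:  # paper with no references: spread its weight evenly
                for j in range(n):
                    new[j] += damping * rank[i] / n
        rank = new
    return rank

# Papers 0 and 1 both cite paper 2, which cites nothing:
ranks = pagerank([[2], [2], []])
# paper 2, cited by both others, ends up with the highest rank
```

Weighting a citation by the rank of the citing paper is what lets such metrics credit impactful citers more than a flat citation count does.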

17.
肖宏  伍军红  孙隽 《编辑学报》2017,29(4):340-344
In the quantitative evaluation of academic journals, the impact factor and total citation count are the two most important indicators and carry high weight. However, a journal's total citation count is also affected by how long it has been published, how many papers it publishes, its publication frequency, and the size of its field. In particular, journals that publish large numbers of low-quality papers can still accumulate high total citation counts on sheer volume, while their impact factors, and the quality of their papers, remain low. How can such high-volume, low-impact journals be identified objectively? This paper proposes a new indicator of the quantity-impact relationship of journals: the journal mass index (JMI). "Quantity" refers to the journal's article output, and "impact" is captured by its impact factor. JMI is defined as the ratio of a journal's impact factor to the article count underlying that impact factor; it represents the average contribution of each article to the journal's impact factor. JMI objectively reflects the "bloat" of high-volume, low-impact journals within a discipline. In the Annual Report for Chinese Academic Journal Impact Factors (2016 edition), JMI was applied to adjust the journal impact index (CI) ranking so that it more accurately reflects journals' influence within their disciplines. Practice has shown JMI to be a useful metric for objectively judging the quantity-impact relationship of academic journals.
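Under the definition above, the JMI is simply the impact factor divided by the article count behind it; a brief sketch (the example numbers are hypothetical):

```python
def journal_mass_index(impact_factor, n_articles):
    """JMI: average contribution of one article to the journal's impact factor."""
    return impact_factor / n_articles

# Two journals with the same impact factor but very different output volumes:
lean = journal_mass_index(2.0, 100)      # 0.02 per article
bloated = journal_mass_index(2.0, 1000)  # 0.002 per article: the lower JMI flags "bloat"
```

A journal that sustains its impact factor on a tenth of the output gets a tenfold higher JMI, which is exactly the quantity-impact contrast the indicator is meant to expose.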

18.
Hirsch [Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572] has proposed the h index as a single-number criterion to evaluate the scientific output of a researcher. We investigated the convergent validity of decisions for awarding long-term fellowships to post-doctoral researchers, as practiced by the Boehringer Ingelheim Fonds (B.I.F.), by using the h index. Our study examined 414 B.I.F. applicants (64 approved and 350 rejected) with a total of 1586 papers. The results show that the applicants' h indices correlate substantially with standard bibliometric indicators. Even though the h indices of approved B.I.F. applicants are on average (arithmetic mean and median) higher than those of rejected applicants, thereby broadly confirming the validity of the funding decisions, the distributions of the h indices partly overlap, and we categorized the overlapping cases as type I errors (falsely drawn approval) or type II errors (falsely drawn rejection). Approximately one-third of the decisions to award a fellowship show a type I error, and about one-third of the decisions not to award a fellowship show a type II error. Our analyses of possible reasons for these errors show that the applicant's field of study, but not personal ties between the B.I.F. applicant and the B.I.F., can increase or decrease the risks of type I and type II errors.
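A minimal sketch of the error taxonomy used above (my own illustration; the study compares h-index distributions rather than applying a single fixed cutoff, so the `cutoff` parameter here is an assumption):

```python
def classify_decision(approved, h, cutoff):
    """Label a fellowship decision against an assumed h-index cutoff."""
    if approved and h < cutoff:
        return "type I"    # falsely drawn approval
    if not approved and h >= cutoff:
        return "type II"   # falsely drawn rejection
    return "consistent"

decisions = [(True, 8), (True, 2), (False, 1), (False, 9)]
labels = [classify_decision(a, h, cutoff=5) for a, h in decisions]
# ['consistent', 'type I', 'consistent', 'type II']
```

The overlap of the two distributions is what makes both error types unavoidable for any single cutoff.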

19.
In this work we address the comprehensive Scimago Institutions Ranking 2012, proposing a data visualization of the listed bibliometric indicators for the 509 Higher Education Institutions among the 600 largest research institutions ranked by output. We focus on research impact, internationalization, and leadership indicators, which have become important benchmarks in a worldwide discussion about research quality and impact policies for universities. Our data visualization reveals a qualitative difference between Northern American and Western European Higher Education Institutions in their levels of international collaboration. Chinese universities still show systematically low international collaboration, which is positively linked to their low research impact. The data suggest that research impact relates directly to internationalization only at rather low values of both indicators; above the world average, other determinants may become more relevant in fostering further impact. The leadership indicator provides further insight into the collaborative environment of universities in different geographical regions, as well as the optimized collaboration portfolio for enhancing research impact.

20.
Coauthorship is increasing across all areas of scholarship. Despite this trend, dissertations as sole-authored monographs are still revered as the cornerstone of doctoral education. As students learn the norms and communicative behaviors of their field during their doctoral education, do they also learn collaborative behaviors? This study investigated this issue through triangulation of 30 interviews, 215 questionnaires, and bibliometric analyses of 97 CVs in the field of library and information science (LIS). The findings demonstrate that collaboration occurs in about half of advisee/advisor relationships and is primarily understood as research dissemination outside the dissertation. Respondents reported that the dissertation was not and should not be considered a collaborative product. The discussion also includes a commentary about grant funding and the implications for this on models of academic scholarship and research production.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号