Similar Literature
A total of 20 similar documents were found.
1.
In this paper, we propose a novel framework for a scholarly journal – a token‐curated registry (TCR). This model originates in the field of blockchain and cryptoeconomics and is essentially a decentralized system in which tokens (digital currency) are used to incentivize quality curation of information. A TCR is an automated way to create lists of any kind, where decisions on whether or not to include an entry are made through voting that brings benefit or loss to the voters. In an academic journal, a TCR could act as a tool to introduce community‐driven decisions on which papers to publish, thus encouraging more active participation of authors and reviewers in editorial policy and elaborating the idea of a journal as a club. A TCR could also provide a novel solution to the problems of editorial bias and the lack of rewards and incentives for reviewers. In the paper, we discuss the core principles of TCR, its technological and cultural foundations, and finally analyse the risks and challenges it could bring to scholarly publishing.
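As a rough illustration of the curation mechanism described above, the sketch below tallies a token-weighted vote on whether to list a submission and redistributes a share of the losing side's stake to the winning side. The function name, slashing rate, and payout rule are hypothetical, not the specific protocol proposed in the paper.

```python
# Illustrative sketch of a token-curated registry (TCR) vote, assuming a simple
# token-weighted majority rule; parameters and payout logic are hypothetical.

def tally_submission(votes, slash_rate=0.1):
    """votes: list of (voter, staked_tokens, accept: bool).
    Returns (accepted, payouts): losers forfeit slash_rate of their stake,
    which is redistributed pro rata to the winning side."""
    accept_stake = sum(s for _, s, a in votes if a)
    reject_stake = sum(s for _, s, a in votes if not a)
    accepted = accept_stake > reject_stake
    winners = [(v, s) for v, s, a in votes if a == accepted]
    losers = [(v, s) for v, s, a in votes if a != accepted]
    pool = sum(s for _, s in losers) * slash_rate        # forfeited tokens
    winner_stake = sum(s for _, s in winners) or 1.0
    payouts = {v: -s * slash_rate for v, s in losers}
    payouts.update({v: pool * s / winner_stake for v, s in winners})
    return accepted, payouts

# Example: three reviewers stake tokens on whether to list a submitted paper.
print(tally_submission([("alice", 50, True), ("bob", 30, True), ("carol", 40, False)]))
```

Staking makes careless or self-interested votes costly, which is the incentive the TCR model relies on for quality curation.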

2.
The journal impact factor is widely used as a performance indicator for single authors (despite its unsuitability in this respect). Hence, authors are increasingly exercised if there is any sign that impact factors are being manipulated. Editors who ask authors to cite relevant papers from their own journal are accused of acting unethically. This is surprising because, besides publishers, authors are the primary beneficiaries of an increased impact factor of the journal in which they publish, and because the citation process is biased anyway. There is growing evidence that quality and relevance are not always the reasons for choosing references; authors' biases and personal environments, as well as strategic considerations, are major factors. As long as an editor does not force authors to cite irrelevant papers from their own journal, I consider it a matter of caring for the journal and its authors if an editor brings recent papers to the authors' attention. It would be unfair to authors and disloyal to the publisher if an editor did not try to increase the impact of his or her own journal.

3.
4.
Medical publishing uses the skills of people from a wide range of backgrounds. In this study we set out to examine their attitudes and assess the degree of homogeneity. We gathered questionnaire responses from selected editors and medical reviewers and found that, on the whole, there was a homogeneous culture, though there were some significant differences. This has important implications for managers and trainers.

5.
The digitization of journal content and its availability online have revolutionized journal publishing in recent years, resulting in both opportunities and challenges for traditional journal publishers. The explosion of data and the emergence of new players such as Google, new business models such as Open Access, and new content consumers and producers, for example China, are significantly changing the face of journal publishing. It is not yet clear what the impact of these changes will be, but by continuing to collaborate with our existing stakeholders and building partnerships with these newcomers, as well as by maintaining and promoting the quality of our content, we can ensure our future growth and success.

6.
McChesney, R. W. (1993). Telecommunications, mass media, and democracy: The battle for the control of U.S. broadcasting, 1928‐1935. New York: Oxford University Press. 393 pages.

7.
Research articles seem to have direct value for students in some subject areas, even though scholars may be their target audience. If this can be shown to be true, then subject areas with this type of educational impact could justify claims for enhanced funding. To seek evidence of disciplinary differences in the direct educational uptake of journal articles – excluding books, conference papers, and other scholarly outputs – this paper assesses the total number and proportion of student readers of academic articles in Mendeley across 12 different subjects. The results suggest that whilst few students read mathematics research articles, in other areas the number of student readers is broadly proportional to the number of research readers. Although the average number of undergraduate readers per article varies by up to 50 times between subjects, this could be explained by differing levels of uptake of Mendeley rather than by differing educational value of disciplinary research. Overall, then, the results do not support the claim that journal articles in some areas have substantially more educational value, relative to their research value, than is average for academia.

8.
9.
10.
The history of the establishment and development of the RAS VINITI abstract journal (AJ) in mathematics from 1953 to 2007 is considered. The various classification systems used in different periods of the AJ's production are analyzed, as are the subject indexes to the AJ. The structure of mathematical AJ problems is described. A statistical analysis of the cumulative flow of publications reflected in the cumulated volumes and special issues of the AJ "Mathematics" from 1953 to 2007 is performed.

11.
The publication indicator of the Finnish research funding system is based on a manual ranking of scholarly publication channels. These ranks, which represent the evaluated quality of the channels, are continuously kept up to date and thoroughly re-evaluated every four years by groups of nominated scholars belonging to different disciplinary panels. This expert-based decision-making process is informed by available citation-based metrics and other relevant metadata characterizing the publication channels. The purpose of this paper is to introduce various approaches that can explain the basis and evolution of the quality of publication channels, i.e., their ranks. This is important for the academic community, whose research work is being governed using the system. Data-based models that explain, with sufficient accuracy, the level of or changes in ranks provide assistance to the panels in their multi-objective decision making, thus suggesting and supporting the need for more cost-effective, automated ranking mechanisms. The analysis relies on novel advances in machine learning systems for classification and predictive analysis, with special emphasis on local and global feature-importance techniques.
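A minimal sketch of the kind of model discussed: predicting a channel's rank from citation metrics and metadata, then inspecting global feature importance. The data file and feature names below are hypothetical placeholders, and the random forest with permutation importance is one plausible choice rather than the paper's exact method.

```python
# Sketch: classify publication channels into rank levels and inspect which
# metadata features drive the predictions (global feature importance).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("channels.csv")                     # hypothetical data set
features = ["sjr", "snip", "citescore", "n_papers"]  # assumed metric columns
X, y = df[features], df["rank"]                      # rank: e.g. quality level 0-3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# Global importance: how much shuffling each feature degrades accuracy.
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```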

12.
Arguably the most striking recent departure in online publishing has been the succession of initiatives designed to provide free or reduced-rate journal access to the developing world. This article examines the motives behind some of these campaigns and probes the difficulties associated with supplying scientific information equitably, productively, and to an appropriate readership in developing or transitional countries. It considers the strengths and weaknesses of the main solutions currently on offer, while advocating a more unified approach based on the three C's of co-ordination, comprehensiveness, and clarity.

13.
14.
15.
It is well known that the distribution of citations to articles in a journal is skewed. We ask whether journal rankings based on the impact factor are robust with respect to this fact. For 100 economics journals, we exclude the most cited paper, and then the top 5 and top 10 most cited papers, and recalculate the impact factor. Afterwards, we compare the resulting rankings with the original ones from 2012. Our results show that the rankings are relatively robust. This holds for both the 2-year and the 5-year impact factor.
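A minimal sketch of this robustness check: recompute each journal's impact factor after dropping its k most cited papers and compare the new ranking with the original one. The input format and journal data are invented, and keeping the denominator unchanged is one plausible reading of the procedure, not necessarily the paper's exact calculation.

```python
# Sketch: impact factor excluding the top-k cited papers, plus a rank comparison.
from scipy.stats import spearmanr

def impact_factor(cites, drop_top=0):
    kept = sorted(cites, reverse=True)[drop_top:]
    return sum(kept) / len(cites)  # denominator kept fixed here (an assumption)

def ranking(citations, drop_top=0):
    jifs = {j: impact_factor(c, drop_top) for j, c in citations.items()}
    return sorted(jifs, key=jifs.get, reverse=True)

# Toy citation counts per journal over the two-year citation window.
citations = {"J1": [120, 8, 5, 3, 0], "J2": [15, 14, 12, 10, 9], "J3": [30, 2, 1, 0, 0]}
orig = ranking(citations)
for k in (1, 2, 3):
    adjusted = ranking(citations, drop_top=k)
    rho, _ = spearmanr([orig.index(j) for j in citations],
                       [adjusted.index(j) for j in citations])
    print(f"drop top {k}: Spearman rho vs original ranking = {rho:.2f}")
```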

16.
This study assesses whether eleven factors associate with higher-impact research: individual, institutional, and international collaboration; journal and reference impacts; abstract readability; reference and keyword totals; and paper, abstract, and title lengths. Authors may have some control over these factors, and hence this information may help them to conduct and publish higher-impact research. These factors have been researched previously, but with partially conflicting findings. A simultaneous assessment of these eleven factors for Biology and Biochemistry, Chemistry, and the Social Sciences used a single negative binomial-logit hurdle model estimating the percentage change in mean citation counts per unit of increase or decrease in the predictor variables. The journal Impact Factor was found to associate significantly with increased citations in all three areas. The number of cited references and their average citation impact also associate significantly with higher article citation impact. Individual and international teamwork give a citation advantage in Biology and Biochemistry and in Chemistry, but inter-institutional teamwork is not important in any of the three subject areas. Abstract readability is either not significant or of no practical significance. Among the article size features, abstract length associates significantly with increased citations, but the number of keywords, title length, and paper length are insignificant or of no practical significance. In summary, at least some aspects of collaboration and of journal and document properties significantly associate with higher citations. The results provide new and particularly strong statistical evidence that authors should consider publishing in high-impact journals, ensure that they do not omit relevant references, engage in the widest possible team working when appropriate, and write extensive abstracts. A new finding is that whilst it seems to be useful to collaborate, and to collaborate internationally, there seems to be no particular need to collaborate with other institutions within the same country.
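A rough sketch of the two-part modelling idea: a logit model for whether a paper is cited at all, and a count model for how many citations cited papers receive. The column names and data file are hypothetical, and for simplicity the positive part below uses an ordinary negative binomial rather than the zero-truncated count component a full hurdle model requires, so it is an approximation of the paper's specification.

```python
# Sketch of a negative binomial-logit "hurdle"-style analysis of citation counts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("articles.csv")   # hypothetical: one row per article
predictors = "jif + n_refs + n_authors + abstract_len + intl_collab"

# Part 1: is the paper cited at all? (the hurdle)
df["cited"] = (df["citations"] > 0).astype(int)
hurdle = smf.logit(f"cited ~ {predictors}", data=df).fit()

# Part 2: citation counts among cited papers (log link; untruncated approximation).
counts = smf.negativebinomial(f"citations ~ {predictors}",
                              data=df[df["citations"] > 0]).fit()

# exp(beta) - 1 = estimated % change in mean citations per unit increase.
print((np.exp(counts.params) - 1).round(3))
```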

17.
The journal impact factor (JIF) is the average number of citations of the papers published in a journal, calculated according to a specific formula; it is extensively used for the evaluation of research and researchers. This use assumes that all papers in a journal have the same scientific merit, which is measured by the JIF of the publishing journal. The number of citations is thus taken to measure scientific merit, yet the JIF does not evaluate each individual paper by its own number of citations. Therefore, in the comparative evaluation of two papers, the use of the JIF carries a risk of failure, which occurs whenever a paper in the journal with the lower JIF is compared with a paper that has fewer citations but appears in the journal with the higher JIF. To quantify this risk, this study calculates the failure probabilities, taking advantage of the lognormal distribution of citations. For two journals whose JIFs differ ten-fold, the failure probability is low. However, in most cases when two papers are compared, the JIFs of the journals are not so different; the failure probability can then be close to 0.5, which is equivalent to evaluating by coin flipping.
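If citations in each journal are lognormally distributed, the probability that a randomly drawn paper from the lower-JIF journal outscores one from the higher-JIF journal has a simple closed form, since the difference of the log-citations is again normal. The sketch below uses that form; the parameter values are illustrative, not the paper's data.

```python
# Sketch: failure probability under lognormal citation distributions.
from math import sqrt, log
from scipy.stats import norm

def failure_probability(mu_low, mu_high, sigma_low=1.0, sigma_high=1.0):
    """P(citations of a low-JIF paper > citations of a high-JIF paper),
    where mu/sigma are the mean and sd of log-citations in each journal."""
    return norm.cdf((mu_low - mu_high) / sqrt(sigma_low**2 + sigma_high**2))

# Journals whose typical citation rates differ ten-fold vs. only slightly:
print(failure_probability(mu_low=log(2), mu_high=log(20)))  # ~0.05: rarely fails
print(failure_probability(mu_low=log(4), mu_high=log(5)))   # ~0.44: near coin flipping
```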

18.
Prior research has suggested that providing free and discounted access to the scientific literature to researchers in low‐income countries increases article production and citation. Using traditional bibliometric indicators for institutions in sub‐Saharan Africa, we analyze whether institutional access to TEEAL (a digital collection of journal articles in agriculture and allied subjects) increases: (i) article production; (ii) reference length; and (iii) number of citations to journals included in the TEEAL collection. Our analysis is based on nearly 20,000 articles – containing half a million references – published between 1988 and 2009 at 70 institutions in 11 African countries. We report that access to TEEAL does not appear to result in higher article production, although it does lead to longer reference lists (an additional 2.6 references per paper) and a greater frequency of citations to TEEAL journals (an additional 0.4 references per paper), compared to non‐subscribing institutions. We discuss how traditional bibliometric indicators may not provide a full picture of the effectiveness of free and discounted literature programs.

19.
20.
The Center for Science and Technology Studies at Leiden University advocates the use of specific normalizations for assessing research performance with reference to a world average. The Journal Citation Score (JCS) and Field Citation Score (FCS) are averaged over the publications of the research group or individual researcher under study, and these averages are then used as denominators of the (mean) Citations per publication (CPP). Thus, this normalization is based on dividing two averages. Such a procedure generates a legitimate indicator only when the underlying distributions are normal. Given the skewed distributions under study, one should instead first divide the observed by the expected values for each publication and then average these ratios. We show the effects of the Leiden normalization for a recent evaluation in which we happened to have access to the underlying data.
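A toy illustration of the point at issue: dividing two averages (the CPP over mean-JCS/FCS construction) versus averaging the per-publication ratios of observed to expected citations. The numbers below are invented purely to show that the two indicators can diverge sharply when citation distributions are skewed.

```python
# Sketch: ratio of means vs. mean of ratios for a small, skewed publication set.
citations = [0, 1, 2, 50]         # observed citations of four papers
expected  = [2.0, 2.0, 4.0, 8.0]  # expected (journal/field) citation rates

ratio_of_means = (sum(citations) / len(citations)) / (sum(expected) / len(expected))
mean_of_ratios = sum(c / e for c, e in zip(citations, expected)) / len(citations)

print(f"ratio of means = {ratio_of_means:.2f}")  # 13.25 / 4.00 = 3.31
print(f"mean of ratios = {mean_of_ratios:.2f}")  # (0 + 0.5 + 0.5 + 6.25) / 4 = 1.81
```

The single highly cited paper dominates the ratio of means, whereas averaging the per-paper ratios first weights every publication equally, which is the correction the abstract argues for.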
