Full-text access type
Paid full text | 12,541 articles |
Free | 160 articles |
Domestic free access | 19 articles |
Subject classification
Education | 8,681 articles |
Scientific research | 1,229 articles |
National cultures | 142 articles |
Sports | 1,398 articles |
General | 7 articles |
Cultural theory | 175 articles |
Information dissemination | 1,088 articles |
Publication year
2022 | 96 articles |
2021 | 160 articles |
2020 | 275 articles |
2019 | 427 articles |
2018 | 543 articles |
2017 | 554 articles |
2016 | 473 articles |
2015 | 316 articles |
2014 | 376 articles |
2013 | 2,505 articles |
2012 | 329 articles |
2011 | 294 articles |
2010 | 250 articles |
2009 | 216 articles |
2008 | 227 articles |
2007 | 249 articles |
2006 | 208 articles |
2005 | 187 articles |
2004 | 173 articles |
2003 | 194 articles |
2002 | 166 articles |
2001 | 179 articles |
2000 | 173 articles |
1999 | 175 articles |
1998 | 94 articles |
1997 | 86 articles |
1996 | 130 articles |
1995 | 90 articles |
1994 | 103 articles |
1993 | 100 articles |
1992 | 166 articles |
1991 | 157 articles |
1990 | 153 articles |
1989 | 133 articles |
1988 | 110 articles |
1987 | 114 articles |
1986 | 136 articles |
1985 | 130 articles |
1984 | 106 articles |
1983 | 128 articles |
1982 | 105 articles |
1981 | 101 articles |
1980 | 84 articles |
1979 | 159 articles |
1978 | 110 articles |
1977 | 101 articles |
1976 | 102 articles |
1975 | 73 articles |
1974 | 87 articles |
1972 | 68 articles |
Sort order: 10,000 results found (search time: 15 ms)
71.
Debasis Ganguly Gareth J. F. Jones Aarón Ramírez-de-la-Cruz Gabriela Ramírez-de-la-Rosa Esaú Villatoro-Tello 《Information Retrieval》2018,21(1):1-23
Automatic detection of source code plagiarism is an important research field for both the commercial software industry and within the research community. Existing methods of plagiarism detection primarily involve exhaustive pairwise document comparison, which does not scale well for large software collections. To achieve scalability, we approach the problem from an information retrieval (IR) perspective. We retrieve a ranked list of candidate documents in response to a pseudo-query representation constructed from each source code document in the collection. The challenge in source code document retrieval is that the standard bag-of-words (BoW) representation model for such documents is likely to result in many false positives being retrieved, because of the use of identical programming language specific constructs and keywords. To address this problem, we make use of an abstract syntax tree (AST) representation of the source code documents. While the IR approach is efficient, it is essentially unsupervised in nature. To further improve its effectiveness, we apply a supervised classifier (pre-trained with features extracted from sample plagiarized source code pairs) on the top ranked retrieved documents. We report experiments on the SOCO-2014 dataset comprising 12K Java source files with almost 1M lines of code. Our experiments confirm that the AST based approach produces significantly better retrieval effectiveness than a standard BoW representation, i.e., the AST based approach is able to identify a higher number of plagiarized source code documents at top ranks in response to a query source code document. The supervised classifier, trained on features extracted from sample plagiarized source code pairs, is shown to effectively filter and thus further improve the ranked list of retrieved candidate plagiarized documents.
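The intuition behind the paper's AST-based representation can be illustrated with a minimal Python sketch. This is not the authors' pipeline (they parse Java, not Python, and build retrieval indexes over the AST features); it only shows why counting AST node types resists identifier renaming where a surface bag-of-words does not. Python's standard `ast` module stands in for a Java parser purely for illustration:

```python
import ast
import re
from collections import Counter

def bow_tokens(source: str) -> Counter:
    """Naive bag-of-words: surface identifiers and keywords."""
    return Counter(re.findall(r"[A-Za-z_]\w*", source))

def ast_tokens(source: str) -> Counter:
    """Structural representation: count AST node types instead of tokens."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

a = "def add(x, y):\n    return x + y\n"
b = "def plus(first, second):\n    return first + second\n"  # identifiers renamed

# Renaming changes the surface tokens but leaves the tree shape intact.
print(bow_tokens(a) == bow_tokens(b))  # False
print(ast_tokens(a) == ast_tokens(b))  # True
```

A plagiarized copy with renamed variables would therefore still rank highly under the AST representation, while inflating false negatives under plain BoW.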
72.
73.
Alberto González 《Communication Studies》2018,69(4):362-365
This reflection essay describes the Central States Region as an area rich in intercultural communication. González describes Mexican American migrant farmworker organizing as an intercultural activity since the union activists attempted to influence both Mexican heritage and European heritage audiences. González also describes the many interculturalists working in the Midwest who influenced his early research.
74.
Daniel Maier A. Waldherr P. Miltner G. Wiedemann A. Niekler A. Keinert 《Communication methods and measures》2018,12(2-3):93-118
Latent Dirichlet allocation (LDA) topic models are increasingly being used in communication research. Yet, questions regarding reliability and validity of the approach have received little attention thus far. In applying LDA to textual data, researchers need to tackle at least four major challenges that affect these criteria: (a) appropriate pre-processing of the text collection; (b) adequate selection of model parameters, including the number of topics to be generated; (c) evaluation of the model’s reliability; and (d) the process of validly interpreting the resulting topics. We review the research literature dealing with these questions and propose a methodology that approaches these challenges. Our overall goal is to make LDA topic modeling more accessible to communication researchers and to ensure compliance with disciplinary standards. Consequently, we develop a brief hands-on user guide for applying LDA topic modeling. We demonstrate the value of our approach with empirical data from an ongoing research project.
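Challenge (a), pre-processing, can be sketched in a few lines of standard-library Python. The stopword list, tokenization rule, and `min_df` threshold below are illustrative choices, not the authors' recommendations; the point is that each such choice reshapes the document-term input and hence the topics LDA can recover:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "that"}

def preprocess(docs, min_df=2):
    """Tokenize, lowercase, drop stopwords, and prune terms appearing in
    fewer than min_df documents — decisions that affect LDA reliability."""
    tokenized = [
        [t for t in re.findall(r"[a-z]+", d.lower()) if t not in STOPWORDS]
        for d in docs
    ]
    # Document frequency: in how many documents does each term occur?
    df = Counter(t for doc in tokenized for t in set(doc))
    vocab = {t for t, n in df.items() if n >= min_df}
    return [[t for t in doc if t in vocab] for doc in tokenized]

docs = [
    "The topics of the model",
    "A topic model needs topics",
    "Pre-processing affects the model",
]
print(preprocess(docs))
# [['topics', 'model'], ['model', 'topics'], ['model']]
```

Note how the singular "topic" is pruned while the plural "topics" survives: without a stemming or lemmatization step, morphological variants are treated as unrelated terms, which is exactly the kind of pre-processing decision the authors urge researchers to make and report explicitly.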
75.
Employing a number of different standalone programs is a prevalent approach among communication scholars who use computational methods to analyze media content. For instance, a researcher might use a specific program or a paid service to scrape some content from the Web, then use another program to process the resulting data, and finally conduct statistical analysis or produce some visualizations in yet another program. This makes it hard to build reproducible workflows, and even harder to build on the work of earlier studies. To improve this situation, we propose and discuss four criteria that a framework for automated content analysis should fulfill: scalability, free and open source, adaptability, and accessibility via multiple interfaces. We also describe how to put these considerations into practice, discuss their feasibility, and point toward future developments.
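The reproducibility problem the authors describe can be made concrete with a toy sketch: the three tools of their example (scraper, processor, analysis program) become three functions in one script, so the whole chain can be re-run and inspected end to end. The functions and data here are hypothetical stand-ins, not part of any framework the paper proposes:

```python
from collections import Counter

def retrieve():
    """Stand-in for the scraping step; a real workflow would fetch from the Web."""
    return ["fake news spreads fast", "news travels fast online"]

def process(texts):
    """Processing step: tokenize each document."""
    return [t.split() for t in texts]

def analyze(token_lists):
    """Analysis step: corpus-wide term frequencies."""
    return Counter(t for tokens in token_lists for t in tokens)

# Because every step lives in one script, the full pipeline is a single
# re-runnable chain — the reproducibility the authors call for.
counts = analyze(process(retrieve()))
print(counts.most_common(2))  # [('news', 2), ('fast', 2)]
```

Replacing any one stage (say, swapping the tokenizer) leaves the rest of the chain untouched, which is what the authors' "adaptability" criterion asks of a framework.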
76.
Flávia Gomes-Franco e Silva Juliana Colussi Paula Melani Rocha 《Journal of Radio & Audio Media》2018,25(1):77-91
This study was conducted in an environment of widespread use of social media and mobile applications in the mass media. The general goal of the study was to analyze the use of WhatsApp in cybermedia, specifically in radio. A case study was proposed to examine the use of WhatsApp on the program Las mañanas de RNE, broadcast by Spanish National Radio. It was found that the public was very accepting of the program’s initiative to solicit WhatsApp voice messages, beginning in November 2015. The case study used audio files of a direct broadcast that included specific times for audience participation. The use of WhatsApp was accepted by the audience, in addition to the use of the conventional telephone, as a tool well-suited to listener participation in radio programming. Finally, the study highlights the importance of interactive, participatory spaces in broadcasts through the creation of synergies with new forms of online participation.
77.
We have studied the efficiency of research in the EU by a percentile-based citation approach that analyzes the distribution of country papers among the world papers. Going up the citation scale, the frequency of papers from efficient countries increases while the frequency from inefficient countries decreases. In the percentile-based approach, this trend, which is uniform at any citation level, is measured by the ep index that equals the Ptop 1%/Ptop 10% ratio. By using the ep index we demonstrate that EU research on fast-evolving technological topics is less efficient than the world average and that the EU is far from being able to compete with the most advanced countries. The ep index also shows that the USA is well ahead of the EU in both fast- and slow-evolving technologies, which suggests that the advantage of the USA over the EU in innovation is due to low research efficiency in the EU. In accord with some previous studies, our results show that the European Commission’s ongoing claims about the excellence of EU research are based on a wrong diagnosis. The EU must focus its research policy on the improvement of its inefficient research. Otherwise, the future of Europeans is at risk.
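The ep index described above (a country's share of papers in the world top 1% divided by its share in the world top 10%, by citations) can be sketched directly. The threshold rule and the citation data below are simplified, hypothetical assumptions for illustration, not the paper's exact percentile method:

```python
def ep_index(country_cites, world_cites):
    """ep = Ptop 1% / Ptop 10%: the country's fraction of papers reaching the
    world top-1% citation threshold over its fraction reaching the top-10%."""
    ranked = sorted(world_cites, reverse=True)
    thr1 = ranked[max(1, len(ranked) // 100) - 1]   # top-1% citation threshold
    thr10 = ranked[max(1, len(ranked) // 10) - 1]   # top-10% citation threshold
    p1 = sum(c >= thr1 for c in country_cites) / len(country_cites)
    p10 = sum(c >= thr10 for c in country_cites) / len(country_cites)
    return p1 / p10 if p10 else float("nan")

# Hypothetical data: a world of 1,000 papers cited 1000, 999, ..., 1 times,
# and a country with five papers.
world = list(range(1000, 0, -1))
country = [1000, 950, 800, 500, 100]
print(round(ep_index(country, world), 2))  # 0.5
```

A country whose papers mirrored the world distribution would score ep = 0.01/0.10 = 0.1, so values above 0.1 indicate the over-representation at high citation levels that the abstract associates with efficient countries.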
78.
Facebook is challenging professional journalism. These challenges were evident in three incidents from 2016: the allegation that Facebook privileged progressive-leaning news on its trending feature; Facebook’s removal of the Pulitzer Prize-winning “Napalm Girl” photo from the pages of prominent users; and the proliferation of “fake news” during the US presidential election. Using theoretical concepts from the field of boundary work, this paper examines how The Guardian, The New York Times, Columbia Journalism Review and Poynter editorialized Facebook’s role in these three incidents to discursively construct the boundary between the value of professional journalism to democracy and Facebook’s ascendant role in facilitating essential democratic functions. Findings reveal that these publications attempted to define Facebook as a news organization (i.e., include it within the boundaries of journalism) so that they could then criticize the company for not following duties traditionally incumbent upon news organizations (i.e., place it outside the boundaries of journalism). This paper advances scholarship that focuses on both inward and outward conceptions of boundary work, further explores the complex challenge of defining who a journalist is in the face of rapidly changing technological norms, and advances scholarship in the field of media ethics that positions ethical analysis at the institutional level.
79.
80.
Ramiro Durán Martínez Gloria Gutiérrez Fernando Beltrán Llavador Fernando Martínez Abad 《Journal of Intercultural Communication Research》2016,45(4):338-354
Pre- and post-questionnaires answered by UK Nottingham Trent University (NTU) and Spain’s University of Salamanca (USAL) students on their placement abroad support a comparative analysis of its impact on their intercultural communicative competence, comprising the awareness, attitude, skills and knowledge dimensions. NTU students started their placements with better results in all of them. Yet, while both groups completed their stay with a similarly increased awareness, the knowledge and skills of the USAL group matched the results of NTU students and, whereas the scores of the attitude dimension towards the host country of NTU students did not increase significantly, those of USAL students decreased.