61.
ABSTRACT

Latent Dirichlet allocation (LDA) topic models are increasingly being used in communication research. Yet, questions regarding reliability and validity of the approach have received little attention thus far. In applying LDA to textual data, researchers need to tackle at least four major challenges that affect these criteria: (a) appropriate pre-processing of the text collection; (b) adequate selection of model parameters, including the number of topics to be generated; (c) evaluation of the model’s reliability; and (d) the process of validly interpreting the resulting topics. We review the research literature dealing with these questions and propose a methodology that addresses these challenges. Our overall goal is to make LDA topic modeling more accessible to communication researchers and to ensure compliance with disciplinary standards. Consequently, we develop a brief hands-on user guide for applying LDA topic modeling. We demonstrate the value of our approach with empirical data from an ongoing research project.
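For readers unfamiliar with how an LDA model is actually estimated, the following is a minimal collapsed Gibbs sampling sketch on a toy, pre-processed corpus. It is not the authors' method or code: the documents, the priors `alpha` and `beta`, and the number of topics `K` are all invented for illustration.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy corpus: already pre-processed (tokenized, stopwords removed).
docs = [
    ["media", "news", "press", "news"],
    ["election", "vote", "campaign", "vote"],
    ["news", "press", "media", "press"],
    ["campaign", "election", "vote", "election"],
]

K, alpha, beta = 2, 0.1, 0.01               # topics and Dirichlet priors (illustrative)
vocab = sorted({w for d in docs for w in d})
V = len(vocab)

# Random initial topic assignment for every token, plus count tables.
z = [[random.randrange(K) for _ in d] for d in docs]
ndk = [[0] * K for _ in docs]               # document-topic counts
nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
nk = [0] * K                                # tokens per topic
for di, d in enumerate(docs):
    for wi, w in enumerate(d):
        k = z[di][wi]
        ndk[di][k] += 1
        nkw[k][w] += 1
        nk[k] += 1

# Collapsed Gibbs sampling: repeatedly resample each token's topic
# proportionally to (doc-topic count + alpha) * (topic-word count + beta).
for _ in range(200):
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            k = z[di][wi]
            ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
            weights = [(ndk[di][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                       for j in range(K)]
            k = random.choices(range(K), weights=weights)[0]
            z[di][wi] = k
            ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1

# Inspect the top words per topic -- the "interpretation" step (d) above.
for k in range(K):
    top = sorted(nkw[k], key=nkw[k].get, reverse=True)[:2]
    print(f"topic {k}: {top}")
```

In practice one would use an established library rather than a hand-rolled sampler, but the sketch makes visible what the model parameters in challenge (b) control.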
62.
ABSTRACT

Employing a number of different standalone programs is a prevalent approach among communication scholars who use computational methods to analyze media content. For instance, a researcher might use a specific program or a paid service to scrape some content from the Web, then use another program to process the resulting data, and finally conduct statistical analysis or produce some visualizations in yet another program. This makes it hard to build reproducible workflows, and even harder to build on the work of earlier studies. To improve this situation, we propose and discuss four criteria that a framework for automated content analysis should fulfill: scalability, free and open source, adaptability, and accessibility via multiple interfaces. We also describe how to put these considerations into practice, discuss their feasibility, and point toward future developments.
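The scrape-process-analyze chain described above can, in principle, live in one rerunnable script instead of three separate programs. The sketch below is purely illustrative, not part of any framework the paper proposes; the stage functions `acquire`, `process`, and `analyze` and the inline data are hypothetical stand-ins.

```python
import csv
import io
from collections import Counter

# Stage 1: acquisition -- stubbed with inline data; in a real workflow this
# would fetch documents from the Web or an archive.
def acquire():
    return ["Candidate promises tax reform", "Tax reform debate continues"]

# Stage 2: processing -- tokenize and count terms per document.
def process(texts):
    return [Counter(t.lower().split()) for t in texts]

# Stage 3: analysis -- aggregate term frequencies and export as CSV, so the
# whole pipeline is reproducible from a single entry point.
def analyze(counts):
    total = Counter()
    for c in counts:
        total.update(c)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["term", "frequency"])
    for term, freq in total.most_common():
        writer.writerow([term, freq])
    return buf.getvalue()

report = analyze(process(acquire()))
print(report)
```

Because each stage is an ordinary function, the same script can be rerun end to end, which is exactly the reproducibility property the fragmented multi-program approach lacks.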
63.
The purpose of this study is to find a theoretically grounded, practically applicable and useful granularity level of an algorithmically constructed publication-level classification of research publications (ACPLC). The level addressed is the level of research topics. The methodology we propose uses synthesis papers and their reference articles to construct a baseline classification. A dataset of about 31 million publications, and their mutual citation relations, is used to obtain several ACPLCs of different granularity. Each ACPLC is compared to the baseline classification and the best performing ACPLC is identified. The results of two case studies show that the topics of the cases are closely associated with different classes of the identified ACPLC, and that these classes tend to treat only one topic. Further, the class size variation is moderate, and only a small proportion of the publications belong to very small classes. For these reasons, we conclude that the proposed methodology is suitable to determine the topic granularity level of an ACPLC and that the ACPLC identified by this methodology is useful for bibliometric analyses.
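One way to picture the comparison between a candidate ACPLC and the synthesis-paper baseline is to measure how strongly each synthesis paper's references concentrate in a single ACPLC class. The sketch below is a simplified stand-in for the paper's actual evaluation; the publication IDs, class labels, and the `concentration` measure are illustrative assumptions.

```python
from collections import Counter

# Hypothetical baseline: each synthesis paper's cited references, which are
# assumed to treat one research topic.
baseline = {
    "synthesis_A": ["p1", "p2", "p3", "p4"],
    "synthesis_B": ["p5", "p6", "p7"],
}
# Hypothetical candidate ACPLC: publication -> class label.
acplc = {"p1": 0, "p2": 0, "p3": 0, "p4": 1, "p5": 2, "p6": 2, "p7": 2}

def concentration(refs, classification):
    """Share of a synthesis paper's references that fall into its single
    most frequent ACPLC class (1.0 = topic perfectly recovered)."""
    classes = Counter(classification[r] for r in refs)
    return classes.most_common(1)[0][1] / len(refs)

scores = {s: concentration(refs, acplc) for s, refs in baseline.items()}
mean_score = sum(scores.values()) / len(scores)
print(scores, round(mean_score, 3))
```

Averaging such a score over many synthesis papers would give one crude way to rank ACPLCs of different granularity against the baseline.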
64.
Placing Facebook     
Facebook is challenging professional journalism. These challenges were evident in three incidents from 2016: the allegation that Facebook privileged progressive-leaning news on its trending feature; Facebook’s removal of the Pulitzer Prize-winning “Napalm Girl” photo from the pages of prominent users; and the proliferation of “fake news” during the US presidential election. Using theoretical concepts from the field of boundary work, this paper examines how The Guardian, The New York Times, Columbia Journalism Review and Poynter editorialized Facebook’s role in these three incidents to discursively construct the boundary between the value of professional journalism to democracy and Facebook’s ascendant role in facilitating essential democratic functions. Findings reveal that these publications attempted to define Facebook as a news organization (i.e., include it within the boundaries of journalism) so that they could then criticize the company for not following duties traditionally incumbent upon news organizations (i.e., place it outside the boundaries of journalism). This paper advances scholarship that focuses on both inward and outward conceptions of boundary work, further explores the complex challenge of defining who a journalist is in the face of rapidly changing technological norms, and advances scholarship in the field of media ethics that positions ethical analysis at the institutional level.
68.
With the rise of microfluidics over the past decade, there has come an ever more pressing need for a low-cost and rapid prototyping technology, especially for research and education purposes. In this article, we report a rapid prototyping process of chromed masks for various microfluidic applications. The process takes place outside a clean room, uses a commercially available video-projector, and can be completed in less than half an hour. We quantify the ranges of fields of view and of resolutions accessible through this video-projection system and report the fabrication of critical microfluidic components (junctions, straight channels, and curved channels). To exemplify the process, three common devices are produced using this method: a droplet generation device, a gradient generation device, and a neuro-engineering oriented device. The neuro-engineering oriented device is a compartmentalized microfluidic chip and therefore required the production and precise alignment of two different masks.