10,000 query results in total; search completed in 31 ms.
61.
ABSTRACT

Latent Dirichlet allocation (LDA) topic models are increasingly being used in communication research. Yet, questions regarding the reliability and validity of the approach have received little attention thus far. In applying LDA to textual data, researchers need to tackle at least four major challenges that affect these criteria: (a) appropriate pre-processing of the text collection; (b) adequate selection of model parameters, including the number of topics to be generated; (c) evaluation of the model’s reliability; and (d) the process of validly interpreting the resulting topics. We review the research literature dealing with these questions and propose a methodology that approaches these challenges. Our overall goal is to make LDA topic modeling more accessible to communication researchers and to ensure compliance with disciplinary standards. Consequently, we develop a brief hands-on user guide for applying LDA topic modeling. We demonstrate the value of our approach with empirical data from an ongoing research project.
62.
ABSTRACT

Employing a number of different standalone programs is a prevalent approach among communication scholars who use computational methods to analyze media content. For instance, a researcher might use a specific program or a paid service to scrape some content from the Web, then use another program to process the resulting data, and finally conduct statistical analysis or produce some visualizations in yet another program. This makes it hard to build reproducible workflows, and even harder to build on the work of earlier studies. To improve this situation, we propose and discuss four criteria that a framework for automated content analysis should fulfill: scalability, free and open source, adaptability, and accessibility via multiple interfaces. We also describe how to put these considerations into practice, discuss their feasibility, and point toward future developments.
63.
The purpose of this study is to find a theoretically grounded, practically applicable and useful granularity level of an algorithmically constructed publication-level classification of research publications (ACPLC). The level addressed is the level of research topics. The methodology we propose uses synthesis papers and their reference articles to construct a baseline classification. A dataset of about 31 million publications, and their mutual citation relations, is used to obtain several ACPLCs of different granularity. Each ACPLC is compared to the baseline classification and the best performing ACPLC is identified. The results of two case studies show that the topics of the cases are closely associated with different classes of the identified ACPLC, and that these classes tend to treat only one topic. Further, the class size variation is moderate, and only a small proportion of the publications belong to very small classes. For these reasons, we conclude that the proposed methodology is suitable to determine the topic granularity level of an ACPLC and that the ACPLC identified by this methodology is useful for bibliometric analyses.
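The comparison of candidate classifications against a baseline can be illustrated with a standard clustering-agreement measure. This is a hypothetical sketch, not the paper's actual evaluation metric or data: the adjusted Rand index stands in for whatever agreement measure the authors used, and the label vectors are made-up stand-ins for publication class assignments.

```python
# Hypothetical sketch: pick the ACPLC granularity whose class assignments
# best agree with a baseline classification built from synthesis papers.
# Labels are illustrative placeholders, not data from the study.
from sklearn.metrics import adjusted_rand_score

# Baseline classes derived from synthesis papers and their references.
baseline = [0, 0, 0, 1, 1, 2, 2, 2]

# Two candidate ACPLCs obtained at different granularity settings.
acplc_coarse = [0, 0, 0, 0, 1, 1, 1, 1]
acplc_fine = [0, 0, 0, 1, 1, 2, 2, 3]

# Score each candidate against the baseline; higher means closer agreement.
scores = {
    "coarse": adjusted_rand_score(baseline, acplc_coarse),
    "fine": adjusted_rand_score(baseline, acplc_fine),
}
best = max(scores, key=scores.get)
print(best, scores[best])
```

Here the finer-grained candidate agrees more closely with the baseline, so it would be selected as the topic-level granularity.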
64.
Placing Facebook     
Facebook is challenging professional journalism. These challenges were evident in three incidents from 2016: the allegation that Facebook privileged progressive-leaning news on its trending feature; Facebook’s removal of the Pulitzer Prize-winning “Napalm Girl” photo from the pages of prominent users; and the proliferation of “fake news” during the US presidential election. Using theoretical concepts from the field of boundary work, this paper examines how The Guardian, The New York Times, Columbia Journalism Review and Poynter editorialized Facebook’s role in these three incidents to discursively construct the boundary between the value of professional journalism to democracy and Facebook’s ascendant role in facilitating essential democratic functions. Findings reveal that these publications attempted to define Facebook as a news organization (i.e., include it within the boundaries of journalism) so that they could then criticize the company for not following duties traditionally incumbent upon news organizations (i.e., place it outside the boundaries of journalism). This paper advances scholarship that focuses on both inward and outward conceptions of boundary work, further explores the complex challenge of defining who a journalist is in the face of rapidly changing technological norms, and advances scholarship in the field of media ethics that positions ethical analysis at the institutional level.
68.
With the rise of microfluidics over the past decade, there is an ever more pressing need for a low-cost and rapid prototyping technology, especially for research and education purposes. In this article, we report a rapid prototyping process of chromed masks for various microfluidic applications. The process takes place out of a clean room, uses a commercially available video-projector, and can be completed in less than half an hour. We quantify the ranges of fields of view and of resolutions accessible through this video-projection system and report the fabrication of critical microfluidic components (junctions, straight channels, and curved channels). To exemplify the process, three common devices are produced using this method: a droplet generation device, a gradient generation device, and a neuro-engineering oriented device. The neuro-engineering oriented device is a compartmentalized microfluidic chip and therefore requires the production and precise alignment of two different masks.