991.
The critical task of predicting clicks on search advertisements is typically addressed by learning from historical click data. When enough history is observed for a given query-ad pair, future clicks can be accurately modeled. However, based on the empirical distribution of queries, sufficient historical information is unavailable for many query-ad pairs. The sparsity of data for new and rare queries makes it difficult to accurately estimate clicks for a significant portion of typical search engine traffic. In this paper we provide analysis to motivate modeling approaches that can reduce the sparsity of the large space of user search queries. We then propose methods to improve click and relevance models for sponsored search by mining click behavior for partial user queries. We aggregate click history for individual query words, as well as for phrases extracted with a CRF model. The new models show significant improvement in clicks and revenue compared to state-of-the-art baselines trained on several months of query logs. Results are reported on live traffic of a commercial search engine, in addition to results from offline evaluation.
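A minimal sketch of the backoff idea in this abstract, assuming a simple click log of (query, clicks, impressions) tuples: when a query-ad pair has little or no history, per-word click statistics are aggregated and smoothed to stand in for it. The CRF-based phrase extraction step is omitted, and the function names and smoothing prior are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

def build_word_stats(click_log):
    """Aggregate clicks and impressions per query word from (query, clicks, impressions) rows."""
    stats = defaultdict(lambda: [0, 0])  # word -> [clicks, impressions]
    for query, clicks, impressions in click_log:
        for word in query.lower().split():
            stats[word][0] += clicks
            stats[word][1] += impressions
    return stats

def word_backoff_ctr(query, word_stats, prior_ctr=0.05, prior_weight=10.0):
    """Average smoothed per-word CTRs as a proxy when the exact query-ad pair is unseen."""
    ctrs = []
    for word in query.lower().split():
        clicks, impressions = word_stats.get(word, (0, 0))
        ctrs.append((clicks + prior_weight * prior_ctr) / (impressions + prior_weight))
    return sum(ctrs) / len(ctrs) if ctrs else prior_ctr

stats = build_word_stats([("cheap flights", 30, 400), ("flights to rome", 12, 300)])
print(word_backoff_ctr("cheap flights to paris", stats))  # backed-off CTR estimate
```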
992.
Empirical modeling of the score distributions associated with retrieved documents is an essential task for many retrieval applications. In this work, we propose modeling the relevant documents’ scores by a mixture of Gaussians and the non-relevant scores by a Gamma distribution. Applying Variational Bayes, we automatically trade off goodness-of-fit against the complexity of the model. We test our model on traditional retrieval functions and actual search engines submitted to TREC. We demonstrate the utility of our model in inferring precision-recall curves. In all experiments our model outperforms the dominant exponential-Gaussian model.
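As a rough illustration of the mixture described above, the sketch below fits a variational (Bayesian) Gaussian mixture to scores of relevant documents and a Gamma distribution to non-relevant scores, then combines them into a posterior probability of relevance for a given score. The synthetic scores, the prior probability of relevance, and the component cap are assumptions for illustration; the paper's own variational treatment and evaluation are more involved.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
relevant = np.concatenate([rng.normal(6.0, 0.5, 200), rng.normal(8.0, 0.3, 100)])  # toy relevant scores
non_relevant = rng.gamma(shape=2.0, scale=1.0, size=2000)                          # toy non-relevant scores

# Variational Bayes trades goodness-of-fit against model complexity by pruning unneeded components.
gmm = BayesianGaussianMixture(n_components=5, random_state=0).fit(relevant.reshape(-1, 1))
shape, loc, scale = stats.gamma.fit(non_relevant, floc=0.0)

def p_relevant(score, prior_rel=0.1):
    """Posterior probability that a document with this retrieval score is relevant."""
    p_s_rel = float(np.exp(gmm.score_samples(np.array([[score]])))[0])
    p_s_non = stats.gamma.pdf(score, shape, loc=loc, scale=scale)
    num = prior_rel * p_s_rel
    return num / (num + (1 - prior_rel) * p_s_non)

print(p_relevant(7.0), p_relevant(2.0))
```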
993.
We first present in this paper an analytical view of heuristic retrieval constraints which yields simple tests to determine whether a retrieval function satisfies the constraints or not. We then review empirical findings on word frequency distributions and the central role played by burstiness in this context. This leads us to propose a formal definition of burstiness which can be used to characterize probability distributions with respect to this phenomenon. We then introduce the family of information-based IR models which naturally captures heuristic retrieval constraints when the underlying probability distribution is bursty and propose a new IR model within this family, based on the log-logistic distribution. The experiments we conduct on several collections illustrate the good behavior of the log-logistic IR model: It significantly outperforms the Jelinek-Mercer and Dirichlet prior language models on most collections we have used, with both short and long queries and for both the MAP and the precision at 10 documents. It also compares favorably to BM25 and has similar performance to classical DFR models such as InL2 and PL2.
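For concreteness, here is a small sketch of a log-logistic, information-based scorer in the spirit of the model described above: the term's length-normalized frequency is plugged into a log-logistic survival function whose parameter is the term's document-frequency ratio, and the retrieval score sums the resulting information amounts. The exact normalizations and constants used in the paper may differ; treat c=1.0 and the toy statistics as assumptions.

```python
import math

def log_logistic_score(query_terms, doc_tf, doc_len, avg_doc_len, doc_freq, n_docs, c=1.0):
    """Sum over matched terms of -log P(X >= t | lambda), with
    t = tf * log(1 + c * avgdl / dl) and lambda_w = df_w / N."""
    score = 0.0
    for w in query_terms:
        tf = doc_tf.get(w, 0)
        if tf == 0:
            continue
        t = tf * math.log(1.0 + c * avg_doc_len / doc_len)  # length-normalized term frequency
        lam = doc_freq[w] / n_docs                          # burstiness parameter from document frequency
        score += -math.log(lam / (lam + t))                 # log-logistic survival probability
    return score

print(log_logistic_score(["retrieval", "model"], {"retrieval": 3, "model": 1},
                         doc_len=120, avg_doc_len=150,
                         doc_freq={"retrieval": 50, "model": 400}, n_docs=10000))
```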
994.
To evaluate the effectiveness of Information Retrieval Systems, evaluation programs such as TREC offer a rigorous methodology as well as benchmark collections. Whatever the evaluation collection used, effectiveness is generally considered globally, averaging the results over a set of information needs. As a result, the variability of system performance is hidden, as the similarities and differences from one system to another are averaged out. Moreover, the topics on which a given system succeeds or fails are left unknown. In this paper we propose an approach based on data analysis methods (correspondence analysis and clustering) to discover correlations between systems and to find trends in topic/system correlations. We show that it is possible to cluster topics and systems according to system performance on these topics, some system clusters being better on some topics. Finally, we propose a new method for identifying complementary systems based on their performance, which can be applied, for example, to repeated queries. We profile each system by the set of TREC topics on which it achieves similar levels of performance. We show that this method is effective when using the TREC ad hoc collection.
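A minimal sketch of the clustering half of this approach, using made-up per-topic average-precision values rather than real TREC runs: systems are represented by their per-topic performance profiles and grouped so that systems behaving similarly across topics fall in the same cluster. The correspondence-analysis step is omitted, and the data and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Rows are systems, columns are topics; entries are per-topic average precision (synthetic).
ap = np.clip(rng.normal(loc=[[0.20], [0.25], [0.50], [0.55]], scale=0.05, size=(4, 50)), 0.0, 1.0)

# Systems with similar topic profiles receive the same cluster label.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ap)
print(labels)
```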
995.
We study the problem of web search result diversification in the case where intent-based relevance scores are available. A diversified search result will hopefully satisfy the information need of users who may have different intents. In this context, we first analyze the properties of an intent-based metric, ERR-IA, which measures relevance and diversity together. We argue that this is a better metric than some previously proposed intent-aware metrics and show that it has a better correlation with abandonment rate. We then propose an algorithm to rerank web search results by optimizing an objective function corresponding to this metric and evaluate it on shopping-related queries.
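For reference, the sketch below computes ERR-IA as it is usually defined: ERR is computed per intent from graded relevance, using the standard gain-to-stopping-probability mapping, and the per-intent values are averaged under the intent distribution. The toy intents and grades are assumptions for illustration, not data from the paper.

```python
def err(grades, g_max=4):
    """Expected Reciprocal Rank for one intent, given graded relevance down the ranking."""
    stop_probs = [(2 ** g - 1) / (2 ** g_max) for g in grades]
    score, p_reached = 0.0, 1.0
    for rank, p_stop in enumerate(stop_probs, start=1):
        score += p_reached * p_stop / rank
        p_reached *= (1.0 - p_stop)
    return score

def err_ia(intent_probs, grades_by_intent):
    """Intent-aware ERR: probability-weighted ERR over the query's possible intents."""
    return sum(p * err(grades_by_intent[i]) for i, p in intent_probs.items())

intents = {"buy": 0.7, "reviews": 0.3}
grades = {"buy": [4, 0, 2], "reviews": [0, 3, 1]}
print(err_ia(intents, grades))
```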
996.
As new media technologies and platforms emerge and take hold in our society, traditional publishers are wondering: What’s in this new content climate for me? The simple answer is: a lot. The digital world, mobile content delivery mechanisms, and the public’s increasing comfort with—even preference for—a media menu from which they can pick and choose what they want and how they want to receive it bring exciting and potentially lucrative opportunities. For publishers who understand how to leverage their brand and create authentic, identifiable value in the eyes of the customer, risk can be reduced and new revenue streams built. Here are four best practices to position your publishing company for growth.
997.
Updated from a presentation given at Biblionext.it in Rome in April 2011, this article will highlight The New York Public Library’s success with e-books and other forms of popular e-content and our efforts to stay one step ahead of the consumer shift from print reading to e-reading. Consumer e-reading is dominated by Amazon.com in the US, followed by one of the largest chain bookstores, BarnesandNoble.com. The availability of digital versions of very popular titles, coupled with the explosion of e-readers, tablets, and smartphones that are priced competitively and fairly easy to use, is helping move a lot of Americans into the e-book world. Last month, Amazon.com announced they had sold more e-books than physical books for the first time ever. Print books are not going away, but our experience makes it clear that e-books are no longer just an extra format to offer; they are integral to our future.
998.
We argue that some algorithms are value-laden, and that two or more persons who accept different value-judgments may have a rational reason to design such algorithms differently. We exemplify our claim by discussing a set of algorithms used in medical image analysis: in these algorithms it is often necessary to set certain thresholds for whether, for example, a cell should count as diseased or not, and the chosen threshold will partly depend on the software designer’s preference between avoiding false positives and avoiding false negatives. In the last section of the paper we discuss some general principles for dealing with ethical issues in algorithm design.
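A small sketch of the trade-off the abstract points to, with made-up classifier scores and labels: moving the "diseased" threshold shifts errors between false positives and false negatives, and which balance is acceptable is exactly the kind of value-judgment discussed above.

```python
scores = [0.10, 0.35, 0.40, 0.62, 0.70, 0.90]        # hypothetical per-cell model scores
diseased = [False, False, True, False, True, True]   # hypothetical ground truth

for threshold in (0.3, 0.5, 0.8):
    fp = sum(s >= threshold and not d for s, d in zip(scores, diseased))
    fn = sum(s < threshold and d for s, d in zip(scores, diseased))
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```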
999.
In this paper, I examine the ethics of e-trust and e-trustworthiness in the context of health care, looking at direct computer-patient interfaces (DCPIs), information systems that provide medical information, diagnosis, advice, consenting and/or treatment directly to patients without clinicians as intermediaries. Designers, manufacturers and deployers of such systems have an ethical obligation to provide evidence of their trustworthiness to users. My argument for this claim is based on evidentialism about trust and trustworthiness: the idea that trust should be based on sound evidence of trustworthiness. Evidence of trustworthiness is a broader notion than one might suppose, including not just information about the risks and performance of the system, but also interactional and context-based information. I suggest some sources of evidence in this broader sense that make it plausible that designers, manufacturers and deployers of DCPIs can provide evidence to users that is cognitively simple, easy to communicate, yet rationally connected with actual trustworthiness.