971.
The Undergraduate Research Student Self-Assessment (URSSA): Validation for Use in Program Evaluation
This article examines the validity of the Undergraduate Research Student Self-Assessment (URSSA), a survey used to evaluate undergraduate research (UR) programs. The underlying structure of the survey was assessed with confirmatory factor analysis; we also examined correlations between different average scores, score reliability, and matches between numerical and textual item responses. The study found that four components of the survey represent separate but related constructs for cognitive skills and affective learning gains derived from the UR experience. Average scores from item blocks formed reliable but moderately to highly correlated composite measures. Additionally, some questions about student learning gains (meant to assess individual learning) correlated with ratings of satisfaction with external aspects of the research experience. The pattern of correlation among individual items suggests that items asking students to rate external aspects of their environment functioned more like satisfaction ratings than items asking directly about students' skill attainment. Finally, survey items asking about student aspirations to attend graduate school in science yielded inflated estimates of the proportion of students who had actually decided on graduate education after their UR experiences. Recommendations for revising the survey include clarifying item wording and increasing discrimination between item blocks through reorganization.

Undergraduate research (UR) experiences have long been an important component of science education at universities and colleges but have received greater attention in recent years, as they have been identified as important ways to strengthen preparation for advanced study and work in the science fields, especially among students from underrepresented minority groups (Tsui, 2007; Kuh, 2008). UR internships provide students with the opportunity to conduct authentic research in laboratories with scientist mentors, as students help design projects, gather and analyze data, and write up and present findings (Laursen et al., 2010). The promised benefits of UR experiences include both increased skills and greater familiarity with how science is practiced (Russell et al., 2007). While students learn the basics of scientific methods and laboratory skills, they are also exposed to the culture and norms of science (Carlone and Johnson, 2007; Hunter et al., 2007; Lopatto, 2010). Students learn about the day-to-day world of practicing science and are introduced to how scientists design studies, collect and analyze data, and communicate their research. After participating in UR, students may make more informed decisions about their futures, and some may be more likely to decide to pursue graduate education in science, technology, engineering, and mathematics (STEM) disciplines (Bauer and Bennett, 2003; Russell et al., 2007; Eagan et al., 2013).

While UR experiences potentially have many benefits for undergraduate students, assessing these benefits is challenging (Laursen, 2015). Large-scale, research-based evaluation of the effects of UR is limited by a range of methodological problems (Eagan et al., 2013). True experimental studies are almost impossible to implement, since random assignment of students into UR programs is both logistically and ethically impractical, while many simple comparisons between UR and non-UR groups of students suffer from noncomparable groups and limited generalizability (Maton and Hrabowski, 2004).
Survey studies often rely on poorly developed measures and nonrepresentative samples, and large-scale survey research usually requires complex statistical models to control for student self-selection into UR programs (Eagan et al., 2013). For smaller-scale program evaluation, evaluators also encounter a number of measurement problems. Because of the wide range of disciplines, research topics, and methods, standardized tests that assess laboratory skills and understanding across these disciplines are difficult to find. While faculty at individual sites may directly assess products, presentations, and behavior using authentic assessments such as portfolios, rubrics, and performance assessments, these assessments can be time-consuming and are not easily comparable with similar efforts at other laboratories (Stokking et al., 2004; Kuh et al., 2014). Additionally, the affective outcomes of UR are not readily tapped by direct academic assessment: many of the benefits found for students in UR, such as motivation, enculturation, and self-efficacy, are not measured by tests or other assessments (Carlone and Johnson, 2007). Other instruments for assessing UR outcomes, such as Lopatto's SURE (Lopatto, 2010), focus on these affective outcomes rather than on direct assessment of skills and cognitive gains.

The size of most UR programs also makes assessment difficult. Research Experiences for Undergraduates (REUs), one mechanism by which UR programs may be organized within an institution, are funded by the National Science Foundation (NSF). But unlike many other NSF educational programs (e.g., TUES) that require fully funded evaluations with multiple sources of evidence (Frechtling, 2010), REUs are generally so small that they cannot support this type of evaluation unless multiple programs pool their resources. Informal UR experiences, offered to students by individual faculty within their own laboratories, are often more common but are typically not coordinated across departments or institutions or accountable to a central office or agency for assessment. Partly toward this end, the Undergraduate Research Student Self-Assessment (URSSA) was developed as a common assessment instrument whose results can be compared across multiple UR sites within or across institutions. It is meant to be used as one source of assessment information about UR sites and their students.

The current research examines the validity of the URSSA in the context of its use as a self-report survey for UR programs and laboratories. Because the survey has been taken by more than 3,400 students, we can test some aspects of how the survey is structured and how it functions. Assessing the validity of the URSSA for its intended use is a process of testing hypotheses about how well the survey represents its intended content. This ongoing process (Messick, 1993; Kane, 2001) involves gathering evidence from a range of sources to learn whether validity claims are supported and whether the survey results can be used confidently in specific contexts. For the URSSA, our inquiry focuses on how the survey is used to assess consortia of REU sites. In this context, survey results are used for quality assurance, for comparisons of average ratings across years, and as general indicators of program success in encouraging students to pursue graduate science education and scientific careers.
Our research questions focus on the meaning and reliability of the "core indicators" used to track self-reported learning gains in four areas, and on the ability of numerical items to capture student aspirations to attend graduate school in the sciences.
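The reliability and correlation checks described in this abstract follow standard psychometric practice. As a rough illustration only (not the authors' actual code, data, or item names; the blocks and columns below are hypothetical), this sketch computes Cronbach's alpha for each item block and the correlations among block composites, the two quantities behind the claim that the composites are reliable but moderately to highly correlated.

```python
# Illustrative sketch only: hypothetical item/block names and randomly
# generated responses. It shows the mechanics of the reliability and
# correlation checks named in the abstract, not the authors' analysis.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / variance(total))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Four hypothetical item blocks standing in for the survey's four constructs.
blocks = {
    "thinking_like_scientist": ["t1", "t2", "t3"],
    "personal_gains": ["p1", "p2", "p3"],
    "research_skills": ["r1", "r2", "r3"],
    "attitudes_behaviors": ["a1", "a2", "a3"],
}

# Simulated 5-point Likert responses (random noise, so the printed numbers
# are meaningless; only the computation itself is of interest here).
rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.integers(1, 6, size=(200, 12)),
    columns=[col for cols in blocks.values() for col in cols],
)

# Reliability of each item block.
for name, cols in blocks.items():
    print(f"alpha[{name}] = {cronbach_alpha(df[cols]):.2f}")

# Correlations among the block composites (the mean of each block's items).
composites = pd.DataFrame({name: df[cols].mean(axis=1) for name, cols in blocks.items()})
print(composites.corr().round(2))
```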
972.
Testing within the science classroom is commonly used for both formative and summative assessment purposes, letting the student and the instructor gauge progress toward learning goals. Research within cognitive science suggests, however, that testing can also be a learning event. We present summaries of studies suggesting that repeated retrieval can enhance long-term learning in a laboratory setting; that various testing formats can promote learning; that feedback enhances the benefits of testing; that testing can potentiate further study; and that the benefits of testing are not limited to rote memory. Most of these studies were performed in a laboratory environment, so we also present summaries of experiments suggesting that the benefits of testing can extend to the classroom. Finally, we suggest opportunities that these observations raise for the classroom and for further research.

Almost all science classes incorporate testing. Tests are most commonly used as summative assessment tools meant to gauge whether students have achieved the learning objectives of the course. They are sometimes also used as formative assessment tools—often in the form of low-stakes weekly or daily quizzes—to give students and faculty members a sense of students' progression toward those learning objectives. Occasionally, tests are also used as diagnostic tools to determine students' preexisting conceptions or skills relevant to an upcoming subject. Rarely, however, do we think of tests as learning tools. We may acknowledge that testing promotes student learning, but we often attribute this effect to the studying students do to prepare for the test. And yet one of the most consistent findings in cognitive psychology is that testing increases retention more than studying alone does (Roediger and Butler, 2011; Roediger and Pyc, 2012). The effect can be enhanced when students receive feedback on failed tests, and it is observed for both short-term and long-term retention. There is some evidence that testing improves not only students' memory of the tested information but also their ability to remember related information. Finally, testing appears to potentiate further study, allowing students to gain more from study periods that follow a test. Given the potential power of testing as a tool to promote learning, we should consider how to incorporate tests into our courses not only to gauge students' learning but also to promote it (Klionsky, 2008).

We provide six observations about the effects of testing from the cognitive psychology literature, summarizing the key studies that led to these conclusions (see the table below).

Study | Research question(s) | Conclusion | Delay before final test | Participants

Repeated retrieval enhances long-term retention in a laboratory setting:
"Test-enhanced learning: taking memory tests improves long-term retention" (Roediger and Karpicke, 2006a) | Is a testing effect observed in educationally relevant conditions? Is the benefit of testing greater than the benefit of restudy? Do multiple tests produce a greater effect than a single test? | Testing improved retention significantly more than restudy on delayed tests; multiple tests provided greater benefit than a single test. | Experiment 1: 2 d and 1 wk; Experiment 2: 1 wk | Undergraduates ages 18–24, Washington University
"Retrieval practice with short-answer, multiple-choice, and hybrid tests" (Smith and Karpicke, 2014) | What effect does the type of question presented in retrieval practice have on long-term retention? | Retrieval practice with multiple-choice, free-response, and hybrid formats improved students' performance on a final test taken 1 wk later, compared with a no-retrieval control, both for questions requiring only recall and for those requiring inference; hybrid questions provided an advantage when the final test had a short-answer format. | 1 wk | Undergraduates, Purdue University
"Retrieval practice produces more learning than elaborative studying with concept mapping" (Karpicke and Blunt, 2011) | What is the effect of retrieval practice on learning relative to elaborative study using a concept map? | Students in the retrieval-practice condition had greater gains in meaningful learning than those who used elaborative concept mapping as a learning tool. | 1 wk | Undergraduates

Various testing formats can enhance learning:
"Retrieval practice with short-answer, multiple-choice, and hybrid tests" (Smith and Karpicke, 2014) | See above. | See above. | See above. | See above.
"Test format and corrective feedback modify the effect of testing on long-term retention" (Kang et al., 2007) | What effect does the type of question used for retrieval practice have on retention? Does feedback have an effect on retention for different types of questions? | With no feedback, the difference in long-term retention between short-answer and multiple-choice questions was insignificant; with feedback, short-answer questions were slightly more beneficial. | 3 d | Undergraduates, Washington University psychology subject pool
"The persisting benefits of using multiple-choice tests as learning events" (Little and Bjork, 2012) | What effect does question format have on retention of information previously tested and of related information not included in retrieval practice? | Both cued-recall and multiple-choice questions improved recall compared with the no-test control; however, multiple-choice questions improved recall of information not included in the retrieval practice more than cued-recall questions did, after both a 5-min and a 48-h delay. | 48 h | Undergraduates, University of California, Los Angeles

Feedback enhances the benefits of testing:
"Feedback enhances positive effects and reduces the negative effects of multiple-choice testing" (Butler and Roediger, 2008) | What effect does feedback on multiple-choice tests have on long-term retention of information? | Feedback improved retention on a final cued-recall test given 1 wk after the initial test; delayed feedback produced better final performance than immediate feedback, though both were beneficial compared with no feedback. | 1 wk | Undergraduate psychology students, Washington University
"Correcting a metacognitive error: feedback increases retention of low-confidence responses" (Butler et al., 2008) | What role does feedback play in retrieval practice? Can it correct metacognitive errors as well as memory errors? | Feedback benefited both initially correct and initially incorrect answers, with low-confidence answers benefiting most. | 5 min | Undergraduate psychology students, Washington University

Learning is not limited to rote memory:
"Retrieval practice produces more learning than elaborative studying with concept mapping" (Karpicke and Blunt, 2011) | What is the effect of retrieval practice on learning relative to elaborative study using a concept map? Does retrieval practice improve students' ability to perform higher-order cognitive activities (i.e., building a concept map) as well as simple recall tasks? | Compared with elaborative study using concept mapping, retrieval practice improved students' performance both on final tests requiring short answers and on final tests requiring concept-map production (see also the earlier entry for this study). | 1 wk | Undergraduates
"Retrieval practice with short-answer, multiple-choice, and hybrid tests" (Smith and Karpicke, 2014) | See above. | See above. | See above. | See above.
"Repeated testing produces superior transfer of learning relative to repeated studying" (Butler, 2010) | Does test-enhanced learning promote transfer of facts and concepts from one domain to another? | Testing improved retention and increased transfer of information from one domain to another, on questions requiring factual or conceptual recall and on inferential questions requiring transfer. | 1 wk | Undergraduate psychology students, Washington University

Testing potentiates further study:
"Pretesting with multiple-choice questions facilitates learning" (Little and Bjork, 2011) | Does pretesting with multiple-choice questions improve performance on a later test? Is the effect observed only for pretested information or also for related, previously untested information? | A multiple-choice pretest improved performance on a final test, both for information included on the pretest and for related information. | 1 wk | Undergraduates, University of California, Los Angeles
"The interim test effect: testing prior material can facilitate the learning of new material" (Wissman et al., 2011) | Does an interim test on previously learned material improve retention of subsequently learned material? | Interim testing improved recall on a final test for information taught both before and after the interim test. | No delay | Undergraduates, Kent State University

The benefits of testing appear to extend to the classroom:
"The exam-a-day procedure improves performance in psychology classes" (Leeming, 2002) | What effect does a daily exam have on retention at the end of the semester? | Students who took a daily exam in an undergraduate psychology class scored higher on an end-of-course retention test and had higher average grades than students who took only unit tests. | One semester | Undergraduates enrolled in a summer term of Introductory Psychology, University of Memphis
"Repeated testing improves long-term retention relative to repeated study: a randomized controlled trial" (Larsen et al., 2009) | Does repeated testing improve long-term retention in a real learning environment? | In a study with medical residents, repeated testing with feedback improved retention more than repeated study did, on a final recall test 6 mo later. | 6 mo | Residents from Pediatrics and Emergency Medicine programs, Washington University
"Retrieving essential material at the end of lectures improves performance on statistics exams" (Lyle and Crawford, 2011) | What effect does daily recall practice using the PUREMEM method have on course exam scores? | Students taught with the PUREMEM method had higher exam scores than students taught with traditional lectures, as assessed by four noncumulative exams spaced evenly through the semester. | ∼3.5 wk | Undergraduates enrolled in either of two consecutive years of Statistics for Psychology, University of Louisville
"Using quizzes to enhance summative-assessment performance in a web-based class: an experimental study" (McDaniel et al., 2012) | What effects do online testing resources have on retention of information in an online undergraduate neuroscience course? | Both multiple-choice and short-answer quiz questions improved retention and raised final-exam scores, both on questions identical to those on the weekly quizzes and on related but not identical questions. | 15 wk | Undergraduates enrolled in a web-based brain and behavior course
"Increasing student success using online quizzing in introductory (majors) biology" (Orr and Foster, 2013) | What effect do required pre-exam quizzes have on final exam scores for students in an introductory (majors) biology course? | Students were required to complete 10 pre-exam quizzes during the semester, and the scores of students who completed all of the quizzes were compared with those of students who completed none; students of all abilities who completed all of the quizzes had higher average exam scores than those who completed none. | One semester | Community college students enrolled in an introductory biology course for majors
"Teaching students how to study: a workshop on information processing and self-testing helps students learn" (Stanger-Hall et al., 2011) | What effect does a self-testing exercise done in a workshop have on final exam questions covering the same topic as the workshop? | Students who participated in the retrieval-practice workshop performed better on exam questions related to the material covered in the workshop activity; however, there was no difference between the two groups in overall exam performance. | 10 wk | Undergraduates in an introductory biology class
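Reading across the table, the recurring design is a between-conditions comparison of scores on a delayed final test. As a purely hypothetical sketch of that kind of analysis (simulated data, not drawn from any study above), the snippet below compares a retrieval-practice group against a restudy group with an independent-samples t-test and a pooled-SD Cohen's d.

```python
# Purely hypothetical sketch: simulated data, not from any study cited above.
# It illustrates the recurring design in the table: compare delayed final-test
# scores between a retrieval-practice group and a restudy group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated proportion-correct scores on a 1-wk delayed final test, with a
# testing-effect-like advantage built into the simulation by construction.
retrieval_practice = rng.normal(loc=0.61, scale=0.12, size=40).clip(0, 1)
restudy = rng.normal(loc=0.42, scale=0.12, size=40).clip(0, 1)

# Independent-samples t-test on the two groups.
t_stat, p_value = stats.ttest_ind(retrieval_practice, restudy)

# Cohen's d with a pooled standard deviation, a common effect size for
# two-group designs like these.
pooled_sd = np.sqrt((retrieval_practice.var(ddof=1) + restudy.var(ddof=1)) / 2)
d = (retrieval_practice.mean() - restudy.mean()) / pooled_sd

print(f"retrieval practice M = {retrieval_practice.mean():.2f}, restudy M = {restudy.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}")
```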