73 results found (search time: 31 ms)
2.
Building a metacognitive model of reflection   (cited 2 times: 0 self-citations, 2 by others)
An increased value is being placed on quality teaching in higher education. An important step in developing approaches to better instruction is understanding how those who are successful go about improving their teaching. Thus, several years ago we undertook a program of research in which the concept of reflection provided the frame of reference. We envisaged reflection as a process of formative evaluation, and also saw links between reflection and metacognition. What we have documented and analyzed in detail are the reflective processes of six university professors in their day-to-day planning, instructing and evaluating of learners. The result is a metacognitive model and coding scheme that operationalize the process of reflection. Both provide a language for describing reflection and therefore a way to think about how to improve teaching. In this paper, we describe the research and the model and the contributions they make to our understanding of teacher thinking in higher education.
4.
This article examines the validity of the Undergraduate Research Student Self-Assessment (URSSA), a survey used to evaluate undergraduate research (UR) programs. The underlying structure of the survey was assessed with confirmatory factor analysis; also examined were correlations between different average scores, score reliability, and matches between numerical and textual item responses. The study found that four components of the survey represent separate but related constructs for cognitive skills and affective learning gains derived from the UR experience. Average scores from item blocks formed reliable but moderately to highly correlated composite measures. Additionally, some questions about student learning gains (meant to assess individual learning) correlated with ratings of satisfaction with external aspects of the research experience. The pattern of correlation among individual items suggests that items asking students to rate external aspects of their environment behaved more like satisfaction ratings than items that directly ask about skills attainment. Finally, survey items asking about student aspirations to attend graduate school in science yielded inflated estimates of the proportions of students who had actually decided on graduate education after their UR experiences. Recommendations for revisions to the survey include clarifying item wording and increasing discrimination between item blocks through reorganization.

Undergraduate research (UR) experiences have long been an important component of science education at universities and colleges but have received greater attention in recent years, as they have been identified as important ways to strengthen preparation for advanced study and work in the science fields, especially among students from underrepresented minority groups (Tsui, 2007; Kuh, 2008). UR internships provide students with the opportunity to conduct authentic research in laboratories with scientist mentors, as students help design projects, gather and analyze data, and write up and present findings (Laursen et al., 2010). The promised benefits of UR experiences include both increased skills and greater familiarity with how science is practiced (Russell et al., 2007). While students learn the basics of scientific methods and laboratory skills, they are also exposed to the culture and norms of science (Carlone and Johnson, 2007; Hunter et al., 2007; Lopatto, 2010). Students learn about the day-to-day world of practicing science and are introduced to how scientists design studies, collect and analyze data, and communicate their research. After participating in UR, students may make more informed decisions about their future, and some may be more likely to decide to pursue graduate education in science, technology, engineering, and mathematics (STEM) disciplines (Bauer and Bennett, 2003; Russell et al., 2007; Eagan et al., 2013).

While UR experiences potentially have many benefits for undergraduate students, assessing these benefits is challenging (Laursen, 2015). Large-scale research-based evaluation of the effects of UR is limited by a range of methodological problems (Eagan et al., 2013). True experimental studies are almost impossible to implement, since random assignment of students into UR programs is both logistically and ethically impractical, while many simple comparisons between UR and non-UR groups of students suffer from noncomparable groups and limited generalizability (Maton and Hrabowski, 2004). Survey studies often rely on poorly developed measures and use nonrepresentative samples, and large-scale survey research usually requires complex statistical models to control for student self-selection into UR programs (Eagan et al., 2013).

For smaller-scale program evaluation, evaluators also encounter a number of measurement problems. Because of the wide range of disciplines, research topics, and methods, common standardized tests assessing laboratory skills and understandings across these disciplines are difficult to find. While faculty at individual sites may directly assess products, presentations, and behavior using authentic assessments such as portfolios, rubrics, and performance assessments, these assessments can be time-consuming and are not easily comparable with similar efforts at other laboratories (Stokking et al., 2004; Kuh et al., 2014). Additionally, the affective outcomes of UR are not readily tapped by direct academic assessment, as many of the benefits found for students in UR, such as motivation, enculturation, and self-efficacy, are not measured by tests or other assessments (Carlone and Johnson, 2007). Other instruments for assessing UR outcomes, such as Lopatto's SURE (Lopatto, 2010), focus on these affective outcomes rather than direct assessments of skills and cognitive gains.

The size of most UR programs also makes assessment difficult. Research Experiences for Undergraduates (REUs), one mechanism by which UR programs may be organized within an institution, are funded by the National Science Foundation (NSF), but unlike many other educational programs at NSF (e.g., TUES) that require fully funded evaluations with multiple sources of evidence (Frechtling, 2010), REUs are generally so small that they cannot typically support this type of evaluation unless multiple programs pool their resources. Informal UR experiences, offered to students by individual faculty within their own laboratories, are often more common but are typically not coordinated across departments or institutions or accountable to a central office or agency for assessment.

Partly toward this end, the URSSA was developed as a common assessment instrument whose results can be compared across multiple UR sites within or across institutions. It is meant to be used as one source of assessment information about UR sites and their students.

The current research examines the validity of the URSSA in the context of its use as a self-report survey for UR programs and laboratories. Because the survey has been taken by more than 3400 students, we can test some aspects of how the survey is structured and how it functions. Assessing the validity of the URSSA for its intended use is a process of testing hypotheses about how well the survey represents its intended content. This ongoing process (Messick, 1993; Kane, 2001) involves gathering evidence from a range of sources to learn whether validity claims are supported and whether the survey results can be used confidently in specific contexts. For the URSSA, our method of inquiry focuses on how the survey is used to assess consortia of REU sites. In this context, survey results are used for quality assurance, for comparisons of average ratings across years, and as general indicators of program success in encouraging students to pursue graduate science education and scientific careers. Our research questions focus on the meaning and reliability of "core indicators" used to track self-reported learning gains in four areas, and on the ability of numerical items to capture student aspirations to attend graduate school in the sciences.
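The score reliability discussed in this abstract — composite "core indicator" measures built from blocks of Likert items — is conventionally summarized with an internal-consistency coefficient such as Cronbach's alpha. A minimal sketch follows; the `cronbach_alpha` helper and the simulated five-item block are illustrative assumptions, not the URSSA's actual items or data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for one block of survey items.

    items: array of shape (n_respondents, n_items) holding Likert scores.
    """
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed block scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Simulated ratings for a hypothetical five-item block (1-5 Likert scale):
# one shared latent trait plus item-level noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 5))), 1, 5)
print(round(cronbach_alpha(scores), 2))
```

Because the simulated items share a common latent trait, the block comes out highly reliable; uncorrelated items would drive the coefficient toward zero.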
5.
In this article we present Supervised Semantic Indexing (SSI), which defines a class of nonlinear (quadratic) models that are discriminatively trained to map directly from the word content of a query-document or document-document pair to a ranking score. Like Latent Semantic Indexing (LSI), our models take account of correlations between words (synonymy, polysemy). Unlike LSI, however, our models are trained with a supervised signal directly on the ranking task of interest, which we argue is the reason for their superior results. Because the query and target texts are modeled separately, the approach generalizes easily to different retrieval tasks, such as cross-language retrieval or online advertisement placement. Models over all pairs of word features are computationally challenging, so we propose several improvements to the basic model: low-rank (but diagonal-preserving) representations, correlated feature hashing, and sparsification. We provide an empirical study of all these methods on retrieval tasks based on Wikipedia documents as well as an Internet advertisement task, obtaining state-of-the-art performance with realistically scalable methods.
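The quadratic scoring the abstract describes can be sketched as f(q, d) = qᵀWd with a diagonal-preserving low-rank parameterization W = UᵀV + I: the identity term keeps raw word overlap, while the learned factors capture cross-word correlations. The shapes, names, and toy vectors below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ssi_score(q, d, U, V):
    """Rank score f(q, d) = q^T (U^T V + I) d.

    U, V have shape (k, vocab): projecting q and d into k dimensions first
    avoids ever forming the vocab x vocab matrix W explicitly.
    """
    low_rank = (U @ q) @ (V @ d)  # q^T U^T V d, via two k-dim projections
    return low_rank + q @ d       # + q^T I d: exact term-overlap component

# Toy setup with hypothetical dimensions.
vocab, k = 1000, 50
rng = np.random.default_rng(1)
U = rng.normal(scale=0.01, size=(k, vocab))
V = rng.normal(scale=0.01, size=(k, vocab))
q = rng.random(vocab)  # tf-idf-style query vector (toy data)
d = rng.random(vocab)  # document vector (toy data)
print(ssi_score(q, d, U, V))
```

With U = V = 0 the score reduces to plain dot-product overlap, which is why the diagonal-preserving form never does worse than raw term matching at initialization.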
7.
This paper explores the design of a Web-based tutorial on Activity Analysis, a clinical practice tool, offered within an undergraduate occupational therapy course, and examines how its design features influenced meaningful learning from the students' perspective. The tutorial uses a case-based format and offers students a learner-directed approach to applying Activity Analysis. Its design is based on principles of meaningful learning for online instruction (Jonassen, Educational Technology, 35, 60–63, 1995) and on instructional theories. Analysis of learner feedback identifies the attributes of the tutorial most salient to meaningful learning.
9.
Home advantage in team games is well established, and the influence of the crowd upon officials' decisions has been identified as a plausible cause. The aim of this study was to assess the significance of home advantage for five event groups selected from the Summer Olympic Games between 1896 and 1996, and to place home advantage in team games in the context of other sports. The five event groups were athletics and weightlifting (predominantly objectively judged), boxing and gymnastics (predominantly subjectively judged) and team games (involving subjective decisions). The proportion of points won was analysed as a binomial response variable using generalized linear interactive modelling. Preliminary exploration of the data highlighted the need to control for the proportion of competitors entered and to split the analysis pre- and post-war. Highly significant home advantage was found in event groups that were either subjectively judged or rely on subjective decisions. In contrast, little or no home advantage (and even away advantage) was observed for the two objectively judged groups. The officiating system was vital to both the existence and extent of home advantage. Our findings suggest that crowd noise has a greater influence upon officials' decisions than upon players' performances, as events with greater officiating input enjoyed significantly greater home advantage.
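The core analysis here — proportion of points won modeled as a binomial response — is a logit-link binomial GLM. A minimal sketch follows, with invented point counts and a generic iteratively reweighted least squares (IRLS) fitter standing in for the GLIM software the study used.

```python
import numpy as np

def fit_binomial_glm(X, successes, trials, n_iter=25):
    """Fit a logit-link binomial GLM by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1 / (1 + np.exp(-eta))              # fitted win probabilities
        W = trials * p * (1 - p)                # IRLS weights
        z = eta + (successes - trials * p) / np.maximum(W, 1e-12)  # working response
        XtW = (X * W[:, None]).T
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Invented data: points won at home vs. away in a subjectively judged event.
# Column 0 is the intercept; column 1 indicates competing at home.
X = np.array([[1.0, 1.0],
              [1.0, 0.0]])
won = np.array([60.0, 45.0])       # points won (home row, away row)
played = np.array([100.0, 100.0])  # points contested
beta = fit_binomial_glm(X, won, played)
print(np.exp(beta[1]))  # odds ratio quantifying home advantage
```

The exponentiated home coefficient is the home-vs-away odds ratio: values above 1 indicate home advantage, and its significance would be judged against the binomial standard error.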
10.
Forty children, aged between three years ten months and four years six months (mean age four years three months), did computer-presented reading-readiness activities under different pre-activity and feedback conditions. The children were randomly allocated within sexes to four groups. Group 1 listened to a story involving a teddy bear character called BJ Bear and then did reading-readiness activities in which the smiling face of the bear appeared on the screen and a tune played when they responded correctly, and the sad face of the bear appeared when they were incorrect. Group 2 listened to the story and then did the activities with the smiling face of the bear and a tune when correct, but with nothing when they were wrong. Group 3 did not hear the story, but did the activities with both the smiling face of the bear and a tune when correct, and the sad face of the bear when incorrect. Group 4 also did not hear the story and did the activities with the smiling face of the bear and a tune when correct, but with nothing when they were wrong. Overall, children did significantly better if given the story about BJ before the activities. There was also a significant interaction between story condition, feedback for incorrect responses and sex, such that performance was better for all with-story groups except in the case of boys with no feedback.
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号