Similar Literature
20 similar documents found.
1.
As student evaluation of teaching (SET) instruments are increasingly administered online, research has found that response rates have dropped significantly. Validity concerns have necessitated research that explores student motivation for completing SETs. This study uses Vroom's [(1964). Work and motivation (3rd ed.). New York, NY: John Wiley & Sons] expectancy theory to frame student focus group responses regarding their motivations for completing and not completing paper and online SETs. Results show that students consider the following outcomes when deciding whether to complete SETs: (a) course improvement, (b) appropriate instructor tenure and promotion, (c) making accurate instructor ratings available to students, (d) spending a reasonable amount of time on SETs, (e) retaining anonymity, (f) avoiding social scrutiny, (g) earning points and releasing grades, and (h) being a good university citizen. Results show that the lower online response rate is largely due to students' differing feelings of obligation in the two formats. Students also noted that, in certain situations, they often answer SETs insincerely.

2.
This paper examines the stability and validity of a student evaluation of teaching (SET) instrument used by the administration at a university in the PR China. SET scores were collected for two semesters of courses taught by 435 teachers. In total, 388 teachers (170 males and 218 females) were also invited to fill out the 60-item NEO Five-Factor Inventory together with a demographic information questionnaire. The SET responses were found to have very high internal consistency, and confirmatory factor analysis supported a one-factor solution. The SET re-test correlations were .62 both for teachers who taught the same course (n = 234) and for those who taught a different course in the second semester (n = 201). Linguistics teachers received higher SET scores than social science, humanities, or science and technology teachers. Student ratings were significantly related to Neuroticism and Extraversion. Regression results showed that the Big Five personality traits as a group explained only 2.6% of the total variance of student ratings, whereas academic discipline explained 12.7%. Overall, the stability and validity of the SET were supported, and future uses of SET scores in the PR China are discussed.

3.
In the last 10–15 years, many institutions of higher education have switched from paper-and-pencil methods to online methods of administering student evaluations of teaching (SETs). One consequence has been a significant reduction in the response rates to such instruments. The current study was conducted to identify whether offering in-class time to students to complete online SETs would increase response rates. A quasi-experiment (nonequivalent group design) was conducted in which one group of tenured faculty instructed students to bring electronic devices with internet capabilities on a specified day and offered in-class time to students to complete online SETs. A communication protocol for faculty members' use was developed and implemented. A comparison group of tenured faculty who did not offer in-class time for SET completion was identified and the difference-in-differences method was used to compare the previous year's response rates for the same instructor teaching the same course across the two groups. Response rates were substantially higher when faculty provided in-class time to students to complete SETs. These results indicate that high response rates can be obtained for online SETs submitted by students in face-to-face classes if faculty communicate the importance of SETs in both their words and actions.
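The difference-in-differences method mentioned above reduces to a simple computation once the four group means are in hand. A minimal sketch in Python, using hypothetical response rates (illustrative values, not the study's data):

```python
# Difference-in-differences on SET response rates (hypothetical values).
# "Treatment" = sections whose instructors offered in-class time for online
# SET completion; "control" = sections whose instructors did not.

treat_before, treat_after = 0.45, 0.82  # treatment group: prior year vs. study year
ctrl_before, ctrl_after = 0.47, 0.50    # control group: prior year vs. study year

# DiD estimate: change in the treatment group minus change in the control
# group, which nets out trends common to both groups.
did = (treat_after - treat_before) - (ctrl_after - ctrl_before)
print(f"estimated effect of in-class time on response rate: {did:+.2f}")
# -> +0.34 with these illustrative numbers (a 34-percentage-point gain)
```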

4.
This paper provides new evidence on the disparity between student evaluation of teaching (SET) ratings when evaluations are conducted online versus in-class. Using a multiple regression analysis, we show that after controlling for many of the class and student characteristics not under the direct control of the instructor, average SET ratings from evaluations conducted online are significantly lower than average SET ratings from evaluations conducted in-class. Further, we demonstrate the importance of controlling for factors outside the instructor's control when using SET ratings to evaluate faculty performance in the classroom. We do not suggest that moving to online evaluation is overly problematic, only that it is difficult to compare evaluations done online with evaluations done in-class. While we do not suppose that one method is 'more accurate' than the other, we do believe that institutions would benefit from either moving all evaluations online or continuing to conduct all evaluations in-class.

5.

The student evaluation of teaching (SET) tool is widely used to measure student satisfaction in institutions of higher education. A SET typically includes several criteria, which are assigned equal weights. This research examines how students and lecturers perceive the various criteria on a SET, and how students actually behave (i.e. the ratings they give lecturers). To this end, an analytic hierarchy process methodology was used to capture the importance (weights) of SET criteria from the points of view of students and lecturers; the students' actual ratings on the SET were then analysed. Results revealed statistically significant differences in the weights of the SET criteria, and those weights differ between students and lecturers. Analysis of 1436 SET forms from the same population revealed that, although students typically rate instructors very similarly on all criteria, they rate instructors higher on the criteria that are more important to them. The practical implication of this research is a reduction in the number of criteria on SETs used for personnel decisions, while identifying for instructors and administrators those criteria that students perceive as more important.
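The analytic hierarchy process (AHP) derives criterion weights from pairwise importance judgements. Below is a minimal sketch using a hypothetical 3x3 comparison matrix and the geometric-mean approximation to the principal eigenvector; the criterion names and judgements are illustrative, since the paper's actual instrument is not reproduced in the abstract:

```python
import numpy as np

# Pairwise comparison matrix on Saaty's 1-9 scale (hypothetical judgements):
# A[i, j] = how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],   # e.g. "explains the material clearly"
    [1/3, 1.0, 2.0],   # e.g. "grades fairly"
    [1/5, 1/2, 1.0],   # e.g. "organises the course well"
])

# Geometric mean of each row, normalised to sum to 1, approximates the
# principal eigenvector of A, i.e. the criterion weights.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()
print(np.round(weights, 3))  # larger weight = criterion judged more important
```

Separate matrices elicited from students and from lecturers would yield the two weight vectors whose differences the paper reports.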

6.

Although part-time (p/t) faculty constitute a growing proportion of college instructors, there is little work on their teaching effectiveness relative to full-time (f/t) faculty. Previous work on faculty status (p/t vs. f/t) and a key indicator of perceived teaching effectiveness, student evaluation of teaching (SET), is marked by a series of shortcomings, including the lack of a systematic theoretical framework and the lack of multivariate statistical techniques to check for possible spuriousness. The present study corrects for these shortcomings. Data consist of SETs from 175 sections of criminal justice classes taught at a Midwestern urban university. Controls are introduced for variables drawn from the literature, including ascribed characteristics of the professor, grade distribution, and structural features of the course (e.g., level, size). The results of a multivariate regression analysis indicate that even after controlling for the other predictors of SETs, p/t faculty receive significantly higher student evaluation scores than f/t faculty. Further, faculty status was the most important predictor of SETs. The results present the first systematic evidence on faculty status and SETs.

7.
In the context of increased emphasis on quality assurance of teaching, it is crucial that student evaluation of teaching (SET) methods be both reliable and workable in practice. Online SETs in particular tend to draw criticism from those most reactive to mechanisms of teaching accountability. However, most studies of SET processes have been conducted with small, cross-sectional convenience samples. Longitudinal studies are rare, as comparison studies of SET methodological approaches are generally pilot studies followed shortly afterwards by implementation. The investigation presented here contributes significantly to the debate by examining the impact of the online administration method on a very large longitudinal sample at the course level rather than at the student level, thus compensating for the inter-dependency of students' responses within an instructor. It explores the impact of the administration method of SET (paper-based in-class vs. out-of-class online collection) on scores, with a longitudinal sample of over 63,000 student responses collected over a period of 10 years. Having adjusted for the confounding effects of class size, faculty, year of evaluation, years of teaching experience, and student performance, we observe that an effect of the administration method exists but is negligible.

8.
Previous studies concerning students' ratings of instruction have traditionally used the class as the unit of analysis and the ratings have been analyzed in one of two ways: (1) regression analysis, wherein the amount of variability in instructor ratings can be attributed to a set of variables; or (2) analysis of variance, wherein the effect of some selected independent variable on instructor ratings is measured. While both approaches have provided valuable information about the evaluation of instruction, little attention has been given to the interactions among the variables selected. In order to determine how situational variables influence the student at the time an evaluation is performed, the present study used the individual student as the unit of analysis and focused principally on the interactions between three variables related to the class (type, level, and size) and three variables related to the instructor (reputation, rank, and sex). The data were analyzed through 15 two-way factorial analyses of variance, with 23 main effects and 12 interactions reaching significance. The implications of these findings are discussed in terms of their effect on the student rating process.

9.
10.
Course evaluations (often termed student evaluations of teaching, or SETs) are pervasive in higher education. As SETs increasingly shift from pencil-and-paper to online, concerns grow over the lower response rates that typically accompany online SETs. This study of online SET response rates examined data from 678 faculty respondents and student response rates from an entire semester. The analysis focused on the tactics faculty employ to raise response rates for their courses, and explored instructor and course characteristics as contributing factors. A comprehensive regression model was evaluated to determine the most effective tactics and characteristics. Using incentives had the most impact on response rates. Other effective tactics include reminding students to take the evaluation, explaining how the evaluations will be used to improve instruction, sending personal emails, and posting reminders on Blackboard®. Incentives are not widely used; however, findings suggest that non-point incentives work as well as point-based ones, as do simple-to-administer class-wide minimum response-rate expectations (compared with individual completion requirements).

11.
The literature on student evaluations of teaching (SETs) generally presents two opposing camps: those who believe in the validity and usefulness of SETs, and those who do not. Some researchers have suggested that 'SET deniers' resist SETs because of their own poor SET results. To test this hypothesis, I analysed essays by 230 SET researchers (170 lead authors) and classified the researchers as having negative, neutral or positive attitudes towards SETs. I retrieved their RateMyProfessors.com (RMP) scores and, using logistic regression, found that lead authors with negative attitudes towards SETs were 14 times more likely to score below an estimated RMP average than lead authors with positive attitudes towards SETs. Co-authors and researchers with neutral attitudes, on the other hand, did not differ significantly from the RMP average. These results suggest that personal attitudes towards SETs may drive research findings.
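The reported '14 times more likely' is an odds ratio of the kind logistic regression estimates. On a 2x2 table it can be read off directly; the counts below are hypothetical, chosen only so the arithmetic reproduces an odds ratio of 14:

```python
# Hypothetical counts of lead authors (not the study's data):
#                          below RMP average   at/above RMP average
neg_below, neg_above = 28, 10   # negative attitude towards SETs
pos_below, pos_above = 4, 20    # positive attitude towards SETs

odds_neg = neg_below / neg_above   # odds of scoring below average, negative group
odds_pos = pos_below / pos_above   # same odds, positive group
print(f"odds ratio = {odds_neg / odds_pos:.1f}")  # -> 14.0 with these counts
```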

12.
One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations of teaching (SETs); however, this global correlation did not hold for individual teachers and courses. In fact, the correlations between GPAs and SETs varied widely across teachers (some were negative) and across courses.

13.
We proposed an extended form of the Govindarajulu and Barnett margin of error (MOE) equation and used it with an analysis of variance experimental design to examine the effects of aggregating student evaluations of teaching (SET) ratings on the MOE statistic. The interpretative validity of SET ratings can be questioned when the number of students enrolled in a course is low or when the response rate is low. A possible method of improving interpretative validity is to aggregate SET ratings data from two or more courses taught by the same instructor. Based on non-parametric comparisons of the generated MOE, we found that aggregating course evaluation data from two courses reduced the MOE in most cases. However, significant improvement was only achieved when combining course evaluation data for the same instructor for the same course. Significance did not hold when combining data from different courses. We discuss the implications of our findings and provide recommendations for practice.
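The abstract does not reproduce the extended Govindarajulu-Barnett equation, but a common MOE form for a class-mean rating applies a finite-population correction, which also makes clear why pooling respondents and enrolment across two offerings shrinks the MOE. A sketch under that assumption (the function and its numbers are illustrative):

```python
import math

def set_moe(s: float, n: int, N: int, z: float = 1.96) -> float:
    """Margin of error for a mean SET rating from n respondents out of N
    enrolled students, with a finite-population correction. This is a
    generic MOE form, not necessarily the paper's extended equation."""
    fpc = math.sqrt((N - n) / (N - 1)) if N > 1 else 0.0
    return z * (s / math.sqrt(n)) * fpc

# Aggregating two offerings pools respondents (n) and enrolment (N):
print(round(set_moe(s=0.9, n=8, N=25), 2))           # one small class
print(round(set_moe(s=0.9, n=8 + 9, N=25 + 27), 2))  # two pooled offerings: smaller MOE
```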

14.
The California Child Q-set (CCQ) was used to explore the structure of personality in early adolescence and to develop scales to measure the "Big Five" dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience. Mothers provided Q-sorts of 350 ethnically diverse boys between 12 and 13 years old. Analyses of the construct validity of the scales provided a nomological network relating the Big Five to theoretically and socially important criterion variables, such as juvenile delinquency, Externalizing and Internalizing disorders of childhood psychopathology, school performance, IQ, SES, and race. These effects were obtained using diverse methods, including self-reports from the boys, ratings by their mothers and their teachers, and objective-test data. In addition to the Big Five, analyses also suggested two possibly age-specific dimensions of personality in early adolescence. Discussion is focused on the changing manifestations of personality traits throughout development.

15.
Student evaluation of teaching (SET) ratings are used to evaluate faculty teaching effectiveness, based on a widespread belief that students learn more from highly rated professors. The key evidence cited in support of this belief is a set of meta-analyses of multisection studies showing small-to-moderate correlations between SET ratings and student achievement (e.g., Cohen, 1980, 1981; Feldman, 1989). We re-analyzed previously published meta-analyses of the multisection studies and found that their findings were an artifact of small-sample studies and publication bias. Whereas the small-sample studies showed large and moderate correlations, the large-sample studies showed minimal or no correlation between SET ratings and learning. Our up-to-date meta-analysis of all multisection studies revealed no significant correlations between SET ratings and learning. These findings suggest that institutions focused on student learning and career success may want to abandon SET ratings as a measure of faculty teaching effectiveness.
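The small-study artifact described here is easy to see by comparing an unweighted mean correlation with a sample-size-weighted one (averaging on Fisher's z scale, as meta-analyses do). The study-level values below are hypothetical:

```python
import numpy as np

r = np.array([0.45, 0.40, 0.05, 0.02])  # per-study SET/learning correlations
n = np.array([30, 40, 900, 1200])       # per-study sample sizes

z = np.arctanh(r)        # Fisher z-transform of each correlation
w = n - 3                # inverse-variance weights for z (var(z) ~ 1/(n-3))
r_weighted = np.tanh((w * z).sum() / w.sum())
print(f"unweighted mean r = {r.mean():.2f}, weighted r = {r_weighted:.2f}")
# Small studies inflate the unweighted mean; size-weighting all but removes it.
```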

16.

This study uses decision tree analysis to determine the most important variables that predict high overall teaching and course scores on a student evaluation of teaching (SET) instrument at a large public research university in the United States. Decision tree analysis is a more robust and intuitive approach for analysing and interpreting SET scores compared to more common parametric statistical approaches. Variables in this analysis included individual items on the SET instrument, self-reported student characteristics, course characteristics and instructor characteristics. The results show that items on the SET instrument that most directly address fundamental issues of teaching and learning, such as helping the student to better understand the course material, are most predictive of high overall teaching and course scores. SET items less directly related to student learning, such as those related to course grading policies, have little importance in predicting high overall teaching and course scores. Variables irrelevant to the construct, such as an instructor's gender and race/ethnicity, were not predictive of high overall teaching and course scores. These findings provide evidence of criterion and discriminant validity, and show that high SET scores do not reflect student biases against an instructor's gender or race/ethnicity.
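A minimal sketch of the decision-tree approach, on simulated data constructed so that a learning-related item drives the overall rating; the item names and data are hypothetical, not the paper's:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
understand = rng.integers(1, 6, n)         # "helped me understand the material" (1-5)
grading = rng.integers(1, 6, n)            # "grading policies were clear" (1-5)
instructor_female = rng.integers(0, 2, n)  # construct-irrelevant characteristic

# Simulate the paper's finding: the overall rating tracks the learning item.
high_overall = (understand + rng.normal(0, 1, n) >= 4).astype(int)

X = np.column_stack([understand, grading, instructor_female])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, high_overall)
for name, imp in zip(["understand", "grading", "instructor_female"],
                     tree.feature_importances_):
    print(f"{name:>18}: {imp:.2f}")  # the learning-related item dominates
```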

17.
Similarity of student ratings across instructors, courses, and time
This study raised three questions about the similarity, or generalizability, of student ratings of courses and instructors. First, how stable are student ratings of the same instructor giving the same course during two different semesters? Second, how similar are student ratings of the same instructor in two different courses? Third, how similar are student ratings of a given course taught by different instructors? Instances were identified in which student ratings on seven different factors were available for pairs of courses for each of these questions. For the case of the same instructor, same course, different semesters, student ratings were reasonably similar (median r across the seven factors about 0.70). For the case of the same instructor, different courses, the median r was surprisingly low, about 0.40. For the case of the same course, different instructors, substantial correlations were obtained for some factors and insignificant correlations for others. Implications of these findings for the practical use of student ratings and suggestions for further research in the area are discussed.

18.
Using multilevel models, this study examined the effects of student- and course-level variables on monotonic response patterns in student evaluation of teaching (SET). In total, 11,203 ratings from 343 general education courses at a Korean four-year private university in 2011 were analyzed. The results indicated that 96% of the variance in monotonic response patterns could be explained by student characteristics, such as gender, academic year, major, grade point average, SET score, and perceptions of course difficulty, while controlling for course-level variables. A further 4% of the variance in monotonic response patterns derived from course characteristics, including faculty age and class size, while controlling for student-level variables. The findings suggest that Korean higher education institutions need to take proper measures to encourage students to participate more actively and sincerely in SET so that the evaluation's outcomes can be put to best and proper use.
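The 96%/4% split is a variance partition across the two levels of a multilevel (mixed-effects) model. A sketch on simulated data, computing the course-level share of variance (the intraclass correlation); it assumes the statsmodels mixed-model API, with parameter values chosen only to mimic a split of this kind:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
course = np.repeat(np.arange(100), 30)           # 100 courses, 30 ratings each
course_effect = rng.normal(0, 0.2, 100)[course]  # small between-course variance
rating = 4.0 + course_effect + rng.normal(0, 1.0, course.size)
df = pd.DataFrame({"rating": rating, "course": course})

# Null (intercept-only) multilevel model with a random intercept per course.
m = smf.mixedlm("rating ~ 1", df, groups=df["course"]).fit()
var_between = float(m.cov_re.iloc[0, 0])  # course-level variance
var_within = m.scale                      # student-level (residual) variance
icc = var_between / (var_between + var_within)
print(f"course-level share of variance: {icc:.1%}")  # roughly 4% here
```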

19.
This paper examines the effects of two background variables on students' ratings of teaching effectiveness (SETs): class size and student motivation (proxied by students' likelihood of responding randomly). A resampling simulation methodology was employed to test the sensitivity of the SET scale for three hypothetical instructors (excellent, average, and poor). In an ideal scenario without confounding factors, SET statistics unmistakably distinguish the instructors. However, at different class sizes and levels of random responses, SET class averages are significantly biased. The results suggest that evaluations based on SET statistics should look at more than class averages. Resampling methodology (bootstrap simulation) is useful in SET research for scale sensitivity studies, validation of research results, and analyses of actual SET scores. Examples are given of how bootstrap simulation can be applied to real-life SET data comparisons.
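A minimal sketch of the bootstrap-resampling idea on a single hypothetical class, showing how small classes leave wide uncertainty around a class average:

```python
import numpy as np

rng = np.random.default_rng(42)
ratings = np.array([5, 4, 5, 3, 4, 5, 2, 4, 4, 5])  # one class, 1-5 scale (hypothetical)

# Resample the class with replacement many times and record each mean.
boot_means = np.array([
    rng.choice(ratings, size=ratings.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"class mean = {ratings.mean():.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
# The wide interval shows why class averages alone can mislead for small classes.
```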

20.
Institutions of higher education continue to migrate student evaluations of teaching (SET) from traditional, in-class paper forms to online SETs. Online SETs would compare favorably to paper-and-pencil evaluations were it not for widely reported decreases in response rates, which raise SET validity concerns stemming from possible nonresponse bias. To combat low response rates, one institution introduced a SET application for mobile devices and piloted formal synchronous classroom time for SET completion. This paper uses Leverage Salience Theory to estimate the impact of these SET process changes on overall response rates, open-ended question response rates, and open-ended response word counts. Synchronous class time best improves SET responses when faculty encourage completion on keyboarded devices and provide students with SET completion time in the first 15 minutes of a class meeting. Full administrative support requires sufficient wireless signal strength and IT infrastructure, and assurance of student access to devices, because responses cluster around class meeting times.
