Similar Documents

Twenty similar documents were found.
1.
Using nine years of student evaluation of teaching (SET) data from a large US research university, we examine whether changes to the SET instrument have a substantial impact on overall instructor scores. Our study exploits four distinct natural experiments that arose when the SET instrument was changed. To maximise power, we compare the same course/instructor before and after each of the four changes occurred. We find that switching from in-class, paper course evaluations to online evaluations generates an average change of −0.14 points on a five-point scale, or 0.25 standard deviations (SDs), in the overall instructor ratings. Changing the labelling of the scale and the wording of the overall instructor question generates a further decrease in the average rating: −0.15 of a point (0.27 SDs). In contrast, extending the evaluation period to include the final examination and offering an incentive (early grade release) for completing the evaluations do not have a statistically significant effect on the overall instructor rating. The cumulative impact of these individual changes is −0.29 points (0.52 SDs). This large decrease shows that SET scores are not comparable over time when instruments change. Therefore, administrators should measure and account for such changes when using historical benchmarks for evaluative purposes (e.g. appointments and compensation).
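To make the effect-size arithmetic in this abstract concrete, here is a minimal Python sketch of converting a raw before/after change in mean ratings into SD units. All numbers are invented for illustration, not the study's data.

```python
# Minimal sketch (not the authors' code): express a raw before/after change in
# mean instructor ratings on the SD scale, as the abstract does
# (-0.14 points ~ 0.25 SDs). Course means below are hypothetical.
import statistics

before = [4.3, 4.1, 4.5, 4.0, 4.4, 4.2]  # hypothetical course means, paper era
after = [4.1, 4.0, 4.3, 3.9, 4.2, 4.0]   # same courses, online era

raw_change = statistics.mean(after) - statistics.mean(before)
pooled_sd = statistics.pstdev(before + after)  # crude pooled spread
effect_sd = raw_change / pooled_sd

print(f"raw change: {raw_change:+.2f} points")
print(f"effect size: {effect_sd:+.2f} SDs")
```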

2.
The purpose of this study was to analyse students’ evaluations of the course and instructor for all statistics courses offered during the fall semester of 2009 at a large university in the southern United States. Data were collected and analysed for course evaluations administered both online and on paper to students in both undergraduate and graduate courses. Unlike most previous studies on this subject, the class section rather than the student was treated as the unit of analysis. It was of specific interest to verify prior research findings that evaluation surveys administered online do not result in lower course and instructor ratings or lower response rates. The results showed that there is not sufficient evidence within the collected data to conclude that either course and instructor ratings or response rates are lower for evaluations administered online (online evaluations) than for evaluations administered on paper (paper evaluations). Of secondary interest were whether class ratings were associated with students’ attendance and how answer variability compared between undergraduate and graduate students. It was observed that class and teacher ratings were not related to students’ attendance, and individual students did not tend to give the same answer for every question on their survey.

3.
In the context of increased emphasis on quality assurance of teaching, it is crucial that student evaluation of teaching (SET) methods be both reliable and workable in practice. Online SETs in particular tend to draw criticism from those most reactive to mechanisms of teaching accountability. However, most studies of SET processes have been conducted with small, cross-sectional convenience samples. Longitudinal studies are rare, as comparison studies of SET methodological approaches are generally pilot studies followed shortly afterwards by implementation. The investigation presented here contributes significantly to the debate by examining the impact of the online administration method of SET on a very large longitudinal sample at the course level rather than the student level, thus compensating for the inter-dependency of students’ responses according to the instructor variable. It explores the impact of the administration method of SET (paper-based in-class vs. out-of-class online collection) on scores, with a longitudinal sample of over 63,000 student responses collected over a total period of 10 years. Having adjusted for the confounding effects of class size, faculty, year of evaluation, years of teaching experience and student performance, we observe that an effect of the administration method does exist, but is negligible in magnitude.

4.
We proposed an extended form of the Govindarajulu and Barnett margin of error (MOE) equation and used it with an analysis of variance experimental design to examine the effects of aggregating student evaluations of teaching (SET) ratings on the MOE statistic. The interpretative validity of SET ratings can be questioned when the number of students enrolled in a course is low or when the response rate is low. A possible method of improving interpretative validity is to aggregate SET ratings data from two or more courses taught by the same instructor. Based on non-parametric comparisons of the generated MOE, we found that aggregating course evaluation data from two courses reduced the MOE in most cases. However, significant improvement was only achieved when combining course evaluation data for the same instructor teaching the same course. Significance did not hold when combining data from different courses. We discuss the implications of our findings and provide recommendations for practice.
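The margin-of-error logic in this abstract can be illustrated with the standard class-mean MOE under a finite-population correction. Whether this matches the authors' extended Govindarajulu–Barnett form is an assumption, and the numbers below are hypothetical.

```python
# Hedged sketch: margin of error (MOE) for a class-mean SET rating using the
# generic formula with a finite-population correction. The authors' extended
# equation may differ; all inputs here are invented.
import math

def moe(sd: float, n_respondents: int, n_enrolled: int, z: float = 1.96) -> float:
    """MOE = z * (s / sqrt(n)) * sqrt((N - n) / (N - 1))."""
    fpc = math.sqrt((n_enrolled - n_respondents) / (n_enrolled - 1))
    return z * (sd / math.sqrt(n_respondents)) * fpc

# One small section vs. two aggregated sections of the same instructor/course:
print(moe(sd=0.9, n_respondents=8, n_enrolled=15))    # single section
print(moe(sd=0.9, n_respondents=16, n_enrolled=30))   # two sections pooled
```

Running this shows the pooled MOE is noticeably smaller, which is the intuition behind aggregating ratings across sections.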

5.
This paper examines the effects of instructors’ attractiveness on student evaluations of their teaching. We build on previous studies by holding both observed and unobserved characteristics of the instructor and classes constant. Our identification strategy exploits the fact that many instructors, in addition to traditional teaching in the classroom, also teach in the online environment, where attractiveness is either unknown or less salient. We utilize multiple attractiveness measures, including facial symmetry software, subjective evaluations, and a novel proxy methodology that resembles a “Keynesian Beauty Contest.” We identify a substantial beauty premium in face-to-face classes for women but not for men. While gender on its own does not impact teaching evaluation scores, female instructors rated as more attractive receive higher instructional ratings. This result holds across several beauty measures, given a multitude of controls and while controlling for unobserved instructor characteristics and skills. Notably, the positive relationship between beauty and teaching effectiveness is not found in the online environment, suggesting the observed premium may be due to discrimination.

6.
Using multilevel models, this study examined the effects of student- and course-level variables on monotonic response patterns in student evaluation of teaching (SET). In total, 11,203 ratings from 343 general education courses at a Korean four-year private university in 2011 were analyzed. The results indicated that 96% of the variance in monotonic response patterns could be explained by student characteristics, such as gender, academic year, major, grade point average, SET score, and perceptions of course difficulty, while controlling for course-level variables. A further 4% of the variance in monotonic response patterns derived from course characteristics, including faculty age and class size, while controlling for student-level variables. The findings suggest that Korean higher education institutions need to take proper measures to encourage students to participate more actively and sincerely in SET so that the evaluation’s outcomes can be put to their best and proper use.
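As a rough illustration of the variance partitioning described above, here is a minimal sketch using statsmodels: fit an intercept-only two-level model (students nested in courses) and compute the intraclass correlation. The data are simulated and the authors' actual model specification may differ.

```python
# Sketch of the 96%/4% variance split: an intercept-only mixed model and the
# resulting ICC. Data are simulated stand-ins, not the study's ratings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
courses = np.repeat(np.arange(40), 30)                     # 40 courses x 30 students
course_effect = rng.normal(scale=0.2, size=40)[courses]    # small course-level variance
student_noise = rng.normal(scale=1.0, size=courses.size)   # large student-level variance
df = pd.DataFrame({"course_id": courses,
                   "monotonic": course_effect + student_noise})

null_model = smf.mixedlm("monotonic ~ 1", df, groups=df["course_id"]).fit()
between = null_model.cov_re.iloc[0, 0]  # course-level variance component
within = null_model.scale               # student-level (residual) variance
print(f"course-level share (ICC): {between / (between + within):.1%}")
```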

7.
The relation of student personality to student evaluations of teaching (SETs) was determined in a sample of 144 undergraduates. Students’ Big Five personality variables and core self-evaluation (CSE) were assessed. Students rated their most preferred instructor (MPI) and least preferred instructor (LPI) on 11 common evaluation items. Pearson and partial correlations, simultaneously controlling for six demographic variables, Extraversion, Conscientiousness and Openness, showed that SETs were positively related to Agreeableness and CSE and negatively related to Neuroticism, supporting the three hypotheses of the study. Each of these significant relations was maintained when MPI, LPI or a composite of MPI and LPI served as the SET criterion. For example, the MPI–LPI composite correlated .28 with Agreeableness, .35 with CSE and −.28 with Neuroticism. Similar correlations resulted for MPI and LPI. Hierarchical multiple regression demonstrated that CSE was an independent predictor of MPI ratings, Agreeableness was an independent predictor of LPI ratings, and both CSE and Agreeableness were independent predictors of MPI–LPI composite ratings. Neuroticism did not emerge as an independent predictor because of the substantial correlation between CSE and Neuroticism (r = .53) and because CSE had greater predictive capacity. This is the first study to incorporate the CSE construct into the SET literature.
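The partial-correlation logic here (correlating a personality trait with SET ratings while controlling for demographics and other traits) can be sketched by residualizing both variables on the controls and correlating the residuals. The data below are simulated, not the study's.

```python
# Illustrative partial correlation via residualization; simulated data only.
import numpy as np

rng = np.random.default_rng(0)
n = 144
controls = rng.normal(size=(n, 6))                # stand-in for 6 control variables
agreeableness = controls @ rng.normal(size=6) + rng.normal(size=n)
set_rating = 0.3 * agreeableness + rng.normal(size=n)

def residualize(y, X):
    """Residuals of y after OLS regression on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

r_partial = np.corrcoef(residualize(agreeableness, controls),
                        residualize(set_rating, controls))[0, 1]
print(f"partial r = {r_partial:.2f}")
```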

8.
Course evaluations (often termed student evaluations of teaching, or SETs) are pervasive in higher education. As SETs increasingly shift from pencil-and-paper to online administration, concerns grow over the lower response rates that typically accompany online SETs. This study of online SET response rates examined data from 678 faculty respondents and student response rates from an entire semester. The analysis focused on the tactics that faculty employ to raise response rates for their courses, and explored instructor and course characteristics as contributing factors. A comprehensive regression model was evaluated to determine the most effective tactics and characteristics. Using incentives had the greatest impact on response rates. Other effective tactics that increase response rates include reminding students to take the evaluation, explaining how the evaluations will be used to improve instruction, sending personal emails and posting reminders on Blackboard®. Incentives are not widely used; however, the findings suggest that non-point incentives work as well as point-based ones, and that simple-to-administer class-wide minimum response-rate expectations work as well as individual completion requirements.

9.
In recent years many universities switched from paper- to online-based student evaluation of teaching (SET) without knowing the consequences for data quality. Based on a series of three consecutive field experiments—a split-half design, twin courses, and pre–post-measurements—this paper examines the effects of survey mode on SET. First, all three studies reveal marked differences in non-response between online- and paper-based SET and systematic, but small differences in the overall course ratings. On average, online SET reveal a slightly less optimistic picture of teaching quality in students’ perception. Similarly, a web survey mode does not impair the reliability of student ratings. Second, we highlight the importance of taking selection and class absenteeism into account when studying survey mode effects and also show that it is necessary and informative to survey the subgroup of no-shows when evaluating teaching. Third, we empirically demonstrate the need to account for contextual setting of the survey (in class vs. after class) and the specific type of the online survey mode (TAN vs. email). Previous research either confounded contextual setting with variation in survey mode or generalized results for a specific online mode to web surveys in general. Our findings suggest that higher response rates in email surveys can be achieved if students are given the opportunity and time to evaluate directly in class.

10.
This study uses decision tree analysis to determine the most important variables that predict high overall teaching and course scores on a student evaluation of teaching (SET) instrument at a large public research university in the United States. Decision tree analysis is a more robust and intuitive approach for analysing and interpreting SET scores compared to more common parametric statistical approaches. Variables in this analysis included individual items on the SET instrument, self-reported student characteristics, course characteristics and instructor characteristics. The results show that items on the SET instrument that most directly address fundamental issues of teaching and learning, such as helping the student to better understand the course material, are most predictive of high overall teaching and course scores. SET items less directly related to student learning, such as those related to course grading policies, have little importance in predicting high overall teaching and course scores. Variables irrelevant to the construct, such as an instructor’s gender and race/ethnicity, were not predictive of high overall teaching and course scores. These findings provide evidence of criterion and discriminant validity, and show that high SET scores do not reflect student biases against an instructor’s gender or race/ethnicity.
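A minimal sketch of the decision-tree approach named in this abstract, assuming scikit-learn; the feature names, the threshold for a "high" score, and the simulated data are invented, not the study's instrument.

```python
# Fit a tree predicting a high overall score and inspect feature importances.
# Features and outcome are simulated stand-ins for SET items and characteristics.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "understood_material": rng.integers(1, 6, n),   # 1-5 SET item
    "grading_policy_clear": rng.integers(1, 6, n),  # 1-5 SET item
    "instructor_female": rng.integers(0, 2, n),     # construct-irrelevant variable
    "class_size": rng.integers(10, 300, n),
})
# Simulated outcome driven mainly by the learning-related item.
high_overall = (df["understood_material"] + rng.normal(0, 1, n) >= 4).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(df, high_overall)
for name, imp in sorted(zip(df.columns, tree.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:22s} {imp:.3f}")
```

In this simulated setup the learning-related item dominates the importance ranking, mirroring the pattern the study reports.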

11.
Although student evaluation of instruction has been shown to produce reliable results over class averages, considerable within-class variability exists that has not been investigated. This study looked at examples of student evaluations in which students diametrically differed in their evaluations of the same instructor. Patterns were noted. A survey was conducted to determine the self-reported incidence of students changing evaluations based on noninstructional factors. About 60% of students admitted that they had changed evaluations for reasons related to the instructor's personality, while 30% admitted to changing evaluations based on grades and rigor. Possible reasons for and effects of such changes are discussed.

12.
The literature contains indications of a bias in student evaluations of teaching (SET) against online instruction compared to face-to-face instruction. The present case study consists of content analysis of anonymous student responses to open-ended SET questions submitted by 534 students enrolled in 82 class sections taught by 41 instructors, one online and one face-to-face class section for each instructor. There was no significant difference in the proportion of appraisal text segments by delivery method, suggesting no delivery method bias existed. However, there were significant differences in the proportion of text segments for topical themes and topical categories by delivery method. Implications of the findings for research and practice are presented.

13.
This study advances two contributions to the study of student evaluations of teaching: (a) a multilevel conceptualization that allows for the simultaneous analysis of individual- and class-level correlates of evaluations and (b) an application of recent social/organizational psychology theory and research on fairness. Thus, this study examined the relative influence of individual- and class-level perceptions of fairness and expected grades on students’ satisfaction with their instructors and with their grades. Multilevel regression showed that, at the individual level, grade satisfaction was significantly related to perceived fairness of the instructor’s grading procedures, the perceived fairness of the expected grades, and the expected grades themselves; instructor satisfaction was significantly related to perceptions of the fairness of grading procedures, the fairness of instructor–student interactions, and the fairness of the expected grades. At the class level, instructor satisfaction was significantly influenced by the average perception of the fairness of interactions. The implications for research on student ratings are discussed.
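The multilevel setup described here, with individual fairness perceptions plus their class-level means predicting satisfaction, might look like the following statsmodels sketch. Variable names and the simulated data are illustrative assumptions, not the study's materials.

```python
# Students nested in classes; the class mean of interactional fairness serves
# as the class-level predictor. Simulated data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
cls = np.repeat(np.arange(30), 25)  # 30 classes x 25 students
df = pd.DataFrame({
    "class_id": cls,
    "procedure_fairness": rng.normal(3.5, 0.8, cls.size),
    "interaction_fairness": rng.normal(3.7, 0.8, cls.size),
    "expected_grade": rng.normal(3.0, 0.7, cls.size),
})
df["class_interaction_fairness"] = (
    df.groupby("class_id")["interaction_fairness"].transform("mean"))
df["instructor_satisfaction"] = (0.4 * df["procedure_fairness"]
                                 + 0.3 * df["interaction_fairness"]
                                 + 0.5 * df["class_interaction_fairness"]
                                 + rng.normal(0, 0.8, cls.size))

model = smf.mixedlm("instructor_satisfaction ~ procedure_fairness"
                    " + interaction_fairness + expected_grade"
                    " + class_interaction_fairness",
                    df, groups=df["class_id"]).fit()
print(model.summary())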

14.
This study compares student evaluations of faculty teaching completed in class with those collected online. The two methods of evaluation were compared on response rates and on evaluation scores. In addition, this study investigates whether treatments or incentives can affect the response to online evaluations. It was found that the response rate to the online survey was generally lower than that to the in-class survey. When a grade incentive was used to encourage responses to the online survey, the response rate became comparable to that of the in-class survey. Additionally, the study found that online evaluations do not produce significantly different mean evaluation scores from traditional in-class evaluations, even when different incentives are offered to students who are asked to complete online evaluations.

15.
As student evaluation of teaching (SET) instruments are increasingly administered online, research has found that response rates have dropped significantly. Validity concerns have necessitated research that explores student motivation for completing SETs. This study uses Vroom's (1964) expectancy theory [Work and motivation (3rd ed.). New York, NY: John Wiley & Sons] to frame student focus group responses regarding their motivations for completing and not completing paper and online SETs. Results show that students consider the following outcomes when deciding whether to complete SETs: (a) improving the course, (b) supporting appropriate instructor tenure and promotion decisions, (c) making accurate instructor ratings available to students, (d) spending a reasonable amount of time on SETs, (e) retaining anonymity, (f) avoiding social scrutiny, (g) earning points and releasing grades, and (h) being a good university citizen. Results show that the lower online response rate is largely due to students’ differing feelings of obligation in the two formats. Students also noted that, in certain situations, they answer SETs insincerely.

16.
Based on student evaluation of teaching (SET) ratings from 1,432 units of study over a period of a year, representing 74,490 individual sets of ratings, and including a significant number of units offered in wholly online mode, we confirm the significant influence of class size, year level, and discipline area on at least some SET ratings. We also find the online mode of offer to significantly influence at least some SET ratings. We reveal both the statistical significance and the effect sizes of these influences, and find that the magnitudes of the effect sizes of all factors are small, but potentially cumulative. We also show that the influence of online mode of offer is of the same magnitude as that of the other three factors. These results support and extend the rating interpretation guides (RIGs) model proposed by Neumann and colleagues, and we present a general method for the development of a RIGs system.

17.
In recent years, colleges have been moving from traditional, classroom-based student evaluations of instruction to online evaluations. Because of the importance of these evaluations in decisions regarding retention, promotion and tenure, instructors are justifiably concerned about how this trend might affect their ratings. We recruited faculty members who were teaching two or more sections of the same course in a single semester and assigned at least one section to receive online evaluations and the other section(s) to receive classroom evaluations. We hypothesised that the online evaluations would yield a lower response rate than the classroom administration. We also predicted that there would be no significant differences in the overall ratings, the number of written comments, and the valence (positive/neutral/negative) of students’ comments. A total of 32 instructors participated in the study over two semesters, providing evaluation data from 2057 students. As expected, online evaluations had a significantly lower response rate than classroom evaluations. Additionally, there were no differences in the mean ratings, the percentage of students who provided written comments or the proportion of comments in the three valence categories. Thus, even with the lower response rate for online evaluations, the two administration formats seemed to produce comparable data.
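One simple way to test the "no difference in mean ratings" claim for matched online/classroom sections is a paired t-test; whether the authors used this exact test is an assumption, and the data below are invented.

```python
# Paired comparison of section-mean ratings for the same instructors taught in
# two formats. A large p-value is consistent with "no difference". Toy data.
from scipy import stats

online_means = [4.1, 3.8, 4.4, 4.0, 4.2, 3.9]     # per-instructor online section
classroom_means = [4.2, 3.7, 4.5, 4.0, 4.1, 4.0]  # same instructors, in class

t, p = stats.ttest_rel(online_means, classroom_means)
print(f"t = {t:.2f}, p = {p:.3f}")
```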

18.
The present article examined the validity of public web-based teaching evaluations by comparing the ratings on RateMyProfessors.com for 126 professors at Lander University to the institutionally administered student evaluations of teaching and actual average assigned GPAs for these same professors. Easiness website ratings were significantly positively correlated with actual assigned grades. Further, clarity and helpfulness website ratings were significantly positively correlated with student ratings of overall instructor excellence and overall course excellence on the institutionally administered IDEA forms. The results of this study offer preliminary support for the validity of the evaluations on RateMyProfessors.com.

19.
In recent years, there have been increasing calls from the government and other organizations to provide easy public access to student evaluations of teaching. Indeed, the increasing ease of displaying and viewing large quantities of information, and competition among universities and majors for students, make it likely that an era of greater transparency for this type of information is at hand. While the student evaluation of teaching (SET) is one quantitative metric that rates the instructor, it may be influenced by factors that are often beyond the instructor's control. In this study, we analyze a longitudinal data set from both the engineering and business schools of a large public university, and identify factors that influence SET. We show which factors have the greatest influence on overall SET scores, and contrast these between the engineering and business colleges. Colleges within the same university may differ in the factors affecting SET, and recognition of this is important in evaluating SET scores effectively and fairly. We also provide recommendations regarding information that should be displayed along with the SET, particularly when SET scores are made public, so that instructors are not unduly penalized when their evaluations can be influenced by factors over which they have no control.

20.
In the last 10–15 years, many institutions of higher education have switched from paper-and-pencil methods to online methods of administering student evaluations of teaching (SETs). One consequence has been a significant reduction in the response rates to such instruments. The current study was conducted to identify whether offering in-class time to students to complete online SETs would increase response rates. A quasi-experiment (nonequivalent group design) was conducted in which one group of tenured faculty instructed students to bring electronic devices with internet capabilities on a specified day and offered in-class time to students to complete online SETs. A communication protocol for faculty members’ use was developed and implemented. A comparison group of tenured faculty who did not offer in-class time for SET completion was identified and the difference-in-differences method was used to compare the previous year’s response rates for the same instructor teaching the same course across the two groups. Response rates were substantially higher when faculty provided in-class time to students to complete SETs. These results indicate that high response rates can be obtained for online SETs submitted by students in face-to-face classes if faculty communicate the importance of SETs in both their words and actions.
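The difference-in-differences method named in this abstract reduces to simple arithmetic: the year-over-year change in the treated group minus the same change in the comparison group. The response rates below are made up for illustration.

```python
# Difference-in-differences on response rates; all numbers are hypothetical.
prev_treated, curr_treated = 0.45, 0.78   # in-class-time group, prior vs. study year
prev_control, curr_control = 0.47, 0.50   # comparison group, same years

did = (curr_treated - prev_treated) - (curr_control - prev_control)
print(f"difference-in-differences estimate: {did:+.2f}")  # +0.30 here
```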
