Similar Documents
20 results
1.
Abstract

Student evaluations of teaching and courses (SETs) are part of the fabric of tertiary education, and the quantitative ratings derived from SETs are highly valued by tertiary institutions. However, many staff do not engage meaningfully with SETs, especially if the process of analysing student feedback is cumbersome or time-consuming. To address this issue, we describe a proof-of-concept study that automates aspects of analysing students' free-text responses to evaluation questions. Using Quantext text analysis software, we summarise and categorise students' free-text responses to two questions posed as part of a larger research project exploring student perceptions of SETs. We compare human analysis of the responses with automated methods and identify some key reasons why students do not complete SETs. We conclude that the text analytic tools in Quantext can play an important role in helping teaching staff analyse and interpret SETs rigorously, and that keeping teachers and students at the centre of the evaluation process is key.
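The kind of automated categorisation described here can be illustrated with a small keyword-matching sketch. This is not the Quantext API; the categories, keywords and sample responses below are hypothetical, chosen only to show how free-text SET responses might be bucketed into reasons for non-completion.

```python
# Minimal sketch (not the Quantext API): keyword-based categorisation of
# free-text SET responses into hypothetical reasons for non-completion.
from collections import Counter

# Hypothetical categories and trigger keywords, chosen for illustration only.
CATEGORIES = {
    "time_pressure": ["time", "busy", "long", "too many"],
    "perceived_impact": ["pointless", "nothing changes", "ignored", "no effect"],
    "anonymity_concerns": ["anonymous", "identified", "traced"],
}

def categorise(response: str) -> list[str]:
    """Return every category whose keywords appear in a response."""
    text = response.lower()
    return [cat for cat, keywords in CATEGORIES.items()
            if any(kw in text for kw in keywords)] or ["uncategorised"]

responses = [
    "The surveys take too long and I'm too busy at exam time.",
    "Nothing changes anyway, so it feels pointless.",
]
counts = Counter(cat for r in responses for cat in categorise(r))
print(counts)  # e.g. Counter({'time_pressure': 1, 'perceived_impact': 1})
```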

2.
The purpose of the current study was to examine whether the Big Five personality traits and expected student grades relate to student evaluations of teachers and courses at the college level. Extraversion, openness, agreeableness and conscientiousness were found to be personality traits favoured in instructors, whereas neuroticism was not. A significant correlation was found between the students’ expected grades in the course and student evaluations of the course, but not the evaluations of the instructor. When the effect of students’ perceived amount of learning was taken into account, no significant effect of grades was found on teacher ratings. Personality explained variance in teacher and course evaluations over and above grades and perceived learning.

3.
Student evaluations of teaching (SETs) are widely used to measure teaching quality in higher education and to compare it across courses, teachers, departments and institutions. Indeed, SETs carry increasing weight in teacher promotion decisions, student course selection, and auditing practices that demonstrate institutional performance. However, survey response rates are typically low, rendering these uses unwarranted if the students who respond are not randomly selected along observed and unobserved dimensions. This paper is the first to fully quantify this problem by analyzing the direction and size of the selection bias arising from both observed and unobserved characteristics for over 3000 courses taught in a large European university. We find that course evaluations are upward biased, and that correcting for selection bias has non-negligible effects on the average evaluation score and on the evaluation-based ranking of courses. Moreover, this bias derives mostly from selection on unobserved characteristics, implying that correcting evaluation scores for observed factors such as student grades does not solve the problem. However, adjusting for selection has only small impacts on the measured effects of observables on SETs, validating a large related literature that considers the observable determinants of evaluation scores without correcting for selection bias.
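The observed-characteristics part of such a correction can be sketched with inverse-probability weighting: estimate each student's probability of responding from observed covariates and reweight respondents' evaluations by its inverse. This is only an illustration of selection on observables (the abstract's point is that unobserved selection remains), and the column names and toy data are hypothetical.

```python
# Illustrative sketch only: reweight respondents' evaluations by the inverse of
# an estimated response probability based on *observed* covariates. It does not
# reproduce the paper's correction for unobserved selection. Column names and
# the toy data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_adjusted_mean(course: pd.DataFrame) -> float:
    """Inverse-probability-weighted mean evaluation for one course."""
    X = course[["grade", "attendance"]].to_numpy()
    responded = course["responded"].to_numpy()
    p_respond = LogisticRegression().fit(X, responded).predict_proba(X)[:, 1]
    weights = 1.0 / p_respond[responded == 1]          # upweight unlikely responders
    ratings = course.loc[responded == 1, "evaluation"]
    return float(np.average(ratings, weights=weights))

course = pd.DataFrame({
    "responded":  [1, 1, 1, 0, 0, 1, 0, 1],
    "grade":      [3.6, 3.2, 3.9, 2.4, 2.1, 3.0, 2.7, 3.4],
    "attendance": [0.9, 0.8, 0.95, 0.4, 0.5, 0.7, 0.6, 0.85],
    "evaluation": [4.5, 4.2, 4.8, np.nan, np.nan, 3.9, np.nan, 4.4],
})
print(ipw_adjusted_mean(course))   # typically lower than the raw respondent mean
```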

4.
Course evaluations (often termed student evaluations of teaching, or SETs) are pervasive in higher education. As SETs increasingly shift from pencil-and-paper to online administration, concerns grow over the lower response rates that typically accompany online SETs. This study of online SET response rates examined data from 678 faculty respondents, together with student response rates from an entire semester. The analysis focused on the tactics faculty employ to raise response rates for their courses, and explored instructor and course characteristics as contributing factors. A comprehensive regression model was evaluated to determine the most effective tactics and characteristics. Using incentives had the greatest impact on response rates. Other tactics that increase response rates include reminding students to take the evaluation, explaining how the evaluations would be used to improve instruction, sending personal emails and posting reminders on Blackboard®. Incentives are not widely used; however, the findings suggest that non-point incentives work as well as point-based ones, as do simple-to-administer class-wide minimum response rate expectations (compared with rewarding individual completion).

5.
The relation of student personality to student evaluations of teaching (SETs) was examined in a sample of 144 undergraduates. Students' Big Five personality variables and core self-evaluation (CSE) were assessed. Students rated their most preferred instructor (MPI) and least preferred instructor (LPI) on 11 common evaluation items. Pearson correlations, and partial correlations simultaneously controlling for six demographic variables plus Extraversion, Conscientiousness and Openness, showed that SETs were positively related to Agreeableness and CSE and negatively related to Neuroticism, supporting the three hypotheses of the study. Each of these significant relations was maintained when the MPI, the LPI or a composite of MPI and LPI served as the SET criterion. For example, the MPI-LPI composite correlated .28 with Agreeableness, .35 with CSE and -.28 with Neuroticism. Similar correlations resulted for MPI and LPI. Hierarchical multiple regression demonstrated that CSE was an independent predictor of MPI ratings, Agreeableness was an independent predictor of LPI ratings, and both CSE and Agreeableness were independent predictors of MPI-LPI composite ratings. Neuroticism did not emerge as an independent predictor because of its substantial correlation with CSE (r = .53) and because CSE had greater predictive capacity. This is the first study to incorporate the CSE construct into the SET literature.
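Partial correlations of the kind reported here can be computed by regressing both the SET criterion and a personality score on the control variables and correlating the residuals. The sketch below uses simulated arrays as hypothetical stand-ins for the study's variables.

```python
# Minimal sketch of a partial correlation: correlate the residuals of the SET
# criterion and a personality score after regressing both on the controls.
# The simulated arrays are hypothetical stand-ins, not the study's data.
import numpy as np

def partial_corr(x: np.ndarray, y: np.ndarray, controls: np.ndarray) -> float:
    """Partial correlation of x and y, controlling for the columns of `controls`."""
    Z = np.column_stack([np.ones(len(x)), controls])          # add an intercept
    resid_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    resid_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(resid_x, resid_y)[0, 1])

rng = np.random.default_rng(0)
controls = rng.normal(size=(144, 9))       # e.g. 6 demographics + 3 other traits
agreeableness = rng.normal(size=144)
set_rating = 0.3 * agreeableness + rng.normal(size=144)
print(partial_corr(set_rating, agreeableness, controls))
```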

6.
Several student and course characteristics were examined in relation to student ratings of instruction. Students at a major Canadian university completed the Universal Student Ratings of Instruction instrument at the end of every course over a three-year period, providing 371,131 student ratings. Analyses of between-group differences indicate that students who attend class often and expect high grades give higher ratings to their instructors (p < .001). In addition, lab-type courses receive higher ratings than lectures or tutorials, and courses in the social sciences receive higher ratings than courses in the natural sciences (p < .001). Regression analyses indicated, however, that student and course characteristics explain little of the variance in student ratings of their instructors (<7%). It is concluded that student ratings are related more to the instructor's teaching and behavior than to these variables.

7.
This paper examines the stability and validity of a student evaluations of teaching (SET) instrument used by the administration at a university in the PR China. SET scores for two semesters of courses taught by 435 teachers were collected. A total of 388 teachers (170 males and 218 females) were also invited to fill out the 60-item NEO Five-Factor Inventory together with a demographic questionnaire. The SET responses had very high internal consistency, and confirmatory factor analysis supported a one-factor solution. The SET re-test correlation was .62 both for teachers who taught the same course (n = 234) and for those who taught a different course in the second semester (n = 201). Linguistics teachers received higher SET scores than social science, humanities, or science and technology teachers. Student ratings were significantly related to Neuroticism and Extraversion. Regression results showed that the Big Five personality traits as a group explained only 2.6% of the total variance in student ratings, and academic discipline explained 12.7%. Overall, the stability and validity of the SET were supported, and future uses of SET scores in the PR China are discussed.

8.
Student knowledgeability, class size, and class level were found to significantly influence students' ratings of instruction. In general, the more knowledgeable the student in an area, the higher his ratings of courses and instructors in that area. Also, large courses and advanced courses were most highly rated by students. The effect of student sex on student ratings of instruction varied as a function of the particular aspect of instruction being evaluated. Significant interactions among the four main effects were also found across the judgmental dimensions students utilized in evaluating instruction, as assessed by factor analysis. Student knowledgeability and class size were found to be the main predictors of student ratings on these dimensions.

9.
As student evaluation of teaching (SET) instruments are increasingly administered online, research has found that response rates have dropped significantly. Validity concerns have prompted research into student motivation for completing SETs. This study uses Vroom's (1964) expectancy theory [Work and motivation (3rd ed.). New York, NY: John Wiley & Sons] to frame student focus-group responses regarding their motivations for completing, and not completing, paper and online SETs. Results show that students consider the following outcomes when deciding whether to complete SETs: (a) improving the course, (b) supporting appropriate instructor tenure and promotion, (c) making accurate instructor ratings available to students, (d) spending a reasonable amount of time on SETs, (e) retaining anonymity, (f) avoiding social scrutiny, (g) earning points and releasing grades, and (h) being a good university citizen. Results also show that the lower online response rate is largely due to students' differing feelings of obligation in the two formats. Students further noted that, in certain situations, they often answer SETs insincerely.

10.
11.
Abstract

Evaluation of college instructors often centers on course ratings; however, there is little evidence that these ratings reflect only teaching. The purpose of this study was to assess the relative importance of three facets of course ratings: instructor, course and occasion. We sampled 2,459 fully-crossed dyads from a large university in which two instructors taught the same two courses at least twice in a 3-year period. Generalizability theory was used to estimate unconfounded variance components for instructor, course and occasion, as well as their interactions, and meta-analysis was used to summarize those estimates. Results indicated that the three-way interaction between instructor, course and occasion, which includes measurement error, accounted for the most variance in student ratings (24%), with instructor accounting for the second-largest amount (22%). While the instructor, and presumably teaching, accounted for substantial variance in student course ratings, factors other than instructor quality had a larger influence on student ratings.

12.
Multilevel SEM was used to examine the extent to which student, instructor, and course characteristics affect student ratings. Data were gathered from 1867 students enrolled in 117 courses at a large teacher training college in Israel. Four alternative two-level models, differing only in the nature of the relationship among interest in the course subject, expected grade, and student ratings, were tested. Two of the models were judged less appropriate, one because it failed to support the spurious relationship it assumed between expected grade and student ratings, and the other on grounds of poor model-data fit. The remaining two models were equally good, both in terms of model-data fit and in the amount of variance in student ratings each accounts for. Both models supported the mediating effect of expected grade in the relationship between interest in the course subject and student ratings.

13.
Abstract

The authors compared the average grades given in 165 behavioral and social science courses with the average ratings given by students to the instructors who taught the courses. Significant positive correlations were found between the average ratings for instructional quality and the average grades received by students. The courses in which the average grades were the highest were also those in which students gave teachers the highest ratings. Among possible reasons for the correlations are that better teachers attracted better students or that quality teachers provided more effective instruction, resulting in more student learning and, thus, higher average grades. Another explanation is that most college students tend to bias their ratings of instructional quality in favor of teachers who grade leniently (I. Neath, 1996). If correct, the latter reasoning begins to explain why the widespread use of student evaluations in the United States in recent decades has been accompanied by increases in the average grades that university students received. To prevent grade inflation, and particularly to avoid rewarding and promoting instructors who use increasingly lax grading standards, administrators should adjust student ratings of instructional quality for the average grades given for a course. In general, only courses near the extremely high and low ends in terms of students' average grades were significantly affected by the statistical adjustment.
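One simple way to implement an adjustment of this kind is to regress course-mean ratings on course-mean grades and report each course's residual, re-centred on the grand mean, as its grade-adjusted rating. The figures below are hypothetical; this is a sketch of the idea, not the statistical adjustment the authors used.

```python
# Hypothetical sketch: adjust course-mean SET ratings for course-mean grades by
# residualising the ratings on the grades and re-centring on the grand mean.
import numpy as np

mean_grade  = np.array([2.6, 2.9, 3.1, 3.4, 3.8])   # per-course average grade
mean_rating = np.array([3.8, 3.9, 4.0, 4.3, 4.7])   # per-course average SET rating

slope, intercept = np.polyfit(mean_grade, mean_rating, deg=1)   # fit rating on grade
predicted = intercept + slope * mean_grade
adjusted = mean_rating - predicted + mean_rating.mean()         # residual + grand mean

for g, raw, adj in zip(mean_grade, mean_rating, adjusted):
    print(f"avg grade {g:.1f}: raw rating {raw:.2f} -> adjusted {adj:.2f}")
```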

14.
This study examines intraclass differences between online and traditional student end-of-course critiques (EOCCs) in the context of resident courses. Previous research into these differences appears limited to studies comparing entire classes rather than groups within a given class. The hypotheses were that the administration method affects how students rate EOCCs, EOCC response rates, and the level of detail, favourability and number of EOCC comments. In this field experiment, the sample comprised students in resident business courses at a large university. Individuals within each class were randomly assigned either to a control group that completed a traditional paper-based EOCC or to an experimental group that completed a similar EOCC online. Comments were coded for level of detail, favourability and total number. Analysis of variance was used to compare the two groups on ratings, comments and response rates. Online EOCCs had lower response rates and lower overall ratings, but more detailed comments. The administration method had no significant effect on the number or favourability of comments.
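The group comparison described here can be run as a one-way analysis of variance on the overall ratings from the two administration conditions (with two groups this is equivalent to a t-test). The ratings below are hypothetical.

```python
# Hypothetical sketch of the ANOVA comparison of paper-based vs online EOCC
# ratings; with only two groups a one-way ANOVA is equivalent to a t-test.
from scipy import stats

paper_ratings  = [4.5, 4.2, 4.8, 4.6, 4.4, 4.7]
online_ratings = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]

f_stat, p_value = stats.f_oneway(paper_ratings, online_ratings)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```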

15.
SELF-RATINGS OF COLLEGE TEACHERS: A COMPARISON WITH STUDENT RATINGS
College teachers' self-ratings were investigated in this study by comparing them to ratings given by students. The sample consisted of 343 teaching faculty from five colleges; these teachers, as well as the students in one of their classes, responded to a 21-item instructional report questionnaire. Teacher self-ratings had only a modest relationship with the ratings given by students (a median correlation of .21 for the items). In addition to the general lack of agreement between self and student evaluations, there was also a tendency for teachers as a group to give themselves better ratings than their students did.
Discrepancies between individual teacher ratings and ratings given by the class were further analyzed for: (a) sex of the teacher (no difference found); (b) number of years of teaching experience (no difference); and (c) subject area of the course (differences noted for natural science courses vs. those in education and applied areas).
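A self-versus-student comparison of this kind can be sketched by correlating teacher self-ratings with class-mean student ratings item by item and taking the median across items. The matrices below are simulated stand-ins, not the study's data.

```python
# Hypothetical sketch: per-item correlations between teacher self-ratings and
# class-mean student ratings, summarised by the median across the 21 items.
import numpy as np

rng = np.random.default_rng(1)
n_teachers, n_items = 343, 21
self_ratings  = rng.normal(3.8, 0.5, size=(n_teachers, n_items))   # teacher self-report
student_means = 0.2 * self_ratings + rng.normal(3.0, 0.5, size=(n_teachers, n_items))

item_corrs = [np.corrcoef(self_ratings[:, j], student_means[:, j])[0, 1]
              for j in range(n_items)]
print(f"median item-level correlation: {np.median(item_corrs):.2f}")
```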

16.
Relating students' evaluations of teaching (SETs) to student learning as an approach to validating SETs has produced inconsistent results. The present study tested the hypothesis that the strength of the association between SETs and student learning varies with the criteria used to indicate student learning. A multisection validity approach was employed to investigate the association of SETs with two different criteria of student learning, a multiple-choice test and a practical examination. Participants were N = 883 medical students, enrolled in k = 32 sections of the same course. As expected, results showed a strong positive association between SETs and the practical examination but no significant correlation between SETs and multiple-choice test scores. Furthermore, students' subjective perception of learning correlated significantly with the practical examination score, whereas no relation was found between subjective learning and the multiple-choice test. It is discussed whether these results might be due to different measures of student learning varying in the degree to which they reflect teaching effectiveness.
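A multisection design aggregates both the SETs and the learning criteria to the section level before correlating them. The sketch below illustrates that aggregation step; the column names and the handful of rows are hypothetical.

```python
# Hypothetical sketch of the multisection validity approach: aggregate SETs and
# exam scores to the section level, then correlate section means per criterion.
import pandas as pd

# one row per student: section id, SET rating, and two learning criteria
df = pd.DataFrame({
    "section":   [1, 1, 2, 2, 3, 3],
    "set_score": [4.2, 4.0, 3.1, 3.3, 4.8, 4.6],
    "mcq_score": [71, 68, 74, 70, 69, 72],
    "practical": [82, 80, 65, 68, 90, 88],
})

section_means = df.groupby("section").mean()
print(section_means["set_score"].corr(section_means["practical"]))  # expected strong, positive
print(section_means["set_score"].corr(section_means["mcq_score"]))  # expected near zero
```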

17.
Motivation theory suggests that autonomy-supportive instruction often leads to many positive classroom outcomes, such as higher levels of intrinsic motivation and engagement. The purpose of this study was to determine whether perceived autonomy support and course-related intrinsic motivation in college classrooms positively predict student ratings of instruction. Data were collected from 47 undergraduate education courses and 914 students. Consistent with expectations, the results indicated that both intrinsic motivation and autonomy support were positively associated with multiple dimensions of student ratings of instruction. Results also showed that intrinsic motivation moderated the association between autonomy support and instructional ratings: the higher the intrinsic motivation, the less predictive autonomy support was, and the lower the intrinsic motivation, the more predictive autonomy support was. These results suggest that incorporating classroom activities that engender autonomy support may lead to improved student perceptions of classroom instruction and may also enhance student motivation and learning.

18.
Regression results using data on four years of courses at American University show that actual grades have a significant, positive effect on student evaluations of teaching (SETs), after controlling for expected grade, fixed effects for both faculty and courses, and possible endogeneity. The implications are that the SET is a faulty measure of teaching quality and that grades are a faulty signal of future job performance. Students, faculty, and provost appear to be engaged in an individually rational but socially destructive game of grade inflation centered on the link between SETs and grades. When performance is hard to measure, pay-for-performance, embodied here by the link between SETs and faculty pay, may have unintended adverse consequences.
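A fixed-effects specification in this spirit can be sketched as an OLS regression of SET scores on actual and expected grade plus instructor and course dummies. The data frame and column names below are hypothetical, and the paper's endogeneity correction is not reproduced.

```python
# Hypothetical sketch of a fixed-effects regression of SET scores on actual and
# expected grade, with dummies for instructor and course (no endogeneity fix).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "set_score":      [4.1, 3.8, 4.5, 3.9, 4.3, 4.0, 4.6, 3.7],
    "actual_grade":   [3.2, 2.9, 3.6, 3.0, 3.4, 3.1, 3.7, 2.8],
    "expected_grade": [3.3, 3.0, 3.5, 3.1, 3.3, 3.2, 3.6, 3.0],
    "instructor":     ["a", "a", "b", "b", "c", "c", "d", "d"],
    "course":         ["x", "y", "x", "y", "x", "y", "x", "y"],
})

model = smf.ols(
    "set_score ~ actual_grade + expected_grade + C(instructor) + C(course)",
    data=df,
).fit()
print(model.params[["actual_grade", "expected_grade"]])
```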

19.
It has often been suggested that actual or anticipated final grades may influence the ratings students give in student experience surveys, but few studies have been able to test this using actual grades. A study was carried out involving six courses across all four year levels of an undergraduate engineering programme, in which students were asked to identify themselves in an experience survey by providing their student ID on the survey form. The aim of the study was to investigate students' readiness to identify themselves, and to examine any correlation between final examination grades, ratings of student satisfaction and students' perception of their level of understanding of course material. Students were found to have a poor idea of how well they understood the concepts presented in their courses. This lack of an accurate sense of their own understanding is particularly important because 'student understanding' correlated with the ratings they gave to the course. Ratings were largely unaffected by final marks, but students who gave their ID outperformed those who did not in end-of-year examinations. Higher year-level students were more inclined to identify themselves, and ratings tended to increase with year level.

20.
Recent developments in higher education are likely to lead to increased evaluation of teaching and courses and, in particular, increased use of student evaluation of teaching and courses by questionnaire. Most studies of the validity of such evaluations have examined the relationship between traditional measures of how much students learn and their ratings of teaching and courses. But there have been few, if any, studies of the relationship between students' ratings of teaching and the quality of student learning, or how students approached their learning. For the evaluation of teaching and courses by questionnaire to be valid, we would expect that (1) students reporting that they adopted deeper approaches to study would rate the teaching and the course more highly than those adopting more surface strategies and, more importantly, (2) teachers and courses that received higher mean ratings would also have, on average, students adopting deeper strategies. In this paper we report the results for eleven courses in two institutions. The results in general support the validity of student ratings, and suggest that courses and teaching in which students have adopted deeper learning strategies also receive higher student ratings. An earlier version of this paper was presented at the 1989 Annual Conference of the Higher Education Research and Development Society of Australia.
