Teaching Tips

Getting the Most Out of Your Student Ratings of Instruction

How do you react after reading your student ratings of instruction? How is it that professionals with advanced degrees who have taught for decades can be devastated or elated based on a comment or two from an 18-year-old student? But we are. We are because it is difficult to discover, or be reminded, that what we do in the classroom does not always work or is not always appreciated by ALL of our students. Well, get over it. Rather than fret, stew, deny, blame, curse, or whine, we can accept student ratings as valuable feedback and consider how we can use them to improve our teaching. We offer the following suggestions for getting the most out of your student ratings.

Choosing Rating Content. We begin by reminding you that what is put into ratings at the start influences what you can get out of them. We are referring to both the content of the rating forms and their administration. Of the 90 percent of the nation’s colleges and universities that use student ratings (Seldin, 1999), many allow faculty to select some, or all, of their rating items. So what content should be included on your rating forms? First, we believe the only “content” inappropriate for student comment is “course content.” Students seldom know whether course content reflects dated or current thinking in the discipline. We believe it is appropriate to ask for student opinion about other topics. Although their responses may not reflect state-of-the-art thinking on teaching styles, methods, or assessment techniques, students have legitimate opinions about what affected their behavior, attitudes, and learning in a course.

We recommend assessing areas of both perceived strength and weakness. Obviously, if you only ask questions about your strengths, you learn nothing of your weaknesses. However, if you place too much emphasis on your weaknesses, you may negatively bias the students’ overall impression of you and your course. If your results are for your eyes only, it may be more useful to concentrate on your weaknesses, but when they are shared with departmental administrators you certainly do not want a total review of your mistakes. If you can, choose items that make the results useful for personal improvement while keeping in mind that the ratings may be used by others to judge the overall quality of your teaching.

Administering Rating Forms. To get honest and useful feedback from your classes, your students must take the evaluation process seriously. This will not happen when you hand out the rating forms saying, “OK class, it is time again to fill out those insipid university forms.” In addition to following the standard directions provided by your institution, we recommend taking a few minutes to inform students how you use their responses to improve your teaching and how the institution uses them for personnel decisions, such as promotion and tenure. Hopefully, you also follow the first part of this recommendation each time you begin a new course. We cannot stress enough how much instructors can bolster the credibility and validity of student ratings by beginning each semester with a brief statement explaining how the course was changed based on student ratings from previous semesters. By doing so, a favorite English professor of ours would say you are showing, not telling, the students how much you value their responses.

Interpreting Results. After your ratings have been collected and you have submitted student grades, the campus testing office returns your rating results to you. We cannot keep you from quickly scanning your numbers and forming that first emotional impression somewhere around “they loved me” or “they hated me.” But we can ask you to take a deep breath, pause a second, and begin to carefully inspect and interpret the results as you would data collected in your research.

First, inspect the data. Make sure you understand how the results are reported. This sounds obvious, but our office is consistently dismayed by questions asked by some of our more experienced professors. Some faculty go years without understanding the norm group to whom their ratings are compared or continually confuse item frequencies with percentages. Be certain your results are accurate. Both professors and testing offices make mistakes. Check whether a large number of students skipped any of the items, whether an appropriate number of forms was completed, and whether you were compared to the appropriate norm group. We remember once having the most confusing conversation with an agitated professor, only to find out he had mistakenly switched the forms in his two courses.

Second, interpret the data. Begin by thinking holistically and attempt to see the “big picture.” What did the majority of students say about your teaching? Do not ignore the outliers, but do not let a few isolated opinions color the consensus. If class averages are reported as means rather than medians, remember the impact of extremely high or low ratings, especially if the “N” is small. Look to the standard deviation as a measure of consensus to spot areas of disagreement among students.
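If you want to see these statistics side by side, the short Python sketch below computes the mean, median, and standard deviation for a small, hypothetical set of ratings on a five-point scale. Notice how a single outlier pulls the mean down while leaving the median untouched, and how the standard deviation flags the disagreement.

```python
# A minimal sketch of the mean/median/consensus checks described above.
# The ratings list is hypothetical; substitute your own item responses.
import statistics

ratings = [5, 5, 4, 4, 4, 5, 1]  # one outlier in a small class (N = 7)

mean = statistics.mean(ratings)      # pulled down by the single "1"
median = statistics.median(ratings)  # resistant to the outlier
stdev = statistics.stdev(ratings)    # larger values signal disagreement

print(f"mean = {mean:.2f}, median = {median}, SD = {stdev:.2f}")
# mean = 4.00, median = 4, SD = 1.41
```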

Many institutions provide a relative comparison of your results with those of other faculty teaching across the campus or within your department. No doubt by now you have learned two things about student ratings: first, students are rather generous with their ratings; second, your colleagues are a tough comparison group. At our university, a mean course rating of 4.0 (on a five-point scale) places you around the 50th percentile for the campus! These results are typical of most colleges and universities.
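To make that normative comparison concrete, here is a brief illustration. The campus course means below are invented; the point is simply that when colleagues’ ratings cluster near the top of the scale, a respectable 4.0 lands squarely in the middle of the pack.

```python
# Hypothetical illustration of the normative comparison: where a 4.0
# course mean falls among (made-up) campus course means.
campus_means = [3.4, 3.7, 3.9, 4.0, 4.0, 4.1, 4.2, 4.3, 4.4, 4.6]
my_mean = 4.0

# Percentile rank: share of course means at or below yours.
rank = 100 * sum(m <= my_mean for m in campus_means) / len(campus_means)
print(f"A mean of {my_mean} sits at the {rank:.0f}th percentile")
# A mean of 4.0 sits at the 50th percentile
```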

Think absolutely as well as relatively. Be challenged by how your ratings stack up with other faculty, but do not lose sight of their absolute interpretation. The average class rating of 4.0 mentioned above can be relatively viewed as near the bottom half of faculty ratings, but it can also be “absolutely” interpreted as one scale point below excellent. Try not to be so discouraged by a less-than-desired normative comparison that you lose sight of the good aspects of your teaching. Try to identify these good (and bad) aspects of your teaching by looking for trends or patterns of responses across rating items within a course.

You can also use within-course comparisons to interpret your open- and closed-ended item responses. Use responses to a few global or general closed-ended rating items to understand the impact or importance of the complaints or praise offered in the students’ open-ended comments. For example, assuming a five-point scale was used for a global item such as “Rate the overall teaching effectiveness of the instructor,” place the completed forms into two stacks, with ones, twos, and threes in one stack and fours and fives in the other. Read the open-ended responses in the two stacks to identify the common complaints about your teaching coming from students who rated you low and from those who rated you high. Most likely, the complaints made by low-rating students reflect the areas of your teaching with the greatest impact on student perceptions and, thus, require the most attention for your teaching improvement. You can follow the same procedure to analyze your teaching strengths.
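If your forms have been entered electronically, the same two-stack sort takes only a few lines of Python. The ratings and comments below are hypothetical; the idea is simply to pair each global rating with its open-ended comment and split on the rating.

```python
# A sketch of the "two stacks" comparison, assuming each completed form
# has been entered as a (global_rating, comment) pair. Data are hypothetical.
forms = [
    (2, "Lectures were disorganized and hard to follow."),
    (5, "Loved the examples, though exams felt long."),
    (3, "Too much material, not enough structure."),
    (4, "Clear grading; sometimes rushed at the end."),
    (1, "Never knew what to study for exams."),
]

low_stack = [comment for rating, comment in forms if rating <= 3]
high_stack = [comment for rating, comment in forms if rating >= 4]

print("Comments from low raters (act on these first):")
for c in low_stack:
    print(" -", c)
print("Comments from high raters:")
for c in high_stack:
    print(" -", c)
```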

Figure 1. Student ratings of instructor effectiveness over time for three example courses.

Comparing Results Over Time. In addition to comparing your ratings results within a course, you can look for trends and themes across courses and time. Start with the “global” items that measure “overall” teaching quality. Have your general ratings gone up? Down? Stayed the same? It helps to graph the results of these overall items. In just a few minutes, faculty can create a basic Excel spreadsheet that will display the results of their student ratings over time. As they say, a “picture is worth a thousand words.” Figure 1 shows results over time for three example courses.

You can see in the figure that each course improves over time, but the weakest course in the beginning (PSY201) improves the most—especially starting in summer 2003. This dramatic increase may be connected to your intense reworking of the course or to curricular changes in prerequisite courses. There seems to be a drop in the two other courses each summer. Is that due to a different summer cohort of students or your preparation for those summer courses? Graphing allows you to easily spot trends that might have been missed when looking at individual course ratings.

Likewise, you can chart specific items of interest to you. If you are working on your course assessments, you might want to select and chart the results of items related to “fairness of grading,” “difficulty of exams,” or “exams matched course content.” By looking at specific items over time you can see whether your changes have made a difference in how students perceive your course.
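For those who prefer scripting to a spreadsheet, a chart in the spirit of Figure 1 takes only a few lines of Python with matplotlib. Everything here is hypothetical (the course names, semesters, and means); the same loop works just as well for specific items such as “fairness of grading.”

```python
# A minimal plotting sketch in the spirit of Figure 1. Course names and
# ratings are hypothetical; read in your own semester-by-semester means.
import matplotlib.pyplot as plt

semesters = ["Sp03", "Su03", "Fa03", "Sp04", "Su04", "Fa04"]
courses = {
    "PSY100": [4.1, 3.8, 4.2, 4.3, 3.9, 4.4],
    "PSY201": [3.2, 3.6, 3.8, 4.0, 4.1, 4.2],
    "PSY305": [4.0, 3.7, 4.1, 4.2, 3.8, 4.3],
}

# One line per course makes cross-course and cross-semester trends visible.
for course, means in courses.items():
    plt.plot(semesters, means, marker="o", label=course)

plt.ylim(1, 5)  # show the full range of the five-point scale
plt.ylabel("Mean rating (5-point scale)")
plt.title("Overall teaching effectiveness by semester")
plt.legend()
plt.show()
```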

Comparing Pieces of Evidence. Although it is vital to reflect on your ratings over time, you also need to think about how your ratings compare to other pieces of evidence, such as peer observations or classroom videotapes. If peers visit your classroom and discuss their observations, check to see if their comments fit with past student ratings. If you are videotaped, look at the tape in context of past student evaluations. Are you teaching at a very abstract level without examples? Are you asking for, but not answering, student questions? Peer or teaching center staff observations or classroom tapings are excellent ways to get extra feedback about your teaching. We sometimes think of student ratings as an “x-ray of your teaching.” They show the bones, but can sometimes miss the meat seen through other methods of teaching evaluation.

Do you ever use classroom assessment techniques such as the “minute papers” or “muddiest points” discussed by Angelo and Cross (1993)? The idea is to have students briefly reflect in writing on the most important point of the day’s class session or on its most confusing, “muddiest,” point. These classroom tasks not only help students think about course content, but also offer glimpses into what is and is not working in your teaching. This information can be used to validate student ratings from the past and anticipate ratings at the end of the current term. Likewise, we encourage faculty to administer an early, informal feedback form in the middle of the semester. It does not need to be a formal survey; a small set of rated and open-ended questions about how the course is going and what students think could be changed to improve it will do. Collecting early feedback reinforces your interest in student input and your desire to use it to improve the quality of your teaching and your students’ learning this semester.

Seeking Help from Others. Now that we have you fully engaged with interpreting your current student rating results, we strongly encourage you to look to others for help in diagnosing what students are saying about you and the course. Do not rely only on your own interpretation of the results. This bears repeating: do not go it alone! Doctors often seek second opinions, and so should professors. Ask a trusted colleague who is considered a good teacher to review your student ratings. Just like you, your colleagues are wondering how best to interpret their student ratings. By seeking them out, you will open the door to a dialogue about teaching that can support and motivate both of you to improve. People are curious about the ratings and comments their peers receive, so seeking a second opinion from a peer capitalizes on this curiosity to determine what is “normal.” Most likely, your colleague will find something you missed. If not, they will at least confirm that you are on the right track in interpreting your own ratings. It is a win-win situation.

Another second opinion can come from teaching center staff, who are paid to assist you; take advantage of their services. These individuals can bring both a campus and a research perspective to your ratings and student comments. They have seen hundreds of teaching evaluations at your institution and know the current research on teaching and learning. Not only can they say, “Like others in your college, your students are concerned the classroom assessments do not match what is being taught,” but they can also offer practical suggestions for addressing the concern. It is one-stop shopping that offers help interpreting results, a campus and research-literature perspective, and suggestions for improvement. Cohen (1980) has shown that consultation is a critical element in using student feedback for instructional improvement. Without consultation, feedback can easily be misinterpreted or ignored. If needed, teaching center staff can also help collect more feedback to supplement your existing student rating data.

Making Changes to Your Teaching. So now you have scrutinized your most recent student evaluations, compared them to past evaluations and supplemental feedback, and even spoken with others about your teaching evaluations. What is left to do? Student ratings, and other assessments, are virtually worthless unless they lead to improved teaching. The next step is to use the results to build on strengths and remediate weaknesses. Do not stop at saying, “The students are probably right about needing more course structure and organization.” Say it, and then use it to develop a plan of attack. Start slowly. It is daunting to tackle every area that might need attention all at once, so begin with a few small steps. Pick one or two spots you would like to improve and then return to your teaching center staff or colleague to discuss possible improvements. Possible changes include your syllabi, lesson plans, tests, papers, group assignments, grading feedback, office hours, and so on. Once those changes are implemented, tell your current students about the changes you have made and the rationale behind them. From what we hear from students, they will appreciate your thoughtfulness and willingness to use their ratings to make course changes.

Interpreting and using student feedback is a cyclical process. Once you have completed one cycle (selected good items, reviewed your results, talked to a colleague, compared results to other data, planned and implemented instructional changes) it is time to start again by selecting new items for your next course evaluation. Do not forget to tell your current students how you have used student advice to improve the course over time. This continual improvement sequence engages students in an important feedback loop and increases the validity of the student ratings themselves.

We have given you a lot of advice on how to effectively and efficiently use your student ratings of instruction. In the end, it is up to you to change. We believe students can provide truthful and honest information that allows faculty to improve their teaching. Please do not dismiss student feedback. Mow your lawn aggressively if needed, but return to those student rating forms, find an area that needs attention, and plan a change. In most cases you do not need to make a major overhaul, only a few steps in the right direction. Over time, these small steps will add up to huge improvements. Of course, those improvements will show up in higher future student ratings!

References and Recommended Readings

  • Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass.
  • Braskamp, L. A., & Ory, J. C. (1994). Assessing faculty work: Enhancing individual and institutional performance. San Francisco: Jossey-Bass.
  • Centra, J. A. (1993). Reflective faculty evaluation. San Francisco: Jossey-Bass.
  • Cohen, P. A. (1980). Effectiveness of student-rating feedback for improving college instruction: A meta-analysis of findings. Research in Higher Education, 13, 321-341.
  • Seldin, P. (1999). Changing practices in evaluating teaching: A practical guide to improved faculty performance and promotion/tenure decisions. Bolton, MA: Anchor Publishing.
