Making course evaluations useful

Do course evaluations have to be a popularity contest? Or can they be useful tools for improving a class?


A few days ago, evolgen lamented that his students weren't giving him useful information on their end-of-course evaluations.

I'm not surprised.

When I first started teaching, I was given a copy of the standard-teacher-evaluation-form-that-everyone-used.

The questions read something like this, with ratings ranging from "always" to "never":

1. Does your instructor show up on time?  (to what? coffee dates?)

2. Does your instructor dress appropriately for class?  (Why on earth would I ask my students, who pierced every part of their bodies and thought it best to wear long underwear under shorts, in the winter time, with Doc Martens, if they thought I dressed appropriately for class?)

3. Does your instructor talk loudly enough? (probably not, but then that might force them to listen)

4. Is your instructor present during scheduled office hours?  (mentally?)


And, I thought, "I'm a professional! Of course I'm coming to class on time. And worse, even if the students answered the questions, those answers wouldn't tell me anything useful!" Why would I want to waste anyone's time (especially mine) on an evaluation that didn't give useful feedback? I knew whether I was on time for class, and I knew where I was during office hours; that information didn't help me any.

I admit, the first quarter that I ever taught a course on my own, I really did wonder how anyone knew if I was doing it well or not. No one was around at night. No one came to our class except for me, my students, and a sleazeball janitor who totally creeped me out.

It could have been weird.

But, never mind that, I learned quickly that, if students weren't happy with a course, word got out. The word went through a grapevine starting with our department secretaries, who served as unofficial confidants, and ending with our department chair.

Anyway, once I started teaching classes that I designed (my third quarter of teaching), I decided it was time for a change.

Always the troublemaker, I asked our department chair if I really had to use the school's evaluation form or if I could ask my own questions. And from then on, I wrote my own evaluation questions and used them every quarter.

In case you're wondering, here's a sample of the questions that I asked:

  • Do you feel confident in your laboratory skills, such as pipetting, weighing, measuring pH, making buffers, other solutions, and media, growing microorganisms, and preparing dilutions?
  • Was the text helpful in understanding the material? If you didn't use the text, please explain why not.
  • We cover a large quantity of material during the biotechnology laboratory sequence; is there any aspect that you feel should be covered more thoroughly? If we spent more time on the subject you mentioned, what subject would you drop?
  • Did you find the "lab meeting" presentations useful? Should I have students give these types of presentations in the Fall and Winter quarters? If not, why not?
  • Is there anything that you would suggest changing about this course?
  • Is there anything that you particularly liked and want to prevent the instructor from changing?
  • Do you feel that the prerequisite courses prepared you for learning the material? If not, should there be more prerequisites?
  • If you have any additional comments, criticisms or suggestions, please write them on the back of this page. Your input is valuable to me and I will use this information in planning for next year.

What did I learn from my student evaluations?

You can always supplement the standard form with questions of your own. If you want useful information, and you want the end-of-course evaluation to be a useful tool, you have to be proactive.



Did any of your colleagues begin to use your evaluations? Or use them as
inspiration? How about your students: did they ever comment on the usefulness of
your evaluations compared to the standard form?
Thanks for some very good suggestions.


It's great to see someone using a course evaluation to actually ask specific questions about what happened in the course and what outcomes the students think they have achieved.

I've asked similar questions for years, but now, in the great rush to "improve teaching quality," we're forced to use generic questionnaires that tell you very little and, even when they do identify a problem, give you no idea of the nature of the problem so you could fix it. For example:

Does the lecturer give enough feedback?

1 Strongly Disagree
2 Disagree
3 Neither agree nor disagree
4 Agree
5 Strongly Agree

Now, if you, say, get a result of 2.5, and you have several items of assessment and are giving feedback on some of them, how are you supposed to have any idea of what this means?

Again, it's a great post! If I could only get the people who are in charge of course evaluation to look at it.

Dr. Eye: great questions, I wish I knew all the answers.

When I first started teaching, I was the first full-time faculty member to be hired in biology in seven years. There's very little turnover at community colleges, and about 60% of the faculty are part-time. The part-time faculty didn't put much effort into evaluations (as far as I could tell) because they never knew when or where they would be teaching that same course. And the full-time faculty were pretty set in their ways. They thought my approach was okay, but if they tried it, they didn't tell me.

The students liked the evaluations quite a bit, especially because I had them in classes for multiple quarters, so they could see that I really used their input to help tweak the curriculum.


Thanks for your feedback!

If I had written that evaluation question, I would have written something like: "Does your instructor provide sufficient information about the quality of your work? If not, what kind of feedback would you find most helpful?"

I don't understand how generic evaluations would help "improve teaching quality." It seems like whenever we try to standardize teaching, we end up driving instructors toward a lower-quality, less flexible ideal.

It seems to me that most evaluations are designed to catch catastrophes more than to be a useful pedagogical tool for the instructor (or the students, for that matter).
I would posit that it should be possible to design a 'generic evaluation' that would be much more useful than many of those I've seen. In fact, the ones you've posted, Sandra, are generic enough that I can use them in an intro mechanics class (OK, maybe I'll leave out pipetting...). An improved zeroth-level evaluation, together with the encouragement to supplement it with course- or instructor-specific questions, could be helpful and need not necessarily drive a reduction in the quality of teaching and learning.

I see what you mean. I was referring to the generic "fill-in-the-bubbles" evaluations that Martin described.

Perhaps the problem is really that it's easier to use standardized evaluations that can be scored through automated bubble-scanning methods than to use evaluations that require reading what students write.

Most of our evaluations are now done through a web form, so we can collect both bubble-fill data and written answers. I think that well-thought-out questions could be useful (to be sure, well-thought-out written answers would be better, but it is difficult to collect many of them, at least in my experience, unless the course is a complete disaster from the students' perspective). For example:
On a scale of 1 to 5, with 5 being the best, rate your confidence in your lab skills.

On a scale of 1 to 5, with 1 being not worth opening and 5 being the Platonic ideal, how would you rate your textbook?

On a scale of 1 to 5, with 1 being "too embarrassingly small to admit to anyone" and 5 being "absolutely the most I could possibly have done," rate your own effort in this course.

If 3 is the average for students in this course, (1 is crappy, 5 is great) how would you rate your performance?

OK, I guess now you see why I'd like to steal questions from someone else.
Also, again, I agree that written answers are best, but not as many students are willing to write useful answers as will fill out a bubble sheet or a simple five-minute questionnaire on the web, so we need to find good questions (that is, ones helpful to us as instructors) that can fit the more usual format.

I, too, wonder how course evaluations can make any positive change in an instructor's preparation. But your evaluations seem interesting, and they are all indirect questions that really help measure the performance level of both the instructor and the students.