Wednesday, September 30, 2009

Blog Entry #5

Bostonian Henry Adams wrote, "Nothing in education is so astonishing as the amount of ignorance it accumulates in the form of inert facts." He was commenting on the style of American education in the 19th century, which placed undue emphasis on students' ability to memorize information. One hundred years later, critics of American education might very well argue that not much has changed.


This week's readings delve into what is arguably the most controversial aspect of formal education: assessment and grading. Last week we looked more at the question of how to assess students and determine their grades. Now we're talking about why we assess students, why we determine grades at all. These issues are fundamental to teachers' development because they raise the question: what is the purpose of formal education?

The author of What the Best Teachers Do sees teachers split between those who see education almost as a punitive exercise, an ordeal oriented not to helping students learn, but to making students jump through hoops; and those who see education as a uniquely transformational process, one that turns proverbial punk teenagers into self-directed scholars who not only want to learn, but who love to learn. For his part, McKeachie in Teaching Tips urges teachers to embrace the latter view, but is more realistic about its implementation.

In my entry this week, I'll lay out my own view, and discuss the ideas and suggestions from the readings that I intend to try out in my own classes.

I love the notion that formal education is a transformational process, and agree that it does transform people--most of the time modestly, sometimes extraordinarily. But do I agree with Henry Peter Brougham, who in 1828 called education the force that "makes a people easy to lead, but difficult to drive; easy to govern, but impossible to enslave"? No. And I think even a cursory review of 20th century history would back me up.

Americans, being an unfortunately anti-intellectual lot in general, often see education as a process that churns out productive workers. Others see education as the key to unlocking great wonders of human potential. I believe that education--writ large--should do both. And while responsibility for realizing this end result does rest in large measure with teachers, particularly in the early stages of formal education, I think more responsibility rests with students at the university level. As such, I think students, as they start progressing through the educational ranks, must be increasingly evaluated on aspects of performance.


I base my view not so much on philosophical grounds as on practical ones. Having been a student for many years, and a teacher for a few years, I don't see evidence that every student has an intrinsic desire to be the kind of learner we'd all love to see in our classrooms--the learner with limitless curiosity and the passion to engage wholeheartedly in every subject. If this intrinsic desire exists, it can probably only be nurtured in the earliest stages of life. By the time students are in university, they're as curious and as passionate as they're likely to get. As they get even older, emotional maturity may help them become more patient and focused, which should improve their ability to perform well in school. But their levels of curiosity and passion for learning are unlikely to change significantly.
What the Best Teachers Do bemoans the fact that so many teachers seem too concerned with performance, and not concerned enough with learning. I believe that in order to realize a university education's dual purpose of graduating self-directed scholars who are also productive workers, teachers must emphasize learning but not totally relinquish necessary performance requirements.



In reality, performance-based aspects of grading are not merely about the teacher's convenience, as What the Best Teachers Do says, but about helping students learn to cope with the rules and norms of real life. Deadlines exist in various forms for all professions, and even in the most liberal of classrooms, the end of the semester is still a deadline everyone must adhere to.


At the same time, teachers should reevaluate whether or not the rules and regulations of their classrooms have legitimate educational underpinnings. For example, a teacher may compel students to complete assignments in a particular order because successive assignments require students to apply certain concepts that are introduced in that order. Students may be asked to do assignments and readings at the same time so that the class can review and discuss what they're learning and doing as a group.


Rules and regulations that don’t have educational bases, however, may still be necessary. For example, science students may not have much or any flexibility about when they can access the lab, due to university safety procedures, as well as the unwieldy logistics of having to coordinate the schedules of dozens of science classes.


Rules and regulations that do not serve a compelling educational purpose and are not necessary by some other standard should be discarded or adjusted. In What the Best Teachers Do, we learned of a student who wanted to do a report on War and Peace but wound up not having enough time to complete the book before the report was due. The teacher could have adjusted her system in a couple of different ways. For example, she could have created a list of works her students could choose from, with the idea that all the works on the list are those that could reasonably be completed within the time frame of the assignment. Alternatively, she could have ranked various works in order of length and complexity and assigned deadlines to students based on the rankings so that longer, more complex works would be due later than shorter, less complex works.
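Out of curiosity, I sketched what that second alternative could look like in practice. This is just a rough illustration in Python, with invented page counts and an assumed reading pace, not anything taken from the readings:

# Invented page counts and an assumed reading pace, for illustration only:
# longer, more complex works simply get later deadlines.
from datetime import date, timedelta

page_counts = {
    "The Old Man and the Sea": 127,
    "The Great Gatsby": 180,
    "Crime and Punishment": 671,
    "War and Peace": 1225,
}

start = date(2009, 9, 1)      # hypothetical first day of the assignment
pages_per_day = 40            # assumed reasonable reading pace

for title, pages in sorted(page_counts.items(), key=lambda item: item[1]):
    deadline = start + timedelta(days=round(pages / pages_per_day))
    print(f"{title}: report due {deadline}")

The exact numbers don't matter; the point is only that the deadlines scale with the reading load instead of being identical for every student.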


This week’s readings also centered on tests and other tools for assessing student learning. I agree with the assertions that McKeachie lists at the start of chapter 7 in Teaching Tips, particularly his urging that teachers use a variety of methods to assess learning. Larger classrooms and teachers’ fear of being overwhelmed by having to grade hundreds of exam papers have led to an overreliance on multiple choice and similarly objective question types. The problem with this, as I wrote last week, is that it’s extremely difficult to test higher-level learning with these kinds of questions. Large classrooms also make it more difficult to assign other kinds of work. The result is that often only low-level learning takes place. Teaching Tips reemphasized the limitation of these kinds of tests in assessing whether students have reached higher-level learning objectives.

Another argument raised by the readings this week concerns the meaning of grades. I agree that grades are ultimately a way to tell students how well they’re learning (and performing) and to tell teachers how well they’re teaching. But then we must ask what “learning” is. What the Best Teachers Do says, "Learning entails primarily intellectual and personal changes that people undergo as they develop new understandings and reasoning abilities" (p. 153). This is true, but how do teachers measure these intellectual and personal changes? Ultimately, all measures (i.e., grades) are based on the teachers' criteria of what is and is not important in the classroom. Even having students grade themselves or each other simply confuses them, as they try with varying degrees of success to divine what grade they think the teacher would give them.

Teachers could abandon grades altogether and turn the university experience into a sort of Montessori school for adults, but such a system wouldn't last long. Students themselves would soon demand grades, as well they should, in order to help them know whether or not they're learning and how well they are doing in comparison to others.


So grades must be based on how well students demonstrate meeting the course objectives. For example, one course objective in COMM 110 is for students to effectively and competently deliver a variety of speech types. Students’ grades for these speeches should tell students how well they match up against the definitions of “effective,” “competent,” and other criteria established in class. To be able to give grades that communicate this, though, teachers must be teaching to the objectives. If COMM 110 teachers spend all their class time reviewing the definitions of speaking terms from the textbook, and not showing what the application of these terms looks like in actual speeches, then the bad grades students would likely get in such a class would be as much a consequence of bad teaching as of poor student learning.


An example of this unfortunate state is the COMM 112 class I assist in. Lessons consist almost entirely of lectures that are very “laundry list” in nature and focus mostly on the history of American media and, in some cases, the inner workings of media operations. The tests reflect this. The name of the course, though, is "Understanding Media and Social Change." The syllabus has no objectives, only a "course overview," but the name of the course implies that we should be helping students analyze the role that media have in our lives. The class is big—it has more than 130 students—and teaching that many people in a way that emphasizes analysis and other higher-level concepts is not easy. Even more challenging is trying to measure how well 130 students are developing analytical skills.

COMM TAs just stitched together the first exam for our COMM 110 (Fundamentals of Public Speaking) classes. I say "stitched together" because we were e-mailed the test bank of questions and invited to select which ones we wanted and in what order. Virtually all the questions tested students' knowledge of terms used in the public speaking textbook. This is not bad, assuming that knowledge of these terms is an objective of the course, but I’m not sure these tests should weigh as heavily as they do in the final grade. In COMM 110, students’ ability to actually deliver a variety of speeches is what we’re most concerned with, which is why every student must give five graded speeches.


So for next semester, when I will be solely responsible for COMM 112, I want to focus more on the media’s impact on society and how the ever-evolving media might continue to influence society in the future. This semester the class does group projects, in which groups do research and present to the entire class their findings about a particular medium. Most of this research is on the history and current status of the medium. I’d like to see if I can ask students to imagine how the medium might evolve. For example, what might the internet look like 20 years from now?

This semester the class is being asked to do a lot of daily writing, but the purpose seems only to make students attend class. The writings are never reviewed or used as a launch pad for discussion. Next semester I’d like to change this. Not all the writings will be graded, in an effort to reduce stress, as McKeachie suggests.

Teaching Tips talked briefly about a revolutionary curriculum shift launched by Alverno College more than 20 years ago. I wanted to see what kinds of tools I might find on the college's website, which details the curriculum and has links to a number of assessment tools the college developed.

Wednesday, September 23, 2009

Blog Entry #4

In this week's entry, I'll be writing in response to chapter 11 of McKeachie's Teaching Tips and chapter 8 of Curzan and Damour's First Day to Final Grade. The subject matter of these chapters is grading. I’ll also respond to chapter 10 in Teaching Tips, which is all about the issue of cheating.

Last week I wrote that First Day... is the more engaging of the two texts, but this week I feel the opposite. I found Teaching Tips to be much more thoughtful and thought-provoking in its discussion on grading. First Day... was a disappointment, as you'll see from my entry.

As the majority of the reading focused on grading, I'll write about that first.

Grading

Grading, measuring, and evaluating students is difficult work and, as Teaching Tips appears to recognize more than First Day..., controversial. To help me delve into the topic, I need to introduce readers to Dr. Fred O. Brooks, who wrote the book (well, a book) on the subject of grading. He taught the classroom evaluation and measurement course I took as an undergraduate teacher-in-training. Dr. Brooks' textbook is called Principles and Practices in Classroom Evaluation. Things that he and his textbook said have stuck with me since I took his course in 1993 and, in fact, I still have the book. My own views about grading were heavily influenced by Dr. Brooks.

Among Dr. Brooks' uncompromising stances on classroom evaluation is that norm-based grading is never appropriate. His first argument against norm-based grading (or grading "on a curve") was that no classroom is varied or large enough for a teacher to reasonably expect it to conform to a universal "norm." He said that curves are statistically meaningful only when looking at very large populations of students. He argued, as McKeachie acknowledges as well, that teachers instead create a curve by comparing students within the class to each other, which is unreliable, especially if you happen to have a class attended by mostly gifted students or mostly ungifted ones.

His second argument against norm-based grading was that students ought to be awarded every chance to succeed and that random factors, such as the comparative brilliance or dimness of their classmates, ought to be eliminated as much as possible. He felt that too many teachers taught their classes as if they were "in the business of keeping secrets from [their] students." He believed teachers should make classes so inviting, "so clear, and so obvious to students that they can't help but learn." He recognized that this might be unrealistic, but even so, it is "a worthwhile goal." For Dr. Brooks, being inviting and obvious means telling students exactly what they’re expected to know and do. In other words, teachers should establish clear competencies that all students can (potentially) demonstrate.

In response to concerns about grade inflation, Dr. Brooks felt that teachers should want all their students to do very well. Isn't that why we teach? Moreover, isn’t that why we strive to be good teachers? Isn't a product of good teaching widespread student success? This is not to say that Dr. Brooks thought learning should not be challenging. On the contrary, he felt teachers should have high expectations and demand the best of students. But if, in turn, students gave their best and met those expectations, should they not get a good grade? On this question, posed in this context, there was little disagreement in Dr. Brooks' class.

Rather, disagreement, or at least lack of total agreement, came when we began to talk about the details of applying the concepts we were learning to the actual construction of exams. To understand why, it must be clear that Dr. Brooks was a zealot for exam validity and reliability.

Validity, as Teaching Tips defines it, is whether or not the test (or any evaluated assignment, for that matter) measures what the teacher thinks it measures. As an example, let's look at a word problem on a math exam. The teacher's intent in this example is to see how well students can multiply. This word problem asks the students to convert knots to miles per hour. Unless knowledge of the relationship between knots and miles per hour was specifically part of the instruction prior to this exam, this question would be invalid in most classrooms, as all but naval academy cadets would be unlikely to know the ratio. To revalidate the question, the teacher could simply provide the conversion factor (1 knot = 1.1507794 statute miles per hour) in the question. The student would then be required, and should be able, to perform the multiplication correctly, which is what the teacher was trying to measure in the first place.
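To make the arithmetic concrete, here is a quick sketch of my own (not from the readings) of what the revalidated item leaves for the student to do, using a hypothetical speed of 20 knots:

# With the conversion factor supplied in the question, only the multiplication
# (the skill the teacher actually meant to measure) remains for the student.
KNOTS_TO_MPH = 1.1507794    # given in the revalidated question
speed_knots = 20            # hypothetical value from the word problem
speed_mph = speed_knots * KNOTS_TO_MPH
print(round(speed_mph, 1))  # 23.0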

Reliability is the quality of consistency. More precisely, reliable test items or assignments are those that would be graded the same by anyone at any time. A multiple choice question asking "Pure water is best described by which of the following formulae?" should find "H2O" to be the response that any chemistry teacher, year after year, would agree is correct (disclaimer: despite owning a chemistry set as a child and taking chemistry classes in high school and college, I don't know that there isn't some other formula that better describes pure water, but I think you understand the point I'm making). On the other hand, an essay question asking students to "analyze the role that racism plays in anti-Barack Obama protests," for example, is not so cut-and-dried, and would likely elicit a variety of critiques from the political science, sociology, or communication department teachers we can imagine might ask such a question. As you may have guessed, the reliability of a test question depends on how objective or subjective the question is.

Validity is something that most people can and do agree on when they talk about tests and test items. Reliability, on the other hand, is not seen by many as a crucial criterion of tests. But for Dr. Brooks, the two characteristics were equally essential. As a result, subjective test questions, like the essay question I offered in the preceding paragraph, were considered dangerous, if not entirely inappropriate. Dr. Brooks, for one, certainly never used them. His exams consisted entirely of multiple choice, true-false, matching, and fill-in-the-blank items.

It's here that I'll reveal that Dr. Brooks had been a high school math teacher before he became an educator of educators. This is relevant because it seemed as the course went on that he believed multiple choice, true-false, and short completion items were all any teacher needed to measure whether students were learning. Now, he would probably argue that I'm overstating his position but, I wager, not by much. In Dr. Brooks' experience, he probably was able to evaluate his students' mastery of math concepts and applications through these kinds of test items. Correct answers to math questions are usually much more discernible and not as open to interpretation as, say, an answer to a typical essay question.

As an English and communication major, I was among several students who failed to see how these questions would sufficiently capture students' learning in my disciplines, especially higher-level learning (in terms of Bloom's taxonomy). Even for lower-level learning, such as knowledge and comprehension, there are limitations to how well certain test items can be used to measure success. To see this, we must distinguish questions that require students to recognize the correct answer from questions that require students to recall the correct answer.

Recognition requires what I call “lineup” knowledge. To recognize the correct answer, students need only to have seen or heard it enough times (perhaps only once in a class lecture) that it makes a connection in their minds when they see it on the exam. Victims of muggings are often unable to describe their attackers accurately from memory, but if the police present four or five suspects in a lineup, the victim is much more likely to be able to pinpoint which of the suspects was the mugger.

Recall, on the other hand, requires deeper knowledge. To recall a correct answer, students need to be familiar enough with it that they can recreate it. To continue with the metaphor above, recall is like asking the mugging victim not only to describe the mugger to a police sketch artist, but to draw the mugger him- or herself.

In many disciplines, multiple choice questions require students only to recognize correct answers, not recall them. Let's look at this history question as an example:

Which naval battle of WWII is considered the turning point for the US in its war against Japan?
a) Coral Sea
b) Midway
c) Pearl Harbor
d) Sea of Japan

On the other hand, math and physics are just two disciplines that can use multiple choice questions with the expectation that students must still know how to perform certain operations, memorize certain formulae, etc. For example:

What is the value of 58 to the power of 3?
a) 195,112
b) 11,316,496
c) 3,364
d) 7.61577311

No one, not even a savant, has "memory" of this kind of information. It must always be calculated, however fast some may be able to do it.

We could change the history question above from multiple choice to fill-in-the-blank format in order to make it more difficult and require students to know the material more deeply. Even so, this question is still only a knowledge-level item and would not be able to help the teacher know whether the student comprehends why the Battle of Midway was considered the turning point. To bring the matter home to my own teaching assignments this semester, can I evaluate students’ ability to apply good public speaking techniques and deliver quality speeches by having them take multiple choice exams? The answer is clearly “no,” as McKeachie writes and even Dr. Brooks acknowledges in his book. Students must be required to apply the speaking skills. Grading speeches, as we know, is an inherently subjective exercise.

While it’s clear that the use of subjective measures is very often appropriate and completely valid, I believe that reliability must not be sacrificed. In fact, reliability in grading is completely compatible with the grading philosophies of teachers who grade on a curve, since many who grade on a curve are concerned about what they perceive as deteriorating standards in education. Insisting on reliable measures is to insist that there are agreed-upon standards (of essay writing, public speaking, etc.) that all teachers more or less adhere to. Without insisting on reliable measures, exams and grades become little more than the personal opinions of the teachers who give them (see the section in Teaching Tips, “Can we trust grades?”).

To achieve this in a class like COMM 110, I think the Communication Department needs to collectively identify and define standards of good public speaking and then employ rubrics that illustrate these standards. I know that the classes currently employ what is being called a rubric, but it’s not a rubric. It’s a list of words like “attention getter” and “eye contact” with points assigned to them. An effective rubric clarifies what proper use of “eye contact” is and describes what different point values mean. For example*:

5 points = eye contact, interaction with aids, and physical gestures demonstrate the speaker’s energy and interest, guiding the listener through the presentation.

3 points = eye contact, interaction with aids, and physical gestures are natural and fluid.

1 point = eye contact with the audience is lacking. Gestures are missing or awkward. The speaker depends heavily on the written speech or notes.

*The complete version of this rubric can be accessed through the link found at the bottom of this entry. It comes from Tusculum College in Greeneville, Tennessee.

Can such a rubric actually promote and enforce reliable standards across a few dozen instructors of public speaking? Yes, and I speak from personal experience. Every year Texas high school students take a standardized exam intended to measure their mastery of secondary education-level competencies. At least one essay question is part of this exam, and the essays must be evaluated by human beings. When I lived in Texas, I was twice employed temporarily (along with about a hundred others) to grade these essays. To ensure that all the evaluators were assigning points reliably, detailed rubrics were provided and we spent close to a full day honing our evaluative instincts and skills to match the rubric. The training was over once all the evaluators were able to assign the same number of points to the same essays.
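For readers who like to see the mechanics, here is a rough sketch of the kind of agreement check that training was driving at. The scores below are invented; the idea is simply to compare each evaluator's marks on a common set of anchor essays against the scores agreed upon during training:

# Invented scores, for illustration only: how closely does each evaluator
# match the anchor scores assigned during rubric training?
anchor_scores = {"essay_1": 3, "essay_2": 5, "essay_3": 1, "essay_4": 3}

evaluator_scores = {
    "evaluator_a": {"essay_1": 3, "essay_2": 5, "essay_3": 1, "essay_4": 3},
    "evaluator_b": {"essay_1": 3, "essay_2": 4, "essay_3": 1, "essay_4": 3},
}

for evaluator, scores in evaluator_scores.items():
    matches = sum(scores[essay] == score for essay, score in anchor_scores.items())
    agreement = matches / len(anchor_scores)
    print(f"{evaluator}: {agreement:.0%} agreement with the anchor scores")

The underlying test in our training was the same: everyone had to land on the same score for the same essay.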

It appears that the department prefers instead to give instructors the flexibility to set their own standards and create their own rubrics. This would not ordinarily be much cause for alarm if public speaking were being taught by experienced teachers. As it is, most public speaking instructors at NDSU (and at many larger universities and colleges in the US) are people who’ve never taught before (a few are even new to the communication discipline). The risk here is not just that a lack of evaluative standards may lead to unreliable grading, but that some instructors may not even be capable of setting their own standards.

In this debate, there is a great difference between the two books we’ve read for this week. Teaching Tips coolly offers sensible reasons for employing both norm-based and competency-based grading, though McKeachie does say he believes norm-based grading “is educationally dysfunctional.” First Day…, on the other hand, simply provides a short section designed to help teaching assistants “[find their] grading curve” and seems to feel that getting into this debate is beyond its scope or, perhaps, graduate teaching assistants’ capabilities.

Cheating

Chapter 11 of Teaching Tips warned about students who are “performance oriented,” or working primarily for a grade, as opposed to students who seek learning for its own sake. I believe that most students have a healthy mix of the two orientations. McKeachie writes that students who tend to achieve the most in terms of learning have moderate grade motivation and high intrinsic motivation. Students who enjoy learning for its own sake, after all, probably tend to receive good grades. Such students would probably be disappointed on occasions when they don’t receive good grades, as well. I’ve always been someone who genuinely enjoys learning, but I also strive for and recognize the value of high grades. I don’t believe the two goals are mutually exclusive.

The issue of grade motivation is important because cheating, McKeachie says in chapter 10, is often committed because of students’ fixation on high grades at any cost. Statistics showing the prevalence of cheating are always a bit depressing, but it’s important that we be aware of how common cheating can be.

Next semester I will take over COMM 112 and teach it on my own. The class has around 130 students, so monitoring them during exams will not be easy. As such, I read McKeachie’s list of cheating methods with some alarm. Students now have so many more methods at their disposal, but foot tapping and hand codes are particularly frightening because they can be almost impossible to detect. After all, plenty of students tap their feet or make other noises out of sheer anxiety. But in a multiple choice exam, such simple signals could be used to cheat effectively.

So far in COMM 112, we have used two different forms of the first exam in an effort to prevent cheating, but McKeachie cited research showing that scrambling the order of items alone did not reduce cheating. So for the next test, I’ll suggest we also scramble the responses.
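To show what I have in mind, here is a small sketch of how several exam forms could be generated by shuffling both the question order and each question's answer choices. The questions and choices are placeholders I made up, not actual COMM 112 items:

# Placeholder questions, invented for illustration; each call to make_form
# produces a differently scrambled version of the same exam.
import random

questions = [
    ("Which agency regulates broadcasting in the US?",
     ["Federal Communications Commission", "Federal Trade Commission",
      "Library of Congress", "Department of Commerce"]),
    ("Which medium is delivered over the airwaves?",
     ["Radio", "Newspapers", "Magazines", "Books"]),
]

def make_form(seed):
    rng = random.Random(seed)
    form = []
    for text, choices in rng.sample(questions, len(questions)):   # scramble question order
        form.append((text, rng.sample(choices, len(choices))))    # scramble the responses
    return form

form_a = make_form(seed=1)
form_b = make_form(seed=2)
for text, choices in form_a:
    print(text, choices)

An answer key for each form would have to be generated at the same time, which is one more reason to let software do the shuffling rather than doing it by hand.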

I like the idea of trying to prevent cheating before it starts. McKeachie outlined an “honor system,” wherein classes are invited to vote on whether they’d like to adopt such a system. He says few classes actually vote unanimously to adopt an honor system, but he believes the discussion of academic dishonesty is itself useful. I doubt I’ll try that, but having students sign a pledge of academic integrity prior to each exam seems a good idea. Teachers could place the statement on the exam itself, so that when students write their names on the exam they are also signing the pledge. The downside to this approach, I suppose, is that it takes away the sense that students are signing it voluntarily. It’s sort of like having to agree to a computer application’s usage terms before it can be installed. But these kinds of approaches can be more effective, and at least feel less draconian, than the “Big Brother Is Watching You” style messages that appear in many of the syllabi I’ve seen at NDSU.

I thought McKeachie made very good suggestions for handling suspected cheating, as well. He gave an example of behavior—seeing a student glance around—that may or may not be cheating. His suggestion of quietly insisting that the student change seats if the wandering looks continue was a good blend of subtlety and effectiveness. However, I wish McKeachie had described what he would do if confronted with a clear indication of cheating, like finding a crib sheet or seeing students pass notes during an exam. Would he then also have sought to be as discreet?

In any event, I think I’d prefer the discreet route as much as possible. Students caught cheating will face plenty of severe consequences without having to be paraded in chains, as it were, before the whole class. On the other hand, it is important that the rest of the class knows the teacher is paying attention and will take swift, firm action against cheaters. McKeachie rightly noted this as well. The Chinese say that sometimes you have to kill a chicken to frighten the monkeys. Of course, this only works if the monkeys see the chicken get killed, or at least hear about it. Knowing that a teacher will punish students for cheating can be an effective deterrent, at least in that class.

Some links to sample public speaking rubrics can be found below:

http://www.tusculum.edu/research/documents/PublicSpeakingCompetencyRubric.pdf

http://www.awrsd.org/oak/Academics/Rubrics/Public%20Speaking%20Rubric.htm

http://www.oaklandcc.edu/assessment/geassessment/outcomes/geoutcome_communicate_effectively_speaking/Public%20Speaking%20Rubric%20May%202009.pdf

Wednesday, September 16, 2009

Blog Entry #3

This week's readings are all from "Teaching Tips," which may be the least engaging of the three texts we use in class, but it does contain much useful information, as well as thought-provoking commentary. This week's chapters center on the roles that lecture, discussion, and the textbook play in the classroom. The relationship among the textbook, in-class discussion, and lecture is a fascinating one that means different things to different people--both students and teachers.



In 8th grade world history class, I very quickly realized that the daily lecture (and lecture is the only thing I recall the teacher ever doing in that class) was exactly the information from the textbook. Nothing was added or subtracted. The teacher had an outline of the text's chapters--paragraph by paragraph--written out on a plastic scroll attached to an overhead projector, and he would let us see one line at a time while we copied the outline in our notes. He might briefly elaborate on items from the outline, but there was not one instance wherein he told us something that was not written in the textbook.



I said "while we copied the outline," but the truth is that after the first few days, I stopped copying. I would sit and listen silently to the lecture, usually following along in the book as a form of review, as I had consistently read all the assigned readings. I was the only student not to take notes, which bothered the teacher considerably. I know this because on at least 3 occasions that I can recall, he all but commanded me to take notes. He did this even after I consistently earned the highest marks on all the exams.



I compare this experience to two history courses I took as an undergraduate, in which the lectures and the assigned readings were quite divergent. For example, in my Native American history course, in the assigned reading we might learn about white settlers' successive violations of US-native treaties, and then in the lecture hear about how natives were portrayed in 19th century American literature, such as the novels of James Fenimore Cooper. Neither questions nor comments were ever invited. As a result, students often felt overwhelmed by the sheer volume of information they felt they needed to memorize. Even I scribbled furiously, knowing that if I didn't get all the lecture into my notes, I would likely have little or no way to get the information later.



In my teaching, and I think this is what McKeachie is advocating, I try to be somewhere in the middle of the two extremes I've described above. This balancing act was mentioned in "First Day to Final Grade," too. Students paid money for the textbook, so they ought to be able to read it and know it will be the source for at least some of what the teacher expects them to have learned for exams, papers, etc. Moreover, as McKeachie says, because there is evidence suggesting that students can learn more from reading than from listening to a lecture, class time needs to add value to the reading and not replace the reading.



I think a fine example can be found in the COMM 112 course I co-teach, or at least assist in. The course is about the role of the mass media in society. In the lectures, Rich Lodewyk and I do our best to enhance the textbook readings by providing additional examples or even alternative points of view to what's in the text. Unfortunately, most often this material is simply supplementary to the text, as there's rarely much discussion in the classroom, though I try to initiate some when I'm leading the class.

McKeachie says the relationship between what's brought up in class and what's in the text should be "interdependent," meaning that students must know what's in the book in order to understand what's discussed in class, and they must attend to lecture and participate in classroom discussion because it will help them better comprehend the information from the textbook. I agree that this is exactly the balance every teacher should strive for.



The challenge for graduate teaching assistants, like those of us in COMM 702, is that it takes a lot of time and energy to create a lesson of discussion and lecture that achieves this interdependent relationship with the text. And if the assistant doesn't have any or much teaching experience, it can seem an especially daunting task. Teachers of COMM 110 have a leg up, perhaps, in that the COMM 690 graduate teaching seminar provides them with sample lesson plans, activities, and other tools to help. But assistants in the other disciplines seem to be left to their own devices most of the time.



Apart from lectures, McKeachie writes about the pedagogical value of classroom discussion. My favorite courses from my undergraduate days were those in which discussion was a fixture. As most of these courses were in the communication field, perhaps it's no coincidence that I decided to make communication one of my majors.



McKeachie offers a number of useful ways for launching classroom discussion. For one such way, introducing a controversy, he cautions about the dangers of playing "devil's advocate," saying it may lead some students to lose trust in the teacher. This is curious, as I think there are a number of ways in which a teacher can provoke students and challenge their viewpoints without making the students believe that the positions the teacher raises are his or her personal beliefs. Apart from stating explicitly that the teacher is playing devil's advocate, the teacher can also simply preface an argument with something like, "There are those who would argue that..." The best example I've seen of this was in a large introduction to political science course I took at the University of North Dakota. I later learned that this teacher was a Democratic state legislator, but in class she managed to always seem non-partisan.



In my estimation and experience, the greater danger with this kind of discussion is that students may become so embroiled in a controversy that they are unable to move on. When this happens, a discussion can spiral out of control and lead to animosities among students. A later section of the chapter, "Handling Arguments and Emotional Reactions," recognizes this. McKeachie suggests perhaps asking students to switch sides and argue the opposing viewpoint, but in practice I think this approach would likely not work, as young students often are unable to distance themselves intellectually from their personal views enough to see another perspective as equally rational.



In such circumstances, the greater value of discussing a controversy may be less in trying to help students craft arguments on both sides, and more in helping students perceive the difference between what may be objective facts and the often subjective interpretation of those facts. In other words, the value may be in helping students develop critical thinking skills.



The most effective and lively discussions are those in which most, if not all, of the class is participating. In thinking about how one might get unresponsive students to participate more in class discussions, I wonder if technology can help. It used to be that if listeners to a radio talk show wanted to join an on-air discussion, they had to get through on the phone. This is usually very difficult, as popular shows (especially nationally broadcast programs) will likely have hundreds if not thousands of people trying to call in at the same time. The most common alternative is for listeners to e-mail questions and comments.



In the classroom setting, students may be shy or unable to get a word in, especially if there is a discussion monopolizer (other than the teacher). What if students were invited to share questions and comments with the teacher via e-mail or even SMS during class? Such an invitation would bring its share of pitfalls. If you invite students to use their computers and cell phones in class, much of that use may very well have nothing to do with your class. But in certain circumstances and certain types of classes, it could be one way for even the shyest students to participate. An added advantage is that students could ask potentially embarrassing questions (e.g., "I didn't understand that definition. Could you please go over it again?"). The teacher could answer these questions without identifying the questioner or even without acknowledging that a question was asked.


McKeachie asks at the end of chapter 6 whether students should take notes. You'll recall my writing at the beginning of this blog entry that I didn't take notes in my 8th grade history class. To review for a test, I simply re-read the chapters and the chapter summaries in the textbook. I didn't take notes in that class because the lecture provided nothing new or different from the text. Even now, I only take notes when I sense that the lecture or discussion is raising points that aren't in the text.


When I'm teaching, and I see students writing furiously, I actually get a bit frustrated. I agree that note taking is beneficial for students, but too often students are concentrating so much on writing down what they see and hear that they're not listening. I often muse that I could stand at the head of a class and simply state obvious facts known to any first-grader ("The sky is blue," "Dogs have four legs," etc.) and students would still scribble furiously.


I found a list of tips for improving classroom lectures. It was written by a professor at Allegheny College in Pennsylvania (see the link below). One of the pointers he gives is about wearing a costume to give a lecture. This can generate more laughter than anything else, if not done properly, but I've seen professors dress as a famous philosopher or historical figure and then try to be that figure during class. Professors can role play (allowing them, I suppose, to take positions without necessarily revealing their own personal opinions) and students can ask questions. It wouldn't work for everyone, but would be fun to try sometime, and can be a very effective way to teach students about the historical figure and his or her writings, philosophy, etc.




http://webpub.allegheny.edu/employee/e/epallant/coursehome/New%20York%20Times%20-%20FS101/how%20to%20lecture_files/lect2.ms.9jan96.pdf

Monday, September 7, 2009

Blog Entry #2

The Week’s Readings and My Chemistry Set

My parents bought me a chemistry set on my birthday one year. I think I was nine or ten years old. I was very excited about the gift and couldn't wait to start using it. I imagined myself mixing different chemicals almost at random in an effort to get thrilling, perhaps explosive, reactions. My mother forbade me to do this, as you might expect, and I was ordered to follow the instructions set out in a booklet that came with the set. The booklet contained dozens of simple, introductory “experiments” designed to demonstrate basic chemical properties and rules. As I began leafing through the book, though, I became increasingly disappointed as I realized that virtually all of the lessons required tools, compounds, and other items not included with the chemistry set. What this meant was that unless I (or, more accurately, my parents) were willing and able to invest a lot more time and money into my budding chemistry avocation, I wasn't going to be able to do much of anything.

I was reminded of this childhood episode, ironically, because after I finished the reading, I felt the opposite of what I felt then. Chapters 3, 4, and 6 of First Day to Final Grade offer many practical tips and suggestions that are both extremely helpful and, as they like to say in the intelligence community, actionable. That is, I could do virtually everything suggested in the book with little or no additional materials. Executing them only requires a little forethought and planning, which is a natural part of the teaching process, anyway.

The beginning of chapter 3 dealt with lesson goals, or objectives, but I'm going to write about this at the end of my entry because it's the only thing I didn't like about the readings this week and I don’t want to ruin the nice complimentary tone I’ve started with.

PowerPoint Prowess

I really like PowerPoint presentations--both as a student and as a teacher--so I read carefully what the authors of First Day to Final Grade had to say about PowerPoint, particularly what they see as its cons. They say that PowerPoint tends to move too quickly, that students are often too busy copying content to pay attention to the lecture, and that students who expect the slides to be made available after class may not pay attention at all.

I think I may have overcome these cons, at least somewhat. First, I told my students on day one that I would not make my PowerPoint presentations available on Blackboard or in any other way unless there was some extraordinary reason to do so. Second, as much as possible I use images and short phrases that evoke the topic of my lecture or provide some visual information, and avoid excessive text or other content that students would feel compelled to write down. For example, in the COMM 112 (Understanding Mass Media in a Changing Society) course I co-teach, we use the controversy over the novel The Wind Done Gone to highlight the issue of copyright infringement. My PowerPoint starts with a photo of the front cover of The Wind Done Gone. I then ask students if the name of the book sounds familiar or if they know about the controversy. Eventually I reveal a photo of the front cover of Gone With the Wind next to The Wind Done Gone. I then talk about the details of the controversy, after which I allow one line of text to appear under the book covers. This text summarizes what I just said and includes the main facts they should note. I then give students a moment to write this in their notes.

I don’t have any confirmation of this yet, but I think this style keeps students’ attention on the PowerPoint without distracting them with lots of text they might feel obliged to write down. And by using the animation features of PowerPoint, I control when the text I do include comes onto the screen. Creating this kind of PowerPoint presentation takes more time, but the results are worth it, I think, as the presentations are usually more fun to deliver and more engaging for the students.

Unfortunately, the nature of the content in my teaching demonstration this week won’t permit me to showcase this self-proclaimed PowerPoint prowess, but anyone reading this is welcome to come see me in another class and see it firsthand.


“Good Discussing”

Whenever I hear the word “discussion” used in the classroom context, I’m reminded of a history teacher I had one semester as an undergraduate. Dr. Gudmar Gudmansen was from Iceland and generally spoke exceptional English, but he did have a few quirky usage issues. My favorite was heard almost every class period after even the most perfunctory exchange of opinions with students. Dr. Gudmansen would sharply nod his head once and say “Good discussing.” I was never sure whether he was just mispronouncing “discussion” or whether he was using “discussing” as a gerund, the way one might say to a fellow oarsman on a canoe, “Good rowing!”

“Running a Discussion” is the title of chapter 4, which I really enjoyed and appreciated. I think I have an innate ability to conduct a productive, enjoyable class discussion, but the chapter introduced me to some wonderful variations that I eagerly hope to attempt. I particularly appreciated the distinction the authors made between discussion and task-based participation. I like to do task-based participation but have been calling it discussion.

Some of the task-based participation ideas, such as making lists and pairing-and-sharing, would be particularly effective activities for exam review days, which are part of the COMM 112 class I co-teach. My co-instructor tells me he has struggled to find interesting ways to review for the exams, so I’ll suggest one of these ideas to him. Chapter 6 talked about exam prep, which I used to see as pandering, as the authors describe in the second paragraph of page 96. But the rationale they give for exam reviews is compelling and, if done properly, exam reviews can improve students’ mastery of the material.

I also was very excited by the section on debate, which was in chapter 6, and would love to try to hold a debate in my COMM 110 class, perhaps as a way of introducing the persuasive speech. It could demonstrate to students the same kind of argumentation they should emulate in their speeches. The schedule and methods for the COMM 110 course are well regimented by the department, but perhaps I’ll have enough flexibility to try that.

Blooming Imprecision

My only real criticism of the readings concerns the section at the beginning of chapter 3 that discusses lesson objectives. In the authors' defense, they use the word "goal," not "objective," and I suspect they did so deliberately to avoid having to delve into Bloom's taxonomy and its verb-obsessed cognitive domains. But I think this is a disservice to teaching assistants, and thus ultimately to students.

The sample lesson plan objectives in chapter 3 are too imprecise. Largely due to my undergraduate pedagogical conditioning, I believe that teachers are well-served to craft objectives that target specific knowledge and skills they want their students to develop. Strong objectives make it easier for teachers to create activities appropriate to the lesson and are generally clearly measurable. Weak objectives, invariably, do not correlate as easily to activities and are not so easy to measure. For example, the book's first sample objective is "Help students adopt the appropriate convention for scientific writing by reviewing the proper format for a lab report."

I can imagine what "help students adopt" means, but it's equivocal. "Help" in this context connotes "encourage" or "suggest," which implies more choice in the matter than is probably available to the students. If scientific writing conventions are required by the course assignments, students must adopt them or their grades will suffer. Removing "help" from the objective remedies this, but we are still left with the word "adopt," which suggests more of a conversion in beliefs or attitudes than in behavior. The axiom that good objectives are clearly measurable raises the question: how would this teacher measure her students' shift in beliefs? It would be easier, surely, to see whether or not students actually use the conventions in their writing assignments.

Perhaps this initial lesson was simply intended to introduce the writing conventions to the students, in which case my admonition might sound pedantic. But the choice of verb in the objective not only describes what the teacher wants from the students, it clarifies for the teacher how she might teach that lesson. In this example, if the teacher expects this lesson to result in students writing according to scientific conventions, then simply reviewing lab reports may not suffice. In that case, I would suggest that a good objective would be "Students will be able to demonstrate proper use of (or utilize) appropriate scientific conventions in their writings." The lesson's activities could start with reviewing lab reports, but would have to go beyond that in order for students to master the conventions and show the teacher that they have mastered the conventions.

The sample objectives also contain the activities by which the content will be taught. In the example I've been writing about, the objective reads that the class will review properly formatted lab reports, presumably so students can see what scientific writing conventions look like. Including the activity in the objective implies that the activity is part of the objective. Activities should be pulled out of the objectives so there's no confusion. An activity is a means (or part of a means) to an end (the objective), not the end in itself.

I'm pleased that in class we're talking in more detail about objectives and Bloom's taxonomy. I recall from my undergraduate teaching methods courses how rewarding the objective writing exercises were and I'm hopeful that my colleagues in this class will feel similarly stimulated. Learning how to write good objectives can be difficult and even tedious, at times, but ultimately good objectives make teaching easier and more effective by being clearly relevant to class activities and being clearly measurable.

In my research for this week’s entry, I came across another good summary of the taxonomy, along with lists of verbs that can be incorporated into learning objectives. This one comes from the University of West Florida. I’ve provided a link to this document at the end of this blog entry.

Grading and Evaluation

This week COMM 110 (Fundamentals of Public Speaking) students are delivering their first speeches. In our TA seminar last week some effort was made to orient us to grading and evaluation, but I think it was inadequate. It may be that the department (perhaps the university) intends for each professor and graduate teaching assistant to set his or her own evaluation standards. I'm not very comfortable with this personally, and I’m not alone. The Schreyer Institute for Teaching Excellence at Penn State University urges departments to work with their graduate teaching assistants to calibrate their grading, so that evaluation is as consistent as possible throughout the department. A link to the Schreyer Institute’s website can be found below. I’ll have more to say on this when we get to grading strategies, testing and assessment later in the semester.

Another stab at Bloom’s taxonomy: http://uwf.edu/atc/design/PDFs/bloomtaxonomyverblist.pdf

Schreyer Institute for Teaching Excellence website: http://www.schreyerinstitute.psu.edu/