In Defense of Hand Washing: Lessons from Healthcare for College Teaching

(the title’s a play on the recent spate of “in defense of lecturing” articles)

Healthcare in the 19th Century

Imagine going to a hospital where few if any of the doctors have any professional training (they are essentially amateurs).  The only evaluation of their performance might be how well their colleagues like them and how well patients like them (those who survive, that is). Imagine that over two thirds of the patients who go to this hospital don’t survive, and that some of the doctors even think that’s a good thing – the weak are being “filtered” out in Darwinian fashion, leaving only the strongest.  When someone discovers that more patients would survive if doctors washed their hands between operations, most of them scoff, and that person ends up in a mental hospital:

“Despite various publications of results where hand-washing reduced mortality to below 1%, Semmelweis’s observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community. Some doctors were offended at the suggestion that they should wash their hands” (ref)

Do as I say, not as I do

A more modern example (which actually happened): after being told that they need to wash their hands, nearly all doctors would say that yes, they always wash their hands – but when observed, many would continue not to do so (at least outside the operating room).

Compare that to this recent study of college teachers who attended multi-day professional development workshops on active learning (which can increase learning, engagement, and retention in college courses).  The study was titled “What We Say Is Not What We Do: Effective Evaluation of Faculty Professional Development Programs” (pdf):

Following PD [professional development], 89% of the respondents stated that they made changes in their courses that included active, learner-centered instruction. In contrast, observational data showed that participation in PD did not result in learner-centered teaching. The majority of faculty (75%) used lecture-based, teacher-centered pedagogy, showing a clear disconnect between faculty’s perceptions of their teaching and their actual practices.

The Professionalization of Healthcare in the 20th Century

Over the last 100 years, healthcare became professionalized.  “Medicine was turning into a science, but the old art was still in place.”(ref)  The toolbox of techniques for helping people who were sick grew and evolved.  Doctors have to go through years of rigorous training now before being allowed to practice medicine.  There are professional standards, ethics, exams, etc.  There is extensive research on effective healthcare practices and strategies, although there is still much more research and development to be done.

“Expertise is the mantra of modern medicine. In the early twentieth century, you needed only a high-school diploma and a one-year medical degree to practice medicine. By the century’s end, all doctors had to have a college degree, a four-year medical degree, and an additional three to seven years of residency training in an individual field of practice—pediatrics, surgery, neurology, or the like. Already, though, this level of preparation has seemed inadequate to the new complexity of medicine. After their residencies, most young doctors today are going on to do fellowships, adding one to three further years of training in, say, laparoscopic surgery, or pediatric metabolic disorders, or breast radiology—or critical care. A young doctor is not so young nowadays; you typically don’t start in independent practice until your mid-thirties.” (ref)

K-12 teaching also became professionalized.  K-12 teachers must go through training and practica and earn a degree and certification before they can teach.  They have professional standards, regular observations and evaluations, and required ongoing professional development.  College teaching, however, requires no training in how to teach or in how students learn, and no ongoing professional development.  Faculty are hired first and foremost for their research skills.  Faculty are usually evaluated not by observations but by end-of-semester student evaluations (which may actually negatively correlate with student learning) and by peer opinions.

Checklists, Accountability, and Transparency in Healthcare

Despite the enormous advances in healthcare over the last century, there is still considerable room for improvement.  Peter Pronovost developed checklists for doctors and nurses to follow before common medical procedures, which ended up saving thousands of lives (as documented in Atul Gawande’s book The Checklist Manifesto and the New Yorker article on which it is based):

“The fundamental problem with the quality of American medicine is that we’ve failed to view delivery of health care as a science. The tasks of medical science fall into three buckets. One is understanding disease biology. One is finding effective therapies. And one is ensuring those therapies are delivered effectively. That third bucket has been almost totally ignored by research funders, government, and academia. It’s viewed as the art of medicine. That’s a mistake, a huge mistake. And from a taxpayer’s perspective it’s outrageous.” (ref)

“Dr. Marty Makary is co-developer of the life-saving checklist outlined in Atul Gawande’s bestselling The Checklist Manifesto. As a busy surgeon who has worked in many of the best hospitals in the nation, he can testify to the amazing power of modern medicine to cure. But he’s also been a witness to a medical culture that routinely leaves surgical sponges inside patients, amputates the wrong limbs, and overdoses children because of sloppy handwriting. Over the last ten years, neither error rates nor costs have come down, despite scientific progress and efforts to curb expenses. Why? To patients, the healthcare system is a black box. Doctors and hospitals are unaccountable, and the lack of transparency leaves both bad doctors and systemic flaws unchecked. Patients need to know more of what healthcare workers know, so they can make informed choices. Accountability in healthcare would expose dangerous doctors, reward good performance, and force positive change nationally, using the power of the free market. Unaccountable is a powerful, no-nonsense, non-partisan diagnosis for healing our hospitals and reforming our broken healthcare system.” (ref)

Pronovost also argues for more transparency, sharing, and cooperation in healthcare:

“If health care professionals are going to move away from a system of competition to one of cooperation, we must be humble enough to acknowledge our shortcomings and support one another in a path towards improvement.” (ref)

Open source is another component for increasing transparency with the aim of improving healthcare:

“While health care providers should step up and lead the effort to design safer systems, health care IT companies can support such efforts by agreeing to share their data on an open-source platform that allows medical devices to communicate with one another. They can also remove barriers that impede clinicians from writing life-saving, cost-reducing, patient experience-enhancing applications.” (ibid)

Warwick and Berwick (who helped improve the survival of cystic fibrosis patients, and who are described in another Atul Gawande article on what we can learn when healthcare data is shared) summed it up:

“To fix medicine, Berwick maintained, we need to do two things: measure ourselves and be more open about what we are doing.”

Compare this to the argument by Lee Shulman and others from the Carnegie Foundation for the Advancement of Teaching for a “teaching commons” – a space for faculty to share and discuss their teaching practices openly, as a first step toward improving teaching effectiveness:

“making university teaching “community property” or an intensely collegial and research-informed process in higher education” (ref)

The Carnegie Foundation for the Advancement of Teaching is now actively testing “improvement science” strategies for tackling hard problems in teaching, based in part on the Institute for Healthcare Improvement’s work.

Beyond Evidence-Based Practice

Evidence-based practice (or research-based practice) is another common argument for improving both healthcare and teaching, but there are some cautions.  One is that practitioners need to be able to understand, interpret, and critique research, as well as conduct their own.  Another problem is the limited generalizability of much of the research out there.  In education, for example, several popular books are based on research that “includes participants who have no specific interest in learning the domain involved and who are also given a very short study time” (ref), with tips largely focused on low-level rote learning and memorization, rather than on classroom-based and design-based research that aims to improve learning instead of just comparing A vs. B for a statistically significant difference.  Books and articles are also usually several years out of date by the time they are published:

“The buzzword for clinicians these days is “evidence-based practice”—good doctors are supposed to follow research findings rather than their own intuition or ad-hoc experimentation. Yet Warwick is almost contemptuous of established findings. National clinical guidelines for care are, he says, “a record of the past, and little more—they should have an expiration date.” (ref)

The Need for Empathy, Persistence, and Creativity

“We are used to thinking that a doctor’s ability depends mainly on science and skill. The lesson from Minneapolis is that these may be the easiest parts of care. Even doctors with great knowledge and technical skill can have mediocre results; more nebulous factors like aggressiveness and consistency and ingenuity can matter enormously.” (ibid)

“Warwick’s combination of focus, aggressiveness, and inventiveness is what makes him extraordinary. He thinks hard about his patients, he pushes them, and he does not hesitate to improvise.” (ibid)

Compare that with Michael Wesch’s article “Why Good Classes Fail”:

“the common thread I see throughout all the failures is quite simply a lack of empathy. There is no authentic encounter with students, or what Martin Buber called “a genuine meeting.” When we use all the right methods, and we still fail, it is most likely because we are encountering our students as objects and not as the rich and complex individuals that they are.”

In Defense of Active Learning

In one of the most recent opinion pieces defending the value of lectures (written no doubt as a counter-reaction to the recent meta-analysis showing the benefits of active learning over lecturing), I commented without critiquing lecturing itself.  It’s not about whether a professor who lectures is “good” or “bad.”  But how can we evolve from a good/bad mindset to a more engineering mindset of “better”?  What if we focused on improving student learning and engagement, and on topping our own teaching effectiveness, rather than going with our gut and what “feels” like it works just fine?  Let’s just say the response I usually get from anonymous online commenters (illustrating the online disinhibition effect) makes me want to go wash my hands.


