Since at least the invention of BASIC and Logo in the 1960s, people such as Seymour Papert have argued that anyone can and should learn how to program, and even make their own software applications. The argument is that we should think of it as a new literacy, a 4th “R” of sorts – computational thinking, or multimedia authoring, or just simply, programming. For me, it’s about knowing more than just how to make a PowerPoint presentation or a web page, for example, and learning how to make an interactive animation or game or simulation and so forth. I’ve blogged about it before (Programming: The New Literacy) and wrote a chapter on the topic (Toward a Nation of Educoders).
The most recent work in this area is a short book by Douglas Rushkoff titled Program or Be Programmed, and it was originally a short talk (here’s the video, along with a newer video made after the book).
So I went into the book expecting to agree with most of the points. I’ve only gotten through the beginning pages so far and I do agree with many points, but there are also some problematic ones, especially relating to Rushkoff’s ideas about education and learning (which is not the focus of the book), and philosophy of technology.
First here are some quotes from the early part of the book to get an idea, including some relating to education:
p.7 “When human beings acquired language, we learned not just how to listen but how to speak. When we gained literacy, we learned not just how to read but how to write. And as we move into an increasingly digital reality, we must learn not just how to use programs but how to make them. In the emerging, highly programmed landscape ahead, you will either create the software or you will be the software. It’s really that simple: Program, or be programmed. Choose the former, and you gain access to the control panel of civilization.”
p.8 “the people programming them take on an increasingly important role in shaping our world and how it works”
pp. 12-13 “The Axial Age invention of the twenty-two-letter alphabet did not lead to a society of literate Israelite readers, but a society of hearers, who would gather in the town square to listen to the Torah scroll read to them by a rabbi. Yes, it was better than being ignorant slaves, but it was a result far short of the medium’s real potential. Likewise, the invention of the printing press in the Renaissance led not to a society of writers but one of readers”
p. 13 “Computers and networks finally offer us the ability to write. And we do write with them on our websites, blogs, and social networks. But the underlying capability of the computer era is actually programming—which almost none of us knows how to do.”
So far so good. Now on education:
p. 15 “elementary school boards adopt “laptop” curriculums less because they believe that they’ll teach better than because they fear their students will miss out on something if they don’t.”
This sounds like the same argument Larry Cuban made before (see this post), that schools only get educational technology and software to be “hip” and “with the times.”
The book does not seem to be based on any research-guided understanding of how people learn. It’s very centered on a model of a disembodied brain controlling our behavior (see my previous post on embodied cognition), and he also seems to share Nicholas Carr’s assertion that technologies like Google are making us stupid:
“Our brains adapt to different situations.” “The outsourcing of our memory to machines expands the amount of data to which we have access, but degrades our brain’s own ability to remember things.”
A recent article surveyed numerous scholars and the majority of them thought Nicholas Carr was wrong – Google and similar tools are helping make us smarter.
He also completely buys into the digital natives vs. digital immigrants idea (refuted by many), including the idea that the brains of digital natives are “wired” differently:
p.32 “A brain learning on computers ends up wired differently than a brain learning on textbooks.”
Rushkoff argues that technology is a part of us and an extension of us, and yet he somehow simultaneously argues that we shouldn’t stay connected with technology:
p. 37 “She is already violating the first command by maintaining an “always on” relationship to her devices and networks.”
Rushkoff keeps mentioning the Torah and religion over and over again, and the role of technologies/media in shaping religion – sort of an extreme version of Eric Havelock’s Preface to Plato and others’ ideas, I guess. (Like most of these types of books & journalism, there are few to no citations.)
He is also probably of the belief that online education is inherently inferior to face to face education (which is not the case, see these meta-analyses and other potential misconceptions about learning and technology):
pp.41-42 “But those back-and-forth exchanges are occurring at a distance. They are better than nothing—particularly for people in unique situations—but they are not a replacement for real interaction.”
Basically, I’ve seen this type of book so many times I can’t count. It’s a book about some new X, from the perspective of some person who has never done X, doesn’t like X, or was born long before X. X might be video games, it might be open education, it might be embodied cognition, distance learning, educational technology, open access research and scholarship, web 2.0, etc.
Students in my advanced instructional design course (login as guest) created some narrated presentations in VoiceThread at the end of the semester. They are on topics related to faculty development, teaching and learning, multimedia, etc.:
- Service Learning
- How People Learn framework
- Backward Design
- Threshold concepts
- Animations vs. Diagrams
- Multimedia learning
The presentations end with links to more resources, but apparently VoiceThread doesn’t make URLs in a slide clickable. Comment here and I can post the links in a clickable format.
I had some thoughts for a new open educational journal last week, especially in light of the discontinuation of the Innovate journal. I even tested out Google Knol as a hosting platform. It allows for open peer review and more transparent interactions, plus it has zero costs and zero institutional obligations. Some medical journals are starting to use it for quicker research dissemination. Here is a fake google knol journal I created to test it out, feel free to play with it or contact me if you want moderator privileges.
Anyway, while I was conked out after a dental procedure this morning some education/edtech folks on twitter started discussing creating a new journal so I thought I’d share my own notes. Perhaps the anesthetics still haven’t worn off. My notes are here in a Google Doc that anyone can view and edit:
I welcome any feedback or thoughts. I do have some more notes & thoughts on procedural issues, like editor guidelines and decisions, author guidelines, etc. These best practices guidelines from DOAJ on creating new open access journals are very helpful.
When I started this blog 8 years ago, it was described as ‘eclectic’. Part of that is because, like most blogs, it is a slow form of stream of consciousness, blogging about stuff that interests me. But also, as a researcher, you look for theoretical connections between things that on the surface may look very different. One such connection that has been a focus of some of my research is the application of embodied cognition research and theory to explain various anomalies in educational research and to inform new techniques for instruction and educational technology, as described in a recent post about an upcoming AERA symposium on embodied cognition and education I am organizing.
For example, researchers have found that attending to student gestures, or using gestures while explaining concepts or procedures (for example, in a math class), helps student understanding, and also that having students interact with and physically manipulate models (such as acting out a story or manipulating a simulation) helps students’ reading comprehension or physics understanding.
But the first time I came across this connection between embodied cognition and learning was 15 years ago, when working on an undergraduate thesis about physics misconceptions (“intuitive physics”). I wrote a review of research on the area. This is actually a very broad area, so it ended up being a massive task for me to review it. Well, massive for an undergraduate.
The specific anomaly that I came across involved a test question about dropping an object from a plane. In this problem you see a diagram or animation of an airplane traveling from left to right. The airplane drops an object, say a heavy ball or box or bomb or whatever. The task is to draw or identify the “correct” path the object takes as it falls to the ground. The student is usually supposed to ignore air resistance, but that doesn’t really make much difference to student answers. See the crude diagram below:
The misconception (identified by Michael McCloskey and other researchers in the 1980s) is that a significant percentage of students think the object drops straight down. In fact it follows a path like the solid line in the diagram. The form of the question didn’t matter; whether diagram or animation, the misconception is still seen. The theory, or explanation, for this misconception was that it is a visual perception error or illusion based on our past visual experience. From the perspective of the airplane (imagine you are a bomber in the plane), neglecting air resistance, the object would appear to fall straight down from your perspective.
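The Newtonian prediction itself is a one-line calculation. Here is a minimal sketch of it (my own illustration, with assumed values for the plane’s speed and altitude, not numbers from the studies): neglecting air resistance, the dropped object keeps the plane’s horizontal velocity, so from the ground it traces a forward parabola and lands far ahead of the release point, while from the plane it appears to fall straight down.

```python
import math

g = 9.8      # gravitational acceleration, m/s^2
v = 100.0    # plane's horizontal speed, m/s (assumed value)
h = 500.0    # release height, m (assumed value)

t_impact = math.sqrt(2 * h / g)   # time for the object to fall height h
x_impact = v * t_impact           # horizontal distance it travels meanwhile

# Seen from the ground, the object lands about a kilometer ahead of the
# drop point, tracing a forward parabola:
print(round(x_impact))            # 1010 (meters)

# Seen from the plane, it falls straight down: the plane covers the same
# horizontal distance in the same time, so the relative displacement is zero.
print(v * t_impact - x_impact)    # 0.0
```

Both the “straight down” intuition and the correct answer are thus true at once; they just belong to different reference frames, which is what makes the perceptual explanation plausible.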
What is the anomaly in this research? In the animated form of this problem, students watch the object fall along one of these paths (both correct and incorrect) and then are asked to draw the path that the object took. In all the drawings of students with the misconception, they drew the object as falling “behind” the actual path it took. So for example, after the “correct” animation, students drew the object as falling straight down, and after the ‘straight down’ animation, some students even drew the object as falling backwards (moving to the left). The anomaly, however, is that when students were shown an animation of the object falling ahead of the plane (the red dotted path above), students drew the path correctly, with no misconception. Suddenly, they “saw” the path correctly.
You could still explain this with the visual perception theory by adding a new constraint. Perhaps objects that move ahead of another object are visually segregated from the other object and no longer perceived from the perspective of the other object. Another theory though is that in this case the object appears to “shoot out” from or be “thrown” from the plane, and not just passively dropped. This is similar to how we drop wadded up paper balls into trash cans, for example. We don’t walk by the trash can and passively drop it, calculating the relative velocity and height to get it in the trash can. We throw it in the can. We actively control it.
This research is from the 1980s, before embodied cognition became widely known, but there were (and are) perceptual-motor theories of perception that can perhaps better explain this and related phenomena, such as the motor theory of speech perception. These theories revolve around the idea that what and how we visually (and aurally) perceive is connected to our actions, embodied capabilities, and embodied experience, not just purely visual experience. Take the McGurk effect, for example: you watch a video of a person visually speaking one thing, but the audio is of something similar but different. We tend to hear what the person is visually pronouncing with their mouth, or at least our auditory perception is influenced by what we see the person physically saying.
A second physics education example is perhaps a little more clear. This is the curved tube problem, shown below:
A ball travels through a curved tube and exits out the other end. The misconception is that some students believe the ball will continue to travel in a curvilinear path after it exits the tube. From Newtonian physics, we know the ball should travel in a straight line absent external forces. This misconception can’t be explained as a purely visual perception error; we have no visual perceptual experience or perspective that would explain it. But from motor experience, we have experience controlling the motion of objects with our actions. We sometimes believe we can continue to control or influence them even after contact – such as leaning your head or body when playing a videogame or playing pool, or the classic video/photo of the hitter waving his arms to make a baseball stay fair:
And indeed when participants were given another version of the curved tube in which they manually controlled the ball (or in this case, a puck), there was evidence for this. A curved path was drawn on top of a table. Participants were to push or “shoot” a puck through this path without it touching the lines. You could do this by pushing it diagonally through the curved path, but many “wound up” the puck by moving their arm and hand in a curved path before releasing the puck, with hopes it would continue curving through the path.
A third example of the role of embodiment in physics conceptions comes from what were called microcomputer-based labs (MBL). This is when sensors (such as optical distance sensors) are combined with computers to allow things like pushing a car back and forth along a track while the computer graphs its motion in real time (position vs. time, or velocity or acceleration vs. time). It turns out this is an extremely effective and fast way to help students understand how to interpret graphs of motion. Before such instruction, many students have a “graph as picture” misconception. If students are shown a graph like the one below and asked to describe what the car is doing, many might say that it is a graph of a car going up and over a hill. Actually it is a graph of a car moving faster and then slowing back down to its original speed.
Research has shown that if students can physically manipulate the motion of the car and see the graph change in real-time, they learn the concepts very fast (less than 20 minutes in some studies). If however, the feedback is delayed (by as little as a few seconds) or if students watch a video of the car moving instead of actively controlling it, it becomes much less effective.
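The “graph as picture” error can be made concrete with a toy example (my own construction, not data from the MBL studies): a velocity-time graph that rises and then falls looks like a hill, but the position it implies never decreases – the car only ever moves forward, first faster and then slower.

```python
dt = 0.1
times = [i / 10 for i in range(101)]     # 0 to 10 seconds
# speed ramps up for 5 s, then back down: the "hill"-shaped graph
v = [min(t, 10 - t) for t in times]

# integrate velocity to recover position
x = [0.0]
for speed in v[1:]:
    x.append(x[-1] + speed * dt)

# position never decreases: the car is not going up and over a hill,
# it simply covers ground more quickly in the middle of the run
print(all(b >= a for a, b in zip(x, x[1:])))   # True
```

Physically pushing the car while watching the graph update gives students this same insight directly, through their own motor actions rather than through symbolic reasoning.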
In more generally-relevant recent research on animations/videos and diagrams, people are finding that animations and videos depicting dynamic processes aren’t inherently more effective than static diagrams for learning purposes. In fact on average, diagrams have a slight edge. Part of this is because with diagrams you can take your time, explore and revisit different parts of the diagram, “mentally animate” what’s going on, and so forth. When watching a video or animation, it may go too fast for you (or too slow), you might miss or not understand part of it. However, what has been shown to be even more effective than either option (video/animation or static diagrams) are user-controllable diagrams (or animated, controllable simulations). If you let students control the movement of an object, for example, or the changing of a variable, their scores and other measures of understanding are much higher than from passive animations or static diagrams alone.
There are plenty of other examples of the role of embodiment in physics education (like understanding pulley systems), reading education, music education, math education, and other areas of science education (like biology), etc.
I already blogged about this matter 3 years ago in a post entitled “The State of Educational Research & Development.” But a few recent things made me think of it again:
- @newsweek tweeted for us to tell them our thoughts on the American education system in 6 words or less. My first thought was that “education needs more r&d”, because, as my previous post mentioned, medicine and engineering and related areas spend 5-10% on research, whereas in education that percentage is closer to 0.01%. And most of that minuscule amount is spent on basic research, not development. NSF doesn’t even fund much K-12 or higher education curriculum or software development anymore.
- The U.S. department of education recently released its National Educational Technology Plan for 2010, authored by many researchers with whom I am familiar. I’ve only scanned it so far, but I haven’t seen much emphasis on development, just research. The only development ideas I’ve seen so far are very top-driven solutions, the “Grand Challenges” described in the end section on R&D: for example a huge tutoring system, a system for delivering assessments, a school data sharing and mining system, etc. I’m not seeing any bottom-driven or domain-specific ideas or more specific solutions. I posted a comment on that page similar to this post.
- Tony Bates blogged about “the state of e-learning in 2009” and noted:
My biggest disappointment this year…has been with open educational resources…what are we getting? Digitally recorded 50 minute classroom lectures and digital textbooks. What we are not getting are materials designed from scratch for multiple use…And there is still so little of it. What I would like to see are many thousands of short modules
- Also as I wrote in another earlier post on “50 examples of the need to improve college teaching,” software is key. The National Center for Academic Transformation (NCAT) has helped people redesign their college courses to be much more effective and efficient and cost less. The key to that is the use of interactive software: “successful course redesign that improves student learning while reducing instructional costs is heavily dependent upon high-quality, interactive learning materials” (ref). That may work well for common, large enrollment courses for which there is already a bunch of software available, but what about the rest of the courses? What about all of K-12, too?
- I recently wrote a chapter titled “Toward a Nation of Educoders” about how if we could make it easier for students and teachers to develop interactive software (such as animations and games and interactive websites), perhaps this would help alleviate this problem, make it less formidable and daunting, financially and timewise. This is related to the “computational thinking” (pdf) and “computational literacy” push seen in computer science. We should look at programming as the new 4th “R”, a new literacy that students and teachers need in today’s world. A couple years ago I blogged about this and an article by Marc Prensky: “Programming: The New Literacy.”
I’ll be giving just one talk at AERA this year, and hosting a symposium session. Both are related to the applications of embodied cognition research and enactivism to education.
- Embodied and Enactive Approaches to Instruction: Implications and Innovations. SIG-Learning Sciences. Scheduled Time: Mon, May 3 – 2:15pm – 3:45pm Building/Room: Sheraton Denver / Governor’s Square 14.
- Discussant: James Paul Gee (Arizona State University)
- Chair: Douglas L Holton (Utah State University)
- Participant: Dor Abrahamson (University of California – Berkeley)
- Participant: Mark Howison (University of California – Berkeley)
- Participant: Robert Goldstone (Indiana University)
- Participant: David Landy (University of Richmond)
- Participant: Qing Li (University of Calgary)
- Participant: David Birchfield (Arizona State University)
- Participant: Mina Catherine Johnson-Glenberg (ASU)
- The purpose of this session is to explore the implications of enactivism and embodied cognition research for educational design and research, as well as share innovative instructional techniques and learning environments inspired by research on embodiment.
- Constructivism + Embodied Cognition = Enactivism: Theoretical and Practical Implications for Conceptual Change. SIG-Constructivist Theory, Research, and Practice. Scheduled Time: Sat, May 1 – 2:15pm – 3:45pm, Building/Room: Sheraton Denver / Plaza Court 3.
- Part of the symposium: Theoretical and Practical Frameworks for Understanding Learning
- The objective of this paper is to explore specific theoretical and practical implications of recent research on embodied cognition and enactivism for the design of effective learning environments, especially those targeting conceptual change. The ultimate goal is to illustrate how enactivism and embodiment meet the criteria that often define scientific progress, and thus can help advance educational research and development and constructivist theory.
There often seems to be a tension between teachers and new technologies. It helps me to step back and think about technology more broadly. Almost 20 years ago I first ran across a book by Don Ihde, philosopher of technology, that still influences my views on the topic. In the book, Technics and Praxis, first published in 1979, Ihde noticed that technologies simultaneously amplify our capabilities (like a telescope extending how far we can see) and reduce our capabilities (a telescope also restricts the field of view). Ihde refers to this amplification/reduction structure as an invariant aspect of all human-technology experience.
So you can look at anything from the point of view of how it constrains actions and capabilities, yet also amplifies them. That sounds a lot like what we do in education all the time. We are guiding students in ways that may subtly constrain their actions in some ways, yet expand their capabilities in others. Note this is different from a ‘transmission’ view of education, delivering ‘stuff’ (knowledge) to students, who fill up with that knowledge. Instead, students evolve through education just like athletes get better through practice and training, or like how technologies evolve us as a society. The essential part of education isn’t the content, or stuff, being delivered. As Marshall McLuhan said, “Disregard the content and concentrate upon the effect.” “McLuhan describes the ‘content’ of a medium as a juicy piece of meat carried by the burglar to distract the watchdog of the mind. This means that people tend to focus on the obvious, which is the content, to provide us valuable information, but in the process, we largely miss the structural changes in our affairs [actions/capabilities] that are introduced subtly, or over long periods of time.”1 What Ihde and later researchers might add is that by “concentrate upon the effect,” we are talking about the effect on actions. This is the idea of embodiment: there is no such thing as “knowledge which cannot be represented by actions.” “The content is the audience” (McLuhan).
Teaching is considered many things: a craft, an art, a science, a form of design, etc. It does involve that essential aspect of amplifying and constraining students’ experiences and abilities, so teaching can be viewed as a technology. Other technologies also aren’t devices but are simply human practices or inventions. Language is a human invention that amplified our capabilities yet also added new constraints. So technology does not always have to involve a “device” or “computer”; it has to involve constraining and amplifying actions and capabilities. One may argue this makes the definition of technology too broad or too weak, but it’s just one view, and most every term in education has multiple, differing viewpoints or definitions.
Perhaps this point of view of looking at teaching as a form of technology would amplify certain aspects of teaching and teachers that have been ignored:
- It might focus people more on the original, complex, and unique aspects of teaching that are not matched by computer technologies: the sensitivity to students’ non-verbal and emotional responses, for example, or caring for students. Computers are still very far from ever replacing a real teacher, but that doesn’t mean they have no place in the classroom.
- Computers can be seen as just another technology in an already technology-rich educational experience, a gradual evolution of existing educational practices and technologies: an extension of the teacher, for example, or a replacement for the textbook, not a threat to teachers or a stark change to schooling. Teaching evolves too, with or without the aid of devices and computers. And many of the most popular “newfangled” technologies are merely gradual evolutions of the technologies and practices that have long been a part of schooling: netbooks instead of textbooks, smartboards instead of whiteboards or chalkboards, virtual field trips instead of or in addition to real field trips, etc.
- Ideally, teachers would be seen more as designers and engineers (as Dewey argued in 1922), and would see themselves as engineers and designers (rather than victims of larger forces out of their control). Teachers would then be respected more for the complexities and constraints they deal with every day and the contributions to improving society they make every day, just like people in other fields. Teaching itself could also be viewed as something that is continually evolving and being refined:
“Herein, I reflect on Dewey’s notion of “education as engineering”. Considering the importance of the use of tools in education, I claim that education could, in one sense, be seen as an engineering science. Engineers are trained in design, especially in artifact design, and in understanding and improving complex systems. They should be trained to understand that humans are also part of the systems that they work with. Thus, approaches and knowledge from the perspective of engineering science and the philosophy of technology can contribute to the understanding and development of education.” (Bernhard, 2009)
- As Ihde argued, we tend to focus on how technologies amplify our capabilities, and ignore how they are simultaneously constraining them (although some look at only the constraints and not amplification). Focusing on both sides of the coin can give us a more balanced view of teaching and technology when considering the effects on students. Also, by focusing on the amplified/reduced actions and uses and effects of technologies rather than just the physical structure of technologies (devices or computers), we might discover better analogies and explanations for better understanding and using educational technologies. For example, Doug Johnson asked us to consider whether we’d make the same arguments for banning pencils from the classroom that we sometimes make for banning cell phones. See also the funny Adventures in Pencil Integration blog, with “one-to-one pencil to student units” and “slate-enhanced learning.”
Regardless, there is still much use and much room for evolving the discussion of our philosophies of teaching, technology, and learning in education. Philosophy of education, technology, and so forth aren’t a done deal that was settled decades or centuries ago. Theories, too, whether implicit or not, amplify certain aspects of how we view the world and constrain or hide other aspects, and they need to evolve as well.
Design is the process of going from function to structure. There is some purpose, or goal, or effect on the environment desired (a function), and structures are created or organized to achieve that function. See more about structure-behavior-function models of systems in this post and more about design generally on this What is Design? page from the Design in the Classroom site. There has been much written about the process, method, or steps of design, including various models and design cycles.
There are of course numerous forms and types of design – engineering design, web design, software design, architectural design, instructional design, visual design, interaction design, and so on. Design Studies is a journal that covers design broadly, and there are numerous journals devoted to domain-specific areas and forms of design, such as Instructional Science and Educational Technology, Research & Development (instructional design), Artificial Intelligence for Engineering Design, Analysis & Manufacturing (engineering design), … There’s even a kids’ show called Design Squad that covers design more broadly as well.
What has not been well researched in virtually all areas of design are the misconceptions or preconceptions and conceptual hurdles people have when designing or learning to design (especially beginners and students, but also experienced designers, as will be illustrated below). Misconceptions have been well researched in areas such as science education (see this book for example) and history education (see this book), but not in design areas such as engineering, web design, or even instructional design. Why is this important? The book How People Learn highlights 3 key findings from research on learning and teaching, and #1 is the need to identify and confront the initial knowledge and misconceptions students have when learning a new subject or skill:
“Students come to the classroom with preconceptions about how the world works. If their initial understanding is not engaged, they may fail to grasp the new concepts and information that are taught, or they may learn them for purposes of a test but revert to their preconceptions outside the classroom”
“Teachers need to pay attention to the incomplete understandings, the false beliefs, and the naive renditions of concepts that learners bring with them to a given subject.”
So, with all that in mind, below are just a few examples of misconceptions about design in various domain areas that I’ve found. Like I said, there is not much actual research or data out there, just some anecdotal resources – I welcome any comments with other examples of misconceptions about design, and perhaps it is an area you or I or others will explore further in the future.
Software Design – Anti-Patterns
Actually this is one design area in which there is a huge amount of anecdotal evidence for misconceptions. See this list of anti-patterns, for example. Whereas computer science design patterns are common solutions to problems that occur in software design, an anti-pattern is a commonly used yet often ineffective or counter-productive design pattern. Anti-patterns aren’t all really ‘misconceptions’, but just as noted in the How People Learn quotes above, it makes sense to be aware of anti-patterns so that you don’t repeat the same mistakes many other designers have made.
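As one concrete flavor of what such lists contain, here is a minimal sketch (my own example, not taken from the linked list) of the common “magic numbers” anti-pattern, where an unexplained literal hides the designer’s intent, alongside the usual fix:

```python
# Anti-pattern: a "magic number" – what is 1.0825, and where does it apply?
def price_with_tax_bad(price):
    return price * 1.0825

# Fix: name the constant so the intent (and the assumption) is explicit.
SALES_TAX_RATE = 0.0825   # hypothetical example rate

def price_with_tax(price):
    return price * (1 + SALES_TAX_RATE)

print(round(price_with_tax(100.0), 2))   # 108.25
```

Both functions compute the same number; the anti-pattern is a design flaw rather than a bug, which is exactly why catalogs of them are useful as a checklist of others’ mistakes.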
It would be interesting to see how many of these software design anti-patterns may apply to other forms of design as well. I’m not a computer science teacher nor was I a CS student, but I do not get the sense that anti-patterns (nor design patterns, for that matter) are covered or addressed in software engineering courses. Perhaps they should be.
Engineering Design

“Engineering design is the process of devising a system, component, or process to meet desired needs. It is a decision-making process (often iterative), in which the basic sciences, mathematics, and the engineering sciences are applied to convert resources optimally to meet these stated needs” (from p. 3 of the ABET criteria).
So far I’ve mainly only found one short paper about misconceptions in engineering design. Wendy Newstetter and others surveyed freshmen in an engineering design course (the ACM paper is not publicly accessible), and listed 5 main misconceptions about design:
- Ideation without substance – Students believe that design is coming up with good ideas. And that’s it. They forget about the rest of it – how to realize these ideas and evaluate them.
- Design arrogance – Students forget the constraints of the environment in which the design will reside. They “arrogantly” ignore the constraints of the user.
- Design fixation – (perhaps related to the idea of “functional fixedness” as well) Students tend to focus on the first solution that comes to mind. They stop considering alternatives.
- Extreme design – Students focus only on the very high level (function) or the very low level (structure), without moving between them in a formal manner and considering the giant gulf between the two levels.
- Design serialization – Students believe that design is a serial/linear process, ignoring iterative cycles, revisiting past decisions, and evaluating alternatives.
Instructional Design – ISD
People in the instructional technology/design field will recognize some of the above misconceptions. The first thing I teach in my advanced instructional design course is misconceptions about ADDIE (a popular formal instructional design model). Even the author of the most popular textbook that teaches ADDIE (Walter Dick) stated that students shouldn’t view ADDIE as a linear recipe to be followed (I don’t have the quote with me).
Rieber quotes Michael Streibel (1991, p. 12) about the difference between instructional design models (such as ADDIE) taught in the classroom and instructional design as it is actually practiced:
I first encountered the problematic relationship between plans and situated actions when, after years of trying to follow Gagné’s theory of instructional design, I repeatedly found myself, as an instructional designer, making ad hoc decisions throughout the design and development process. At first, I attributed this discrepancy to my own inexperience as an instructional designer. Later, when I became more experienced, I attributed it to the incompleteness of instructional design theories. Theories were, after all, only robust and mature at the end of a long developmental process, and instructional design theories had a very short history. Lately, however, I have begun to believe that the discrepancy between instructional design theories and instructional design practice will never be resolved because instructional design practice will always be a form of situated activity (i.e., depend on the specific, concrete, and unique circumstances of the project I am working on).
Some other misconceptions I’ve seen:
- Online courses are worse than face-to-face learning. Some students are adamant about this belief. Obviously there are going to be bad online courses and bad face-to-face courses; it depends on contextual factors (the teaching, the students, the environment, etc.). But a recent meta-analysis from the U.S. Department of Education actually found that students learned more online than face to face, and the Open Learning Initiative is another example where students learn better and faster with their online statistics course.
- Simulations are worse than the real thing. Again, of course there are some things you can only learn by doing the real thing (like some aspects of flying an airplane or fighting in combat). But study after study shows that you learn more and faster from a simulation than from the real thing. I don’t have all the references handy (but see this book chapter I wrote on how people learn with simulations), and it’s obvious why in most cases. A frog dissection simulation shows labels next to the body parts, for example. A flight simulator lets you rapidly re-practice tough techniques and challenging flying conditions. It’s not an either/or choice of course. You don’t want an airplane pilot who has only used a simulation, nor would you want an airplane pilot (unless very experienced) who’s never used a simulation.
- Lecture before simulation/experience. This is the belief that we should lecture students before letting them use a simulation or work on an open-ended problem, because they aren’t ready for the messy experience yet. Reversing that order is counter-intuitive to most students as well, but the research shows that if you let students explore first, even if they make mistakes, and then lecture on the concepts (rather than the other way around), students will learn much more. Again, if you think about it, it’s not so counter-intuitive. If you get the lecture first, you’re more likely to just tune it out like most lectures. If you get the messy experience first, you’ll start to formulate your own questions and ideas and so forth, and you are more ready and prepared to learn from the lecture that comes afterward.
- Topics should be broken into pieces and linearly sequenced, with the learning objectives stated first, going from simpler to more complex. This is related to the last one, and there are numerous writings about this. Putting the learning objectives first isn’t bad, but it isn’t written in stone either, nor are there learning objectives out there in the real world we are preparing students for. Almost 30 years ago Hermann Härtel wrote about why we still teach physics and other subjects in a linear, piecemeal fashion, and about the problems with that approach. For example, often no connection is made between an electrostatics physics course and an electrical circuits engineering course, even though there are connections between the two. Chabay & Sherwood addressed this issue and Härtel’s philosophy with their Matter & Interactions curriculum and supporting software.
- If I tell it to them or have them read it, they should know it. This is the way most undergraduate courses still work. Weed-out classes. If you didn’t memorize enough things, too bad, you fail. See below for 2 examples where oftentimes students didn’t even perceive or understand what was presented right in front of their eyes.
- Books and lectures are enough to learn anything – technology is not important, only teaching. As I wrote about in an earlier post on 50 Examples of the Need to Improve College Teaching, the National Center for Academic Transformation found that “Successful course redesign that improves student learning while reducing instructional costs is heavily dependent upon high-quality, interactive learning materials.” That means software. It can help teaching and learning. See the Open Learning Initiative I linked to earlier – one key to their success was the use of interactive statistics software. More and more, modern topics are not so easily conveyed in textual or verbal form. And no, that doesn’t mean ignoring teachers or teaching. Teaching IS a technology itself, as are books and lecturing and whiteboards.
Web Design
Web design and instructional design (and other forms of design) appear to be unrelated, but the misconceptions in all of them seem to largely stem from not understanding the context of design. In many cases that context is the users or people, which means better understanding how people learn, think, and perceive.
I know of countless ‘mistakes’ or ‘errors’ students make when learning web design, HTML, CSS, and so forth (forgetting that closing tag, etc.), but I’ll just mention one possible misconception (or popular myth) I see referred to on even professional web design blogs and sites – although I think there are hundreds of misconceptions, including a whole class of misconceptions about Web 2.0 versus Web 1.0 (web sites aren’t merely “electronic brochures” anymore, for example):
- “Click here” is bad – Yes, a link that simply says “click here” is bad. The link needs to include context – what is at the linked site? But the blogs giving web design advice (such as this one) go too far in saying that “click here” should be completely avoided in favor of more abstract terms like “read more”. Using action verbs (like “click”) that tell people what to do IN ADDITION to providing some context leads to greater actual click-through rates. There is some research to support that (although I don’t have the reference at hand), and it shouldn’t be surprising – action verbs that tell people what to physically do make it (surprise) more likely that they’ll do it.
Improving Design by Better Understanding How People Learn, Think and Perceive
The above was another example where some basic understanding of research on human learning, cognition, and perception can enhance your design (when the context of that design involves people, as it commonly does in most areas). Of course, misconceptions about learning, cognition, and perception are a whole other topic that has not been well researched, but see this previous post on “Powerful Demonstrations about the Nature of Perceiving, Learning and Understanding” for some examples, such as change blindness (where something happens right in front of your eyes but you don’t see it). Just because you put that fact on your PowerPoint, or you stated something out loud to your students, or you put some message on that web page or online course, doesn’t mean the students or your users or audience even perceived it, let alone understood or memorized it. We only pay attention to what we think is important; nobody notices the details; nobody reads the instructions.
Understanding misconceptions about design can reciprocally facilitate our understanding of teaching and learning, too, since instructional design is fundamental to education. Despite 30 years of overwhelming research on misconceptions in science education, for example, physics and science teachers still often depend on classroom demonstrations to convey principles. See the article Why may students fail to learn from demonstrations? by Wolff-Michael Roth for an example of why that may not be such an effective instructional design technique. There is a gap (some have called it the “valley of death“) between the research and the practice (of the designers, the innovators, the teachers, …), partly due perhaps to our misconceptions about design and implementation, such as “design arrogance” and “ideation without substance.”
The title of this post is meant to be a joke (not a troll). The inventor of cognitive load theory (Sweller) and others labeled problem-based learning and other constructivist and inquiry-based instructional techniques a ‘failure’ in an oft-discussed 2006 paper I posted about earlier (no joke).
Recently the journal Instructional Science published some reflections by Ton de Jong on cognitive load theory itself, identifying some conceptual and methodological problems. Roxana Moreno, who has an edited book on Cognitive Load Theory coming out next year, also published a nice summary of de Jong’s paper in the same journal. So, below is a summary of the summary, plus a few extra things.
What is cognitive load theory?
Cognitive load theory is the idea, first published by Sweller in 1988, that instructional design should focus on not overloading a learner’s limited working memory. “Learning is hampered when working memory capacity is exceeded in a learning task” (de Jong, 2009).
The first version of cognitive load theory (1988-1998) had 2 elements, intrinsic and extraneous cognitive load. The latter is “bad” and needs to be reduced because it hurts performance, and the former is out of our control – the intrinsic difficulty of the subject matter being learned:
1. Intrinsic cognitive load – This is driven by the element interactivity of the material to be learned. Memorizing something like a list of independent commands has low or no element interactivity, whereas learning how to edit a photo in Photoshop involves high element interactivity (Paas, Renkl, & Sweller, 2003). We as instructional designers can’t control the intrinsic difficulty of the subject being learned.
2. Extraneous or ineffective cognitive load – This is the unnecessary information presented during instruction, or making learners do extra work that only delays or detracts from their learning, such as having to search for information rather than being told where to find it. Extraneous cognitive load is sometimes just a necessary constraint (like having to turn a page in a book rather than watching a video), but where it really hurts learning is when the element interactivity and intrinsic cognitive load are already high and pushing at one’s working memory limitations. “As a consequence, instructional designs intended to reduce cognitive load are primarily effective when element interactivity is high” (Paas, Renkl, Sweller, 2003).
In 1998, Sweller published a paper that introduced a 3rd element, germane cognitive load. Because of course, if you take the first two types to the extreme, they suggest that we as instructional designers should do nothing but spoon-feed content to learners, and make learners exert as little mental effort as possible. Of course we know that higher mental effort is sometimes necessary for learning, and not a bad thing at all – more like ‘hard fun’ and ‘serious play’. An example is learning how to do math calculations by hand before using a calculator or computer to do them. There are plenty of examples where higher cognitive load actually leads to much better learning, which would contradict the original cognitive load theory.
3. Germane cognitive load – This is the on-task mental ‘load’ or activity during learning. This can and should be influenced by the instructional designer. If one defines learning as schema acquisition and building, the ‘germane’ cognitive load is that which contributes to such schema acquisition, and the ‘extraneous’ cognitive load is that which does not.
Conceptual Problems with Cognitive Load Theory
1. Post-hoc explanation. As soon as I first read about germane cognitive load (good) in 1998 vs. extraneous cognitive load (bad), cognitive load theory became unfalsifiable in my opinion. You can justify any experimental result after the fact by labeling stuff that hurts performance as extraneous and the stuff that didn’t as germane. Numerous contradictions of cognitive load theory’s predictions have been found, but with germane cognitive load, they can still be explained away. de Jong does not use this term (unfalsifiable) but instead states that germane cognitive load is a post-hoc explanation with no theoretical basis: “there seems to be no grounds for asserting that processes that lead to (correct) schema acquisition will impose a higher cognitive load than learning processes that do not lead to (correct) schemas” (2009).
2. Can’t distinguish between germane and extraneous cognitive load. Related to the above – one can’t objectively tell in advance whether something will be germane or not. Sometimes something that induces extraneous load may also induce germane load, and vice versa. The type of load is highly dependent on learner characteristics and learning objectives (Moreno, 2009).
3. Lack of clarity about the cognitive load construct itself. Moreno (2009) describes a lack of clarity about terms such as cognitive load, mental load, and mental effort. Mental load is a subjective rating or experience – it’s not ‘intrinsic’ to the material.
4. Lack of additivity. The assumption of cognitive load theory is that intrinsic, extraneous, and germane cognitive load all add up, and cannot exceed our working memory resources if learning is to occur. As in point 2 above, de Jong thinks that intrinsic cognitive load is different ontologically from the other two types, and “we cannot add apples and oranges” (Moreno, 2009), and Moreno also describes recent studies that refute the additivity hypothesis.
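Written out as a rough formula (my own notation – the theory states this only verbally), the additivity assumption is:

```latex
CL_{\mathrm{intrinsic}} + CL_{\mathrm{extraneous}} + CL_{\mathrm{germane}} \;\le\; \text{working memory capacity}
```

de Jong’s objection, in these terms, is that the three quantities on the left are not measured on any common scale, so the sum is not well defined in the first place.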
Methodological Problems with Cognitive Load Theory Research
1. No reliable, valid measure of cognitive load. Most people have used a one-item scale of perceived mental effort to measure cognitive load. A one-item measure can’t even be properly analyzed for reliability and validity.
2. Poor external validity of lab-based studies. Moreno doesn’t touch on something in the de Jong article – the fact that most cognitive load (and multimedia learning) research is conducted in labs and “includes participants who have no specific interest in learning the domain involved and who are also given a very short study time” (de Jong, 2009), often only a few minutes. Quite a number of findings from these studies have not held up as strongly when tested in classrooms or real-world scenarios, or have even reversed (such as the modality effect, but see this refutation and this other example of a reverse effect).
3. Ignores, or selectively ignores, other educational and cognitive research. Cognitive load theorists vehemently argue that their model is grounded in cognitive research, and yet they ignore quite a huge swath of it. The theory accepts the information-processing view of cognition (most popular in the 1980s) and Baddeley’s model of working memory from the 1970s.
So is cognitive load theory a failure or wrong? Is that important? Like I said, the question is a joke. From one perspective Newton’s laws are wrong and were superseded by Einstein’s theories, but of course Newton’s laws are still quite useful and correct enough for everyday scenarios. The more important question is whether a theory is useful, or is there a better, more useful theory.
Moreno (2009) concludes that cognitive load theory is at an impasse, and dissatisfaction with it is growing. She cites Labaree’s (1998) paper about the learning sciences having to “live with a lesser form of knowledge” than the hard sciences. Lesser because it doesn’t build on existing research, it isn’t subject to direct, reliable and valid measurements across different studies, and is more subject to bias.
I believe an area of future interest should be exploring how post-cognitive theories may provide more useful explanations for some of the phenomena uncovered by cognitive load theory research. Kaptelinin and Nardi describe some post-cognitive theories in chapter 9 of their book Acting with Technology. Unfortunately, chapter 9 was taken down from the First Monday site for some reason. But the four theories they compare include activity theory, phenomenology, distributed cognition, and actor-network theory. They (and I) lean heavily toward activity theory and (embodied) phenomenology, which have significant overlap. This is part of a larger movement of researchers exploring post-cognitive models in education, human-computer interaction, and numerous other areas. As Wikipedia’s very short cognitive psychology page states: “The information processing approach to cognitive functioning is currently being questioned by new approaches in psychology, such as dynamical systems, and the embodiment perspective.”
- Wolfgang Schnotz also published an article identifying conceptual problems with cognitive load theory in 2007; I just didn’t have time to re-review it in depth here. Some quotes from the abstract: “Various generalizations of empirical findings become questionable because the theory allows different and contradicting possibilities to explain some empirical results” and: “reduction of cognitive load can sometimes impair learning rather than enhancing it.” Another article by Schnotz, from 2005, described why reducing cognitive load can have negative effects on learning.
- Moreno also wrote that cognitive load theory might be at an impasse in another article in 2006.
- Given that cognitive load theorists have at times hung some of their work on top of schema theory, evolutionary theory, ACT-R and so forth, one might be interested in the work connecting those theories with embodied cognition and/or activity theory (Vygotsky). The McVee et al. (2005) article Schema Theory Revisited made this connection in the context of reading and literacy instruction. Interestingly, Paivio, inventor of the dual-coding theory that many multimedia learning theorists cite, wrote a strong rejoinder against this article and schema theory in general. McVee responded to that and other critiques.
- Some of the terms cognitive load and multimedia learning researchers use such as “active” and “interactive” can be a bit vague, as well as other terms such as “constructive” and “passive” learning. Chi (2009) recently proposed more operational and precise definitions of these terms in an article titled Active-Constructive-Interactive: A Conceptual Framework for Differentiating Learning Activities.
The National Academies (of science, engineering, …) have produced a number of educational books over the past decades, and it has become harder to keep track of them all, so I’m copying descriptions of some recent ones below. The nice thing is that you can read the full text of any of these books online for free. These are very useful for better understanding the problems of STEM education, especially when preparing grant proposals.
How People Learn (1999) examines these findings and their implications for what we teach, how we teach it, and how we assess what our children learn. The book uses exemplary teaching to illustrate how approaches based on what we now know result in in-depth learning. This new knowledge calls into question concepts and practices firmly entrenched in our current education system. How Students Learn: History, Mathematics, and Science in the Classroom (2005) builds on the discoveries detailed in the bestselling How People Learn.
Engineering in K-12 Education (2009) reviews the scope and impact of engineering education today and makes several recommendations to address curriculum, policy, and funding issues. The book also analyzes a number of K-12 engineering curricula in depth and discusses what is known from the cognitive sciences about how children learn engineering-related concepts and skills.
Tech Tally: Approaches to Assessing Technological Literacy (2006) determines the most viable approaches to assessing technological literacy for students, teachers, and out-of-school adults. The book examines opportunities and obstacles to developing scientifically valid and broadly applicable assessment instruments for technological literacy in the three target populations. The book offers findings and 12 related recommendations that address five critical areas: instrument development; research on learning; computer-based assessment methods; framework development; and public perceptions of technology.
Educating the Engineer of 2020 (2005) is grounded by the observations, questions, and conclusions presented in the best-selling book The Engineer of 2020: Visions of Engineering in the New Century. This new book offers recommendations on how to enrich and broaden engineering education so graduates are better prepared to work in a constantly changing global economy. It notes the importance of improving recruitment and retention of students and making the learning experience more meaningful to them. It also discusses the value of considering changes in engineering education in the broader context of enhancing the status of the engineering profession and improving the public understanding of engineering. Although certain basics of engineering will not change in the future, the explosion of knowledge, the global economy, and the way engineers work will reflect an ongoing evolution. If the United States is to maintain its economic leadership and be able to sustain its share of high-technology jobs, it must prepare for this wave of change.
What is science for a child? How do children learn about science and how to do science? Drawing on a vast array of work from neuroscience to classroom observation, Taking Science to School (2007) provides a comprehensive picture of what we know about teaching and learning science from kindergarten through eighth grade. By looking at a broad range of questions, this book provides a basic foundation for guiding science teaching and supporting students in their learning.
Learning Science in Informal Environments (2009) draws together disparate literatures, synthesizes the state of knowledge, and articulates a common framework for the next generation of research on learning science in informal environments across a life span. Contributors include recognized experts in a range of disciplines–research and evaluation, exhibit designers, program developers, and educators. They also have experience in a range of settings–museums, after-school programs, science and technology centers, media enterprises, aquariums, zoos, state parks, and botanical gardens.
Learning to Think Spatially: GIS as a Support System in the K-12 Curriculum (2006). Spatial thinking is a cognitive skill that can be used in everyday life, the workplace, and science to structure problems, find answers, and express solutions using the properties of space. It can be learned and taught formally to students using appropriately designed tools, technologies, and curricula. This report explains the nature and functions of spatial thinking and shows how spatial thinking can be supported across the K-12 curriculum through the development of appropriate support systems.
Knowing What Students Know (2001) essentially explains how expanding knowledge in the scientific fields of human learning and educational measurement can form the foundations of an improved approach to assessment. These advances suggest ways that the targets of assessment – what students know and how well they know it – as well as the methods used to make inferences about student learning can be made more valid and instructionally useful. Principles for designing and using these new kinds of assessments are presented, and examples are used to illustrate the principles. Implications for policy, practice, and research are also explored.
Technically Speaking (2002) provides a blueprint for bringing us all up to speed on the role of technology in our society, including understanding such distinctions as technology versus science and technological literacy versus technical competence. It clearly and decisively explains what it means to be a technologically-literate citizen. The book goes on to explore the context of technological literacy – the social, historical, political, and educational environments.
Relying on a comprehensive review of the research, Mathematics Learning in Early Childhood (2009) lays out the critical areas that should be the focus of young children’s early mathematics education, explores the extent to which they are currently being incorporated in early childhood settings, and identifies the changes needed to improve the quality of mathematics experiences for young children. This book serves as a call to action to improve the state of early childhood mathematics. It will be especially useful for policy makers and practitioners – those who work directly with children and their families in shaping the policies that affect the education of young children.
America’s Lab Report: Investigations in High School Science (2005). Laboratory experiences as a part of most U.S. high science curricula have been taken for granted for decades, but they have rarely been carefully examined. What do they contribute to science learning? What can they contribute to science learning? What is the current status of labs in our nation s high schools as a context for learning science? This book looks at a range of questions about how laboratory experiences fit into U.S. high schools.