Agnosia and Misperceptions: Filling in the Blanks with Top-Down Processing
Recently the neuroscience field lost a mentor and brilliant clinical observer, Dr. Oliver Sacks. Some of you may have read one of his early books, The Man Who Mistook His Wife for a Hat. In that book he described, among other neurological syndromes, a severe type of visual agnosia: prosopagnosia. This condition is characterized by the inability to recognize familiar faces and is often seen after a right hemisphere stroke. For many of us in the rehabilitation field, understanding perceptual disorders can be mind-boggling. How could a familiar face look like a hat, of all things? Even more interesting, the problem is often temporary; many patients with right hemisphere injuries can recognize faces again after a short recovery period.
How We Study Perception
For years we tried to understand perception by carefully describing patients like the title case in Dr. Sacks's book. A neurologist colleague of mine likened that approach to trying to understand how a television works from a pile of its broken pieces.
Since the human brain is far more complex than a television, it is fortunate that we now have new ways to study perception. Electrophysiological measures, fMRI, and diffusion tensor imaging all provide a window into the brains of typical persons and have helped us understand this very complex process. With that understanding, we can devise better therapeutic approaches to help patients recover lost functions.
Misperceptions Are Common
To get started with the neuroscience of perception, we can look at a common experience. Have you ever awakened in a reasonably dark, yet familiar room only to notice an unfamiliar object? It happened to me not so long ago.
I awakened and saw what I thought was an animal on my dresser. The only possibility would have been my pet cat, but this animal appeared larger and upright, like a standing dog or monkey. I knew both were impossible, but I just couldn’t figure out what it was.
The familiar dresser lamp was clearly visible at one end, and I could not think of anything I might have placed at the other end of the dresser that could appear that way. Of course I finally turned on the light and realized that what I had been viewing was a shirt I had tossed there before going to bed. The shirt was draped over a new box of tissues, which my husband had placed on the dresser earlier that day.
Misperceptions Can Be Caused by Stimulus Degradation
Neuroscientists would tell me there are probably a few explanations for my misperception that night. The visual image was degraded by my reduced-acuity night vision, and degrading a stimulus, whether auditory, visual, or tactile, will often lead to misperception (a small sketch of this idea follows the lists below).
Visual degradation happens by:
- Making the image cloudy by putting it behind a distorting lens
- Removing portions of the object (as would happen if another object were blocking the view)
- Viewing the object from an unfamiliar vantage point, like seeing a face upside down
Auditory degradation happens by:
- Introducing noise that is louder than the stimulus itself
- Passing the sound through a filter that distorts it
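For readers who like to tinker, here is a minimal sketch in Python of why degradation defeats purely bottom-up recognition. This is my own toy illustration, not drawn from any of the studies cited below: the "objects," their feature patterns, and the noise levels are entirely made up. A simple template matcher identifies an intact pattern easily, but noise or occlusion can push it toward the wrong answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "known objects," each reduced to a short 1-D feature pattern.
templates = {
    "shirt": np.array([3, 3, 0, 0, 1, 1, 1], dtype=float),
    "lamp":  np.array([0, 0, 0, 0, 1, 1, 1], dtype=float),
    "cat":   np.array([1, 4, 1, 4, 1, 4, 1], dtype=float),
}

def best_match(signal):
    """Bottom-up recognition: pick the stored template closest to the input."""
    return min(templates, key=lambda name: np.sum((templates[name] - signal) ** 2))

clean = templates["shirt"].copy()

# Degradation 1: add noise comparable in size to the differences between objects.
noisy = clean + rng.normal(0.0, 3.0, size=clean.shape)

# Degradation 2: occlude the distinctive left portion of the object.
occluded = clean.copy()
occluded[:4] = 0.0

print("intact:  ", best_match(clean))     # "shirt" -- the intact pattern is recognized
print("noisy:   ", best_match(noisy))     # unreliable -- loud noise can swamp the signal
print("occluded:", best_match(occluded))  # "lamp" -- with its distinctive part hidden,
                                          # the shirt now looks like something else
```

The same logic applies to hearing speech in a noisy room: once the bottom-up signal no longer separates the candidates, something else has to break the tie.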
But for all of us, even when a sensory stimulus is degraded, we can often use our previous experience or knowledge to figure out what it is we are trying to perceive. This is referred to as top-down processing.
Top-Down Processing Helps Us Fill In the Blanks
Neuroscientists can actually watch typical adults using regions of their frontal lobes (not part of our sensory or perceptual brain apparatus) to perceive images that are unclear, less familiar, too far away, or otherwise degraded. Hence, I could make out the lamp at one end of the dresser, even though the dim light distorted it, because I knew what it was. My frontal lobe was no help in figuring out the other object because I did not know about the tissue box and had probably forgotten about tossing my shirt on the dresser. So my frontal lobe could not reconstruct the image from past knowledge.
This is important because by some estimates up to 80 or 90% of perception involves top-down processing. We use “perceptual fill-in” all the time to recognize people when we see only part of their faces. We can perceive language in a very noisy room or recognize a familiar melody from only the first few notes. The key to top-down processing is past knowledge and experience.
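Researchers often formalize this idea as Bayesian inference, in which prior knowledge is combined with degraded sensory evidence. The sketch below is again my own toy illustration with made-up numbers, not a model from the cited work; it shows how a strong prior about who is likely to be in the room can "fill in" what a half-hidden face cannot supply on its own.

```python
import numpy as np

people = ["husband", "mail carrier", "stranger"]

# Top-down knowledge: before looking, the husband is by far the most likely
# person to be in the room (hypothetical prior probabilities).
prior = np.array([0.90, 0.07, 0.03])

# Bottom-up evidence: the face is half-hidden, so the observed features are
# almost equally consistent with every identity (hypothetical likelihoods).
likelihood = np.array([0.20, 0.25, 0.22])

# Bayes' rule: the posterior is proportional to likelihood * prior.
posterior = likelihood * prior
posterior /= posterior.sum()

for name, p in zip(people, posterior):
    print(name, round(p, 2))
# Roughly 0.88, 0.09, 0.03: the prior does the heavy lifting.
```

The degraded evidence alone barely favors anyone; the prior supplied by experience settles the question, which is essentially what my frontal lobe could not do for the unknown object on the dresser.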
So, why would a person be unable to recognize a familiar face after a stroke? We know from neuroscience research that there is an association region in ventral occipitotemporal cortex, on the fusiform gyrus, that becomes mapped for familiar faces; it is often referred to as the fusiform face area. When that area is damaged it can cause prosopagnosia. So, how can a person with damage to this region recover that function over time? Right now, it appears that one key to recovery is top-down processing.
Top-Down Processing in Therapy
Like Dr. Sacks, I had a patient with severe prosopagnosia lasting several months after a right-sided stroke. She, too, could not recognize her husband if he was sitting in a room with other adult men. But even in the early stages, she could recognize him and other family members as soon as they spoke or if she thought about what they had been wearing that day.
In other words, she used her top-down knowledge to fill in for the facial gestalt she could no longer access. After a short period of therapy she learned to use details like hair color, nose size, eye color, and other visual and auditory cues to help her reconstruct the face. Eight months after her stroke, she indicated that using these other cues brought clarity to familiar faces; she had apparently used them to reconstruct the face in her mind.
Many patients with right hemisphere brain injury see the details of an image well but have problems with the gestalt. The technical name for that is simultanagnosia, the inability to perceive several details simultaneously as a whole. Whether a patient exhibits prosopagnosia, simultanagnosia, or another gestalt-level perceptual deficit, helping them use their top-down knowledge first to recognize the problem and then to reconstruct the whole from the parts is often a very helpful therapeutic approach.
References
- Bastos et al. (2015). Visual areas exert feedforward and feedback influences through distinct frequency channels. Neuron, 85(1).
- Berry et al. (2010). The influence of perceptual training on working memory in older adults. PLoS ONE.
- Burns, M. (2004). Clinical management of the agnosias. Topics in Stroke Rehabilitation.
- Damasio, A.R., Tranel, D., & Rizzo, M. (2000). Disorders of visual processing. In M.-M. Mesulam (Ed.), Principles of Behavioral and Cognitive Neurology. New York: Oxford University Press.
- Dehaene, S. (2009). Reading in the Brain. Viking Press.
- Gomez et al. (2014). Functionally defined white matter reveals segregated pathways in human ventral temporal cortex associated with category-specific processing. Neuron (published online December 27).
- Kahnt et al. (2011). Perceptual learning and decision-making in human medial frontal cortex. Neuron, 70, 549–559.
- Lederman, S., & Klatzky, R. (2009). Haptic perception. Attention, Perception, & Psychophysics, 71(7), 1439–1459.
- Lev, M., Ludwig, K., Gilaie-Dotan, et al. (2014). Training improves visual processing speed and generalizes to untrained functions. Scientific Reports.
- Lezak, M.D., Howieson, D.B., Bigler, E.D., & Tranel, D. (2012). Neuropsychological Assessment (5th ed.). New York: Oxford University Press.
- Mesulam, M.-M. (2000). Behavioral neuroanatomy. In M.-M. Mesulam (Ed.), Principles of Behavioral and Cognitive Neurology. New York: Oxford University Press.
- Wu, X., et al. (2015). Attentional modulations of the early and later stages of the neural processing of visual completion. Scientific Reports, 5:8346.