Why do some people see faces in potato chips, pancakes, smoke, clouds, and so on?
Two theories are posited in this NYT article on face recognition by Elizabeth Svoboda:
1. The brain has specific face-processing areas — “groups of cells in three regions of the brain’s temporal lobe” — which give priority to processing faces over non-face objects. We seem to be wired particularly for identifying faces — except for people with certain genetic or injury-induced kinds of brain damage. It turns out that there are about 12 visual relationships that humans (and monkeys) match objects against to determine whether they’re faces — and a computer can learn to do the same thing: “As the computer amassed the information, it was able to discover relationships that were of great significance to almost all faces, but very few nonfaces. ‘These turn out to be very simple relationships, things like the eyes are always darker than the forehead, and the mouth is darker than the cheeks. … If you put together about 12 of these relationships, you get a template that you can use to locate a face.'” It’s not a matter of processing discrete features, like noses and eyes; rather, it’s likely that “the human brain processes faces holistically, like coherent landscapes, rather than one feature at a time.” That would explain why blurriness and other distortions don’t much affect our success at face recognition.
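To make the idea concrete, here is a toy sketch of that kind of “darker-than” template matching. This is not the actual system described in the article: the region layout, the particular relationships checked, and the acceptance threshold are all my own illustrative assumptions — only the two example relationships (eyes darker than forehead, mouth darker than cheeks) come from the quote.

```python
# Toy "ratio template" face check: compare average brightness between
# coarse face regions of a small grayscale patch (0 = black, 255 = white).
# Region boxes and the relationship list are illustrative assumptions,
# not the template from the article.

def mean_brightness(img, box):
    """Average pixel value inside box = (top, left, bottom, right), end-exclusive."""
    top, left, bottom, right = box
    vals = [img[r][c] for r in range(top, bottom) for c in range(left, right)]
    return sum(vals) / len(vals)

# Coarse regions laid out on a 12x12 patch (hypothetical layout).
REGIONS = {
    "forehead":    (0, 2, 3, 10),
    "left_eye":    (4, 2, 6, 5),
    "right_eye":   (4, 7, 6, 10),
    "nose":        (6, 5, 9, 7),
    "left_cheek":  (7, 1, 9, 4),
    "right_cheek": (7, 8, 9, 11),
    "mouth":       (9, 4, 11, 8),
}

# (darker, lighter) pairs, in the spirit of the quoted relationships.
RELATIONS = [
    ("left_eye", "forehead"),
    ("right_eye", "forehead"),
    ("mouth", "left_cheek"),
    ("mouth", "right_cheek"),
    ("left_eye", "nose"),
    ("right_eye", "nose"),
]

def looks_like_face(img, threshold=5):
    """Accept the patch as a face if most darker-than relations hold."""
    means = {name: mean_brightness(img, box) for name, box in REGIONS.items()}
    hits = sum(1 for darker, lighter in RELATIONS if means[darker] < means[lighter])
    return hits >= threshold

# A crude synthetic "face": a bright patch with dark eye and mouth blobs.
face = [[200] * 12 for _ in range(12)]
for r in range(4, 6):
    for c in list(range(2, 5)) + list(range(7, 10)):
        face[r][c] = 60   # eyes
for r in range(9, 11):
    for c in range(4, 8):
        face[r][c] = 80   # mouth

flat = [[200] * 12 for _ in range(12)]  # uniform patch, no structure

print(looks_like_face(face))  # True
print(looks_like_face(flat))  # False
```

Notice that the check never looks for a nose or an eye as such — only for coarse brightness relationships — which is why, as the article suggests, a potato chip with a couple of dark dents in roughly the right places can pass.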
2. And/or, we may see so many human faces each day that we see faces in places where they’re not (like a potato chip) simply because we are so used to seeing them. Apparently, “after the brain is bombarded with a stimulus, it continues to perceive that stimulus even when it is not present. … Because faces make up such a significant part of the visual backdrop of life, … [it may be that] people have gotten so used to seeing faces everywhere that sensitivity to them is high enough to produce constant false positives.” This seems an unlikely explanation to me, as I would guess that most of us see many more non-face objects each day than we see faces; but, perhaps because faces are more significant to us (emotionally or evolutionarily, for example), the brain may record faces more strongly than other objects. Like that Gary Larson cartoon where the dog hears only its name while its owner is speaking to it (“blah blah Ginger blah blah blah blah blah blah blah blah blah Ginger”), perhaps because we register faces more strongly (if we do), our brain sees “object object Face object object object object object Face,” minimising other objects in relation to faces, so that the effect is of being ‘bombarded’ with faces even though other objects may be the more prevalent potential stimuli.