
Why We See the Colors of Faces Differently Than Other Things

Reported by WIRED:

As set-ups go for studying how people see colors, this one isn’t even the weirdest: a room full of assorted objects, like Lego bricks, strawberries, and ping-pong balls. Bring people into the room and give them a computer. Tell them to use a mouse to adjust the color of a big spot on the screen, like a color-picker tool in reverse.

Then a researcher would point at one of the objects and say, basically: make that spot on the computer the same color. Easy, right? The yellow Lego, the red strawberry, the white ping-pong ball. That’s what color vision is for, after all. It uses the photoreceptors at the back of your eyeballs and a lot of computational neurocircuitry to come up with a representation of the wavelengths of light that a given surface reflects. Good information to have.

Except … well, you knew there was going to be a trick here, and in fact there are two. First, under white light, sure, no problem. The human brain is set up to interpret the way things reflect under white light as a nominally true description of their intrinsic color. Is that a real thing? Talk to a philosopher. But this team was more interested in how things appeared under low-pressure sodium lights, like what cities used to use for streetlights before LEDs. They cast a light some people might describe as “yellow,” but what they actually do is illuminate monochromatically—essentially a single wavelength, near 589 nanometers. A surface can differ from its neighbors only in how much of that one wavelength it reflects, so you see the world not in color but in a yellow-tinted greyscale.
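
To see why a single wavelength flattens color, it helps to run the arithmetic. Here’s a minimal sketch in Python—illustrative Gaussian curves stand in for real cone sensitivities and reflectance spectra, and none of this is the study’s actual stimuli:

    import numpy as np

    wl = np.arange(400, 701, dtype=float)  # visible wavelengths, in nm

    def bump(peak, width):
        # A Gaussian curve as a crude stand-in for a measured spectrum.
        return np.exp(-((wl - peak) ** 2) / (2 * width ** 2))

    cones = [bump(565, 45), bump(535, 45), bump(445, 45)]  # L, M, S
    red_surface = bump(650, 60)     # reflects mostly long wavelengths
    yellow_surface = bump(580, 60)  # reflects mostly middle wavelengths
    sodium = bump(589, 2)           # nearly all energy in one narrow line

    for name, refl in [("red", red_surface), ("yellow", yellow_surface)]:
        # Cone catch: integrate illuminant x reflectance x sensitivity.
        resp = np.array([np.trapz(sodium * refl * s, wl) for s in cones])
        print(name, np.round(resp / resp.sum(), 4))  # normalized L:M:S

Both surfaces print essentially the same L:M:S ratio. A single wavelength can only scale the three cone responses up or down together, so the only thing left to distinguish one surface from another is brightness.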

Under that light, the objects lost their colors. Just about every subject identified the things they were looking at as some version of yellow, amber, or brown (which is really just dark yellow).

Now, the other thing: In addition to recognizable objects, the researchers also brought people into the room. Four women, actresses—two of European descent, which is to say, under white light people with color-normal vision identify their skin as being on the lighter side of the distribution, with more pinks and cream tones; and two of African descent, whose skin most people under white light identify as darker, with more browns and deeper reddish shades. Are the distributions wide? Yes. Is this racially fraught? Probably. But stick with me, because then the researchers switched from white light to low-pressure sodium.

And everyone described the faces of all four humans as green.

“We were expecting to see that people match fruit to have a little subtle tinge of typical fruit color, and skin to have a typical tinge of typical skin color,” says Bevil Conway, a color vision researcher at NIH and the lead author on a new paper about this research. But, he says, they didn’t see any of those effects. Everything looked yellowish but skin, which looked “weird and green.” Why weird? Because nothing—not the light, not the skin, not the room—had any green in it. It was all in the brains of the people who saw it. It’s an outcome that says something not only about how human color vision works (a mystery scientists have been chipping away at since Aristotle) but also about why color vision works. Not just what it does, but what it’s for.

Human color vision is very good at figuring out what color something is, regardless of the color of the light shining onto it. In fact, we’re so good at this, an ability called color constancy, that we tend to notice it only when it fails, as in the case of the Dress that some people saw as blue and black and others as white and gold. After years of working on that memetic crisis, most color researchers now think it had something to do with what color people assumed the illuminant, the ambient light, to be. Assume bluish late-afternoon light, and the dress looks white; assume flat midday white, and it looks blue.

But no one’s really sure how people’s brains make those assumptions and do that calculation. One idea is called memory color, and it says that maybe people’s brains have a kind of database of the colors of certain objects—a strawberry is red, grass is green. So when those objects appear to be some other color, your brain assumes the lighting is quirky, corrects for that so you “see” the strawberry as red, and then applies that same correction to everything else, too. But when the light source is unknown, “you would have to eliminate that bias to say what the objects look like under white light,” says Anya Hurlbert, a neuroscientist at the University of Newcastle upon Tyne, and a peer reviewer of the new paper. Memory color is a good hypothesis, and it’ll probably sound familiar to photographers under a different name: white balance. “It makes a lot of sense in terms of psychology,” Hurlbert says, “and it’s an idea that computer vision scientists also latched onto and have used extensively in algorithms for color constancy, for correcting color bias in images.”
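
The white-balance idea is simple enough to sketch. Below is the classic “gray-world” version—an illustrative cousin of the memory-color correction, not the model from the paper: estimate the illuminant from the scene itself, then divide it out.

    import numpy as np

    def gray_world(image):
        # Assume the scene averages to gray; treat any deviation in the
        # per-channel means as a color cast and scale it away.
        # image: float array of shape (H, W, 3), values in [0, 1].
        avg = image.reshape(-1, 3).mean(axis=0)
        gain = avg.mean() / np.maximum(avg, 1e-6)
        return np.clip(image * gain, 0.0, 1.0)

    # A toy scene washed in sodium-ish yellow comes back toward neutral.
    scene = np.ones((4, 4, 3)) * np.array([0.9, 0.8, 0.2])
    print(gray_world(scene)[0, 0])  # roughly equal R, G, B afterward

A memory-color version would swap the gray assumption for a lookup: find the strawberry, assume it’s red, and derive the correction from that.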

Another idea, one that computer vision algorithms also use, is balancing toward skin color. Now, on the one hand, that seems nuts, because human skin comes in so many colors. But in fact, it doesn’t—you have a range from pink to brown, with some yellowish hues, but plotted in a color space that accounts for hue, chroma, and lightness, human skin color clusters pretty tightly compared to the overall range of possible colors. (That range is even smaller than you think; in studies like this one, people tend to report seeing Caucasian faces as lighter and African-descended faces as darker than they actually are, colorimetrically speaking.)
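
That clustering is easy to demonstrate with toy numbers. In the sketch below—the sample RGB values are invented for illustration—four skin tones spanning light to dark land in a tight knot in chromaticity space, which factors out lightness, while random colors scatter widely.

    import numpy as np

    def chromaticity(rgb):
        # Normalize out overall intensity: r = R/(R+G+B), g = G/(R+G+B).
        rgb = np.asarray(rgb, dtype=float)
        return (rgb / rgb.sum(axis=-1, keepdims=True))[..., :2]

    skin = np.array([[0.95, 0.80, 0.68], [0.87, 0.68, 0.54],
                     [0.65, 0.48, 0.36], [0.45, 0.31, 0.22]])
    random_colors = np.random.default_rng(0).uniform(0.05, 1.0, (500, 3))

    print("skin spread:  ", np.round(chromaticity(skin).std(axis=0), 3))
    print("random spread:", np.round(chromaticity(random_colors).std(axis=0), 3))

The lightnesses of those four samples differ by a factor of two, but their chromaticities barely move.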

The new paper suggests that faces are important to the way people see color, too. The fact that people saw the faces as green under low-pressure sodium when they were not, in fact, green suggests that the human visual system processes faces differently than anything else. When subjects looked at isolated patches of skin, cropped away from the actresses’ faces, they matched them to the yellows and ambers you’d predict. “It doesn’t prove we’re using face skin color for color constancy,” Hurlbert says, “but it does prove we see face skin color differently from other objects, so we might indeed be more likely to use face skin color.”

Two hundred million years ago, birds and reptiles had superpowers, with four types of light sensors in their eyes tuned to cover a spectrum from reds all the way to ultraviolet. But our wussy little furball ancestors were nocturnal and, by comparison, colorblind. Two photoreceptors for color—that’s it.

Then, about 30 million years ago, one mammalian line—the Old World primates—hit a mutational jackpot. They reacquired a third photoreceptor. Our great-to-the-nth grandmonkeys had color vision as we know it. Red alert, nature’s first green is gold, no reason to be yellow-bellied, nothing but blue skies ahead.

But why? Maybe it was fruit. The frugivory hypothesis says that color vision confers an advantage when you’re a cute little monkey looking for ripe fruit against a background of green vegetation. (The related folivory hypothesis says the payoff is spotting young, tender leaves—often tinged red—against mature green foliage.) But face facts: There’s another hypothesis. Color vision also confers an advantage if it’s useful to know when someone is blushing, or pale—to be able to read emotional change. Conway says this new work leans toward that emotional direction: that behavior is a key to color vision, and that color gives us primates information not just about how things in the world look but about how we feel about them. You can recognize objects without color—a black-and-white picture of your grandparents still looks like your grandparents. “You don’t need color vision to tell what a banana is. You need color vision to tell whether you care,” Conway says. “Luminance cues tell you about identity, and color cues tell you about behavioral state.”

That doesn’t explain why the faces turned green, though. Some of the answer might be straight-up neurophysiological. The human retina has a lot of complicated wiring but just three kinds of cone receptors for color, each type peaking at a different band of the visible spectrum—long, medium, and short wavelengths. The neural wiring of the retina and brain does a lot of computation to turn those inputs into the millions of colors we see, but a short version of the most popular hypothesis is that differential responses in the long-wavelength receptors and the medium-wavelength ones tell you where the color of something falls along an axis from red to blue-green, roughly.
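
In its textbook form, that opponent computation is just subtraction. Here’s a toy version in Python—a standard simplification, not the paper’s model:

    def opponent_signals(L, M, S):
        red_green = L - M              # positive = reddish, negative = greenish
        blue_yellow = S - (L + M) / 2  # positive = bluish, negative = yellowish
        luminance = L + M              # S cones contribute little to luminance
        return red_green, blue_yellow, luminance

    # Nudge the long-wavelength response up (a flushed face) or down (a pale one).
    print(opponent_signals(L=1.05, M=1.0, S=0.9))  # red-green swings positive: redder
    print(opponent_signals(L=0.95, M=1.0, S=0.9))  # red-green swings negative: greener

A small drop in the long-wavelength response—less blood, less red—pushes the red-green signal toward green, which is exactly the axis that matters for reading blood flow.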

Now, face color is a mixture of a lot of ingredients—the pigment melanin, the amount and type of fat beneath the skin, and blood flow, among others. But most of those things don’t vary over short timescales. Blood, a thing that increases redness, does. So it seems like that red-to-greenish axis, the one that the long-wavelength and medium-wavelength receptors code for, is really good at detecting changes in blood flow. “What’s informing your memory about faces, or the prior you’ve built in your head about face color, is very heavily weighted to the aspects of skin color that are dynamic, that change, that tell you about state. How healthy you are, or how sad, or angry, or sick, or whatever,” Conway says. “That becomes your white point of where people should be.”

Someone might be “green with envy” when they’re jealous, or “green about the gills” when they’re nauseated. But if you measure the spectra of light reflecting off their faces, you won’t find any green. In reality, their faces have merely become less red, and the human visual system interprets that as “green.” That’s what Conway thinks happens to faces under low-pressure sodium. Denied the baseline chromatic cues, a dedicated channel in the brain—one that deals not only with colors and not only with faces but specifically with the colors of faces—knows something has gone wrong, and sees a zombie. The colors of faces are so important to the overall way humans perceive the world that when something upends that perceptual channel, our brains call a red alert. Or maybe a green alert.

Hurlbert says she isn’t sure that’s exactly right, but it’s still an important result. “The point is, something is different in our memory color for faces from our memory color for other objects in the room,” she says. “When that signal is completely destroyed, it’s atypical, so we put that sense of atypicality, of abnormality, onto the face, and that’s why we see it as green.”

Color scientists still don’t really know how the brain sees color. They tend to think the brain must have “color centers” somewhere in the visual cortex, and neurons that code for specific colors or for changes along specific axes. But by linking color to face perception (among other ideas), Conway’s suggesting a whole other approach. Get it right, and the lighting in hospitals could reveal more about a person’s health, or lights and machine vision algorithms could deal with human variation better. Putting people into a box to look at things under different colored lights might do more than just explain the mysteries of color in the brain—it could solve some of the mysteries of color in culture, too.

