We live in a world that’s increasingly controlled by what might be called “the algorithmic gaze.” As we cede more decision-making power to machines in domains like health care, transportation, and security, the world as seen by computers becomes the dominant reality. If a facial recognition system doesn’t recognize the color of your skin, for example, it won’t acknowledge your existence. If a self-driving car can’t see you walk across the road, it’ll drive right through you. That’s the algorithmic gaze in action.
This sort of slow-burning structural change can be difficult to comprehend. But as is so often the case with societal shifts, artists are leaping headfirst into the epistemological fray. One of the best of these is Tom White, a lecturer in computational design at Victoria University of Wellington in New Zealand whose art depicts the world, not as humans see it, but as algorithms do.
To humans, the pictures look like haphazard arrangements of lines and blobs that lack any obvious structure. But to algorithms trained to see the world on our behalf, they leap off the page as specific objects: electric fans, sewing machines, and lawnmowers. The prints are optical illusions, but only computers can see the hidden image.
- An electric fan
- A pair of binoculars
- A cello
- A tick
- An image that’s labelled as “inappropriate content” by online filters
- And another
White’s work has attracted a lot of attention in the machine learning community, and it’s getting its first major gallery show this month as part of an exhibition of AI artwork in India at Delhi’s Nature Morte gallery. White says he designs his prints to “see the world through the eyes of a machine” and make “a voice for the machine to speak in.”
That “voice” is actually a series of algorithms that White has dubbed his “Perception Engines.” They take the data that machine vision algorithms are trained on — databases of thousands of pictures of objects — and distill it into abstract shapes. These shapes are then fed back into the same algorithms to see if they’re recognized. If not, the image is tweaked and sent back, again and again, until it is. It’s a trial-and-error process that essentially ends up reverse-engineering the algorithm’s understanding of the world.
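The tweak-and-resubmit loop described above amounts to an optimization against a classifier's score. The sketch below is a minimal, hypothetical illustration of that idea, not White's actual pipeline: it hill-climbs a list of drawing parameters against a stand-in `classifier_score` function (a toy scoring function here, where a real system would query a trained neural network's confidence in a target class like "cello").

```python
import random

def render(params):
    # Stand-in for rasterizing abstract lines and blobs from parameters.
    # In a real pipeline this would produce an image for the network.
    return params

def classifier_score(image):
    # Hypothetical stand-in for a trained network's confidence in the
    # target class. Here: a toy score, maximized when every parameter
    # reaches 0.5, so the loop has something concrete to climb.
    return -sum((p - 0.5) ** 2 for p in image)

def perception_engine(n_params=8, steps=2000, seed=0):
    """Tweak drawing parameters until the classifier 'recognizes' them."""
    rng = random.Random(seed)
    params = [rng.random() for _ in range(n_params)]
    best = classifier_score(render(params))
    for _ in range(steps):
        # Nudge one randomly chosen parameter (the "tweak" step).
        candidate = list(params)
        i = rng.randrange(n_params)
        candidate[i] += rng.gauss(0, 0.05)
        score = classifier_score(render(candidate))
        if score > best:  # keep only changes the classifier prefers
            params, best = candidate, score
    return params, best
```

Because the loop never inspects the classifier's internals, only its output, the same structure works with any black-box scoring model, which is what makes the results a portrait of the model's learned categories rather than of the artist's intent.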
White compares the process to a “computational ouija board,” where neural networks “simultaneously nudge and push a drawing toward the objective.” He tells The Verge that this method gives him the control he wants over the output, though it can take days to create a single image this way, and he admits that the process is “kind of tedious.”
Unlike some artists who work with machine learning, White doesn’t pretend that his prints are the product of some autonomous AI (a disingenuous narrative sometimes pushed by artists and promoters in order to create a feeling of technological mysticism). Instead, he’s up front about his role: he sets a number of starting parameters for his perception engines, like the colors and thickness of lines, and winnows the output, rejecting prints that he doesn’t find aesthetically pleasing. Although he is giving his algorithms a voice to speak in, he’s also making sure the results are pleasant to hear. “I think I am trying to free the algorithm so it can express itself, so people can relate to what it’s saying,” he says.
And what is it saying? Well, as with any art, different people hear different things.
White says his motivation is primarily to deconstruct what we think of as machine perception. In other words: to explain the algorithmic gaze. Take the example of the cello print in White’s series “The Treachery of ImageNet.” If you know what you’re looking for, you can see shapes that represent the instrument (a cluster of straight parallel lines bracketed by curves). But there’s also a confusing shape looming behind it. White says these shapes are there because the algorithms were trained using pictures of cellos with cellists holding them. Because the algorithm has no prior knowledge of the world — no understanding of what an instrument is or any concept of music or performance — it naturally grouped the two together. After all, that’s what it’s been asked to do: learn what’s in the picture.
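The cello-and-cellist entanglement can be demonstrated with a toy model. The following sketch (my own illustration, not anything from White's work) trains a tiny naive Bayes classifier on a hypothetical dataset where every "cello" photo also contains a person, and every non-cello photo contains neither feature. The model ends up treating "person" as partial evidence of "cello," because nothing in the data separates the two.

```python
def feature_prob(examples, index):
    # P(feature = 1 | class), with add-one (Laplace) smoothing.
    on = sum(e[index] for e in examples)
    return (on + 1) / (len(examples) + 2)

def p_cello(image, cellos, others):
    # Naive Bayes posterior for "cello", assuming equal class priors.
    like_pos = like_neg = 1.0
    for i, bit in enumerate(image):
        p_pos = feature_prob(cellos, i)
        p_neg = feature_prob(others, i)
        like_pos *= p_pos if bit else 1 - p_pos
        like_neg *= p_neg if bit else 1 - p_neg
    return like_pos / (like_pos + like_neg)

# Features: (has_cello_shape, has_person).
# Every positive example includes a cellist; negatives include neither.
cellos = [(1, 1)] * 4
others = [(0, 0)] * 4

# A person with no cello in sight still scores a 50/50 "cello" call,
# because the model never saw the two features apart.
print(p_cello((0, 1), cellos, others))
```

The model has no concept of instruments or performers; it simply learned the statistics of its training set, which is exactly the failure mode the looming shape in the cello print makes visible.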
This sort of mistake is common in machine learning, and it demonstrates a number of important lessons. It shows how critical training data is: give an AI system the wrong data to learn from, and it’ll learn the wrong thing. It also demonstrates that no matter how “clever” these systems seem, they possess a brittle intelligence that only understands a slice of the world — and even that, imperfectly. White’s latest prints for the Nature Morte gallery, for example, are abstract smears of color designed to be flagged as “inappropriate content” by Google’s algorithms, the same algorithms used to filter what humans around the world see.
Still, White says that he doesn’t see his artwork as a warning. “I’m just trying to present the algorithms as they are,” he says. “But I admit it’s sometimes alarming that these machines we’re relying on have such a different take on how objects in the world are grounded.”
And despite the error-prone nature of the algorithmic gaze, it can also do very beneficial things. Machine vision could make roads safer by steering cars or save lives by speeding up medical diagnoses. But if we really want to use this technology for good, we need to understand it better. Looking at the world through an algorithm’s eyes might be the first step.