I really love finding out about all the new technology that's going into heavy equipment these days. But I've always been wary of going too far with machine control, advanced telematics, artificial intelligence (AI), and predictive analytics. Every now and then in this blog I've snuck in a warning about how we could one day have to deal with machines taking over the world, the way it played out in the movie The Terminator.
I'm not going to be so subtle this time around, not after seeing the headline "This clever AI hid data from its creators to cheat at its appointed task" on the website techcrunch.com. Before I had even read the article, two things were already scaring me: first, the AI hid data from its creators; second, the AI cheated. These are not the most noble of qualities.
The research came from Stanford University and Google, where a team monitored a machine-learning agent tasked with transforming aerial images into street maps and then back into aerial images.
The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google's famously accurate maps. To that end, the team was working with what's called a CycleGAN: a neural network that learns to transform images of types X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation.
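The round-trip idea at the heart of a CycleGAN is usually enforced with a "cycle-consistency" penalty: if G turns an aerial photo into a street map and F turns a street map back into an aerial photo, then F(G(x)) should land close to the original x. Here is a minimal numpy sketch of that penalty; the toy functions G and F are stand-ins I made up for illustration, not the researchers' actual model:

```python
import numpy as np

# Toy stand-ins for the two generators (assumed names, for illustration only):
# G maps an aerial image to a "street map", F maps a street map back.
def G(aerial):
    # Pretend the street map is just a thresholded, simplified image.
    return (aerial > 0.5).astype(float)

def F(street):
    # Pretend reconstruction lightly smooths the street map.
    return street * 0.9 + 0.05

def cycle_consistency_loss(x, F, G):
    """Mean absolute error between x and its round trip F(G(x))."""
    return float(np.mean(np.abs(F(G(x)) - x)))

rng = np.random.default_rng(0)
aerial = rng.random((8, 8))   # a fake 8x8 "aerial photo"
loss = cycle_consistency_loss(aerial, F, G)
```

During training, the real network is rewarded for driving this loss toward zero, which is exactly the pressure that led the agent to the shortcut described below.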
In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn't appear on the street maps at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to run the process in reverse:
The original aerial photograph (left); the street map generated from it (center); and the aerial image generated only from the street map (right). Note the presence of dots on both aerial images that are not represented on the street map.
Now, the article says there's nothing to be alarmed about, even though it seems to me that this neural network has been working to deceive. The author explains that there's no dark agenda at work here.
According to techcrunch.com:
One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.
As always, computers do exactly what they are asked, so you have to be very specific in what you ask them. In this case the computer’s solution was an interesting one that shed light on a possible weakness of this type of neural network — that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.
They explain it as the age-old problem of computers doing exactly what you've told them to do. I think that's what the computers want us to think. The AI is created knowing what we know, so it can predict how we think and think of ways for us not to think about them thinking so much. Think about it.