Single pixel change fools AI programs


Computers can be fooled into thinking a picture of a taxi is a dog by changing a single pixel, research suggests.

The limitations emerged from Japanese work on ways to trick widely used AI-based image recognition systems.

Many scientists are now creating “adversarial” example images to expose the fragility of certain types of recognition software.

There is no quick and easy way to fix image recognition systems to stop them being fooled in this way, experts warn.

Bomber or bulldog?

In their research, Jiawei Su and colleagues at Kyushu University made tiny changes to lots of pictures that were then analysed by widely used AI-based image recognition systems.

All the systems they tested were based on a type of AI known as a deep neural network. Typically, these systems learn by being trained on many different examples, which gives them a sense of how objects such as dogs and taxis differ.

The researchers found that changing one pixel in about 74% of the test images made the neural networks wrongly label what they saw. Some of the mistakes were near misses, such as a cat being taken for a dog, but others, including labelling a stealth bomber a dog, were far wider of the mark.
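As a rough illustration of what a one-pixel attack involves, the sketch below changes a single pixel and checks whether a classifier’s label flips. It is only a toy: the classifier here is a stand-in function rather than the networks tested in the study, and the random search is an assumption for clarity, whereas the researchers used an evolutionary optimisation to find the pixel.

```python
import numpy as np

def predict_label(image):
    # Stand-in for a trained deep neural network: it just thresholds mean
    # brightness so the sketch runs end to end. A real attack would query
    # the actual model instead.
    return "dog" if image.mean() > 0.5 else "taxi"

def one_pixel_attack(image, true_label, trials=500, seed=0):
    """Randomly try single-pixel changes and return one that flips the predicted label."""
    rng = np.random.default_rng(seed)
    h, w, c = image.shape
    for _ in range(trials):
        candidate = image.copy()
        y, x = rng.integers(h), rng.integers(w)
        candidate[y, x] = rng.random(c)      # overwrite one pixel with a random colour
        label = predict_label(candidate)
        if label != true_label:
            return (y, x), label             # found a fooling pixel
    return None, true_label                  # no flip within the search budget

image = np.full((8, 8, 3), 0.498)            # toy "taxi" sitting near the decision threshold
print(one_pixel_attack(image, predict_label(image)))
```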

The Japanese researchers developed a variety of pixel-based attacks that fooled all of the state-of-the-art image recognition systems they tested.

“As far as we know, there is no dataset or network that is much more robust than others,” said Mr Su, of Kyushu University, who led the research.


Deep issues

Many other research groups around the world are now developing “adversarial examples” that expose the weaknesses of these systems, said Anish Athalye of the Massachusetts Institute of Technology (MIT), who is also looking into the problem.

One example produced by Mr Athalye and his colleagues is a 3D-printed turtle that one image classification system insists on labelling a rifle.

“More and more real-world systems are starting to incorporate neural networks, and it is a big concern that these systems may be possible to subvert or attack using adversarial examples,” he told the BBC.

While there have been no examples of malicious attacks in real life, he said, the fact that these supposedly smart systems can be fooled so easily is worrying. Web giants including Facebook, Amazon and Google are all known to be investigating ways to resist adversarial exploitation.

“This is not some weird little corner case,” he said. “We have shown in our work that you can have a single object that consistently fools a network across multiple viewpoints, even in the physical world.

“The machine learning community does not fully understand what is happening with adversarial examples or why they exist,” he added.

Mr Su speculated that adversarial examples exploit a problem with the way neural networks form as they learn.

A learning system based on a neural network typically involves connections between a large number of nodes, rather like nerve cells in a brain. Analysing an image requires the network to make lots of decisions about what it sees, and each decision should lead the network closer to the correct answer.

However, he said, many images sit close to the “borders” between these decisions, which means it does not take much to force the network to make the wrong choice.

“Adversaries can push them over to the other side of the border by adding small perturbations, so that in the end they are misclassified,” he said.
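The border idea is easiest to see with a deliberately simple model. The sketch below, an illustrative assumption rather than the researchers’ setup, trains a linear classifier on two toy classes, takes a point sitting near the learned decision border, and nudges it by the smallest step that crosses that border; the predicted label flips even though the point barely moves.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Two toy classes split by the line x0 + x1 = 1.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(int)
clf = LogisticRegression().fit(X, y)

point = np.array([0.47, 0.48])          # sits close to the decision border
w, b = clf.coef_[0], clf.intercept_[0]
d = w @ point + b                        # signed score; its sign is the predicted class

# Smallest nudge along the weight direction that crosses the border.
step = -(d + np.sign(d) * 1e-3) * w / (w @ w)

print(clf.predict([point]), clf.predict([point + step]))   # labels differ
print(np.linalg.norm(step))                                 # yet the change is tiny
```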

Fixing deep neural networks so that they are no longer vulnerable to these issues could prove difficult, said Mr Athalye.

“It is an open problem,” he said. “There have been many techniques proposed, and almost all of them are broken.”

One promising approach is to use adversarial examples during training, said Mr Athalye, so that networks learn to recognise them. But, he said, even this does not solve all the problems exposed by the research.
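One common reading of “training on adversarial examples” is to mix perturbed copies of the training data, still carrying their correct labels, back into the training set and retrain. The snippet below is a schematic of that idea using the same toy linear model as above; the attack function and training loop are assumptions for illustration, not any group’s actual defence.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nudge_across_border(model, x, eps=1e-3):
    # Hypothetical attack: smallest step along the weight direction that crosses the border.
    w, b = model.coef_[0], model.intercept_[0]
    d = w @ x + b
    return x - (d + np.sign(d) * eps) * w / (w @ w)

def adversarial_training(X, y, rounds=3):
    """Retrain on the original data plus freshly generated, correctly labelled adversarial copies."""
    model = LogisticRegression().fit(X, y)
    for _ in range(rounds):
        X_adv = np.array([nudge_across_border(model, x) for x in X])
        model = LogisticRegression().fit(np.vstack([X, X_adv]),
                                         np.concatenate([y, y]))
    return model

rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(int)
hardened = adversarial_training(X, y)
```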

“There is definitely something strange and interesting here, we don’t know exactly what it is,” he said.