The Art of Making Autonomous Cars Hallucinate

Have you ever seen a dark figure out of the corner of your eye and thought it was a person, only to breathe a sigh of relief when you realized it was a coat rack or… another everyday item in your home? This is a harmless visual phenomenon, but what would happen if the same trick were used on a self-driving car or drone?

This question isn’t hypothetical, says Kevin Fu, a professor of computer science and engineering at Northeastern University who specializes in finding and exploiting new technologies.

He says he has found a way to make the autonomous cars that many dream of putting on the road hallucinate.

By uncovering an entirely new form of cyberattack against machine learning, which Professor Fu and his team have dubbed “poltergeist attacks,” the researcher hopes to stay one step ahead of hackers who could exploit these technologies… with catastrophic consequences.

“There are just too many things we take for granted,” says Professor Fu. “I’m sure I do it too without realizing it, because we tend to abstract things away, otherwise we would never be able to leave the house. The problem with this abstraction is that it hides things that could be examined at a technical level; they stay hidden, and certain things simply end up being assumed.”

“It may be a one-in-a-billion chance, but when it comes to IT security, adversaries take advantage of that one-in-a-billion chance at every opportunity,” the professor added.

This poltergeist attack goes beyond blocking or compromising technology, as is the case with other forms of cyberattacks. Professor Fu says his method creates “consistently false realities,” or optical illusions for computers that rely on machine learning to make decisions.

This type of computer attack exploits the image stabilization process found in most modern cameras, whether they are in a phone or a self-driving car. This technology is intended to detect the photographer’s movement and instability and adjust the lens to ensure the photos do not come out blurry.
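To make that mechanism concrete, here is a minimal, purely illustrative sketch of the idea in Python: a motion sensor reports how the camera is shaking, the controller integrates those readings, and the lens is shifted by the opposite amount so the frame stays sharp. The sample rate, focal length, and function names are assumptions made for the sketch, not details from any real camera firmware.

```python
# Illustrative sketch of optical image stabilization (not real firmware).
import numpy as np

def stabilize(gyro_rate_dps, dt_s=0.001, focal_length_mm=4.0):
    """Return per-sample lens shifts (mm) that counteract the measured shake."""
    # Integrate the reported angular velocity (degrees/second) into an angle.
    angle_deg = np.cumsum(np.asarray(gyro_rate_dps) * dt_s)
    # Small-angle approximation: image shift ~ focal length * angle (radians).
    image_shift_mm = focal_length_mm * np.deg2rad(angle_deg)
    # Move the lens the opposite way so the projected image stays put.
    return -image_shift_mm

# Genuine hand shake: a slow 5 Hz wobble the stabilizer is built to cancel.
t = np.arange(0, 0.1, 0.001)
hand_shake_dps = 2.0 * np.sin(2 * np.pi * 5 * t)
print(stabilize(hand_shake_dps)[:3])
```

The important detail for what follows is that the lens moves according to whatever motion the sensor reports, whether or not that motion is real.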

“Normally it is used to improve sharpness, but because there is a sensor inside, and these sensors are made of materials, if you reach the resonant frequency of those materials you can break a wine glass, like the opera singer hitting the right note. If you hit the right note, you can trick these sensors into transmitting false information,” says Professor Fu.
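What Fu describes can be sketched in the same illustrative way: a simulated MEMS gyroscope that, like the wine glass near the opera singer, responds strongly only when an incoming ultrasonic tone sits near its resonant frequency, at which point the sensor reports rotation that never happened. The resonant frequency, Q factor, and amplitudes below are invented numbers for the sketch, not measurements from the team’s experiments.

```python
# Hypothetical simulation of acoustic signal injection into a MEMS gyroscope.
import numpy as np

FS = 200_000            # simulation sample rate (Hz), assumed
RESONANT_HZ = 27_000    # assumed resonant frequency of the sensing element
Q_FACTOR = 40.0         # how sharply the element responds near resonance

def gyro_output(true_rate, tone_hz, tone_level, t):
    """Sensor reading = real rotation + resonance-amplified acoustic leakage."""
    # A simple resonance curve: the response peaks when tone_hz == RESONANT_HZ.
    detune = (tone_hz - RESONANT_HZ) / RESONANT_HZ
    gain = 1.0 / np.sqrt(1.0 + (2.0 * Q_FACTOR * detune) ** 2)
    injected = tone_level * gain * np.sin(2 * np.pi * tone_hz * t)
    return true_rate + injected

t = np.arange(0, 0.01, 1 / FS)
still_camera = np.zeros_like(t)          # the camera is not actually moving
off_resonance = gyro_output(still_camera, 20_000, 5.0, t)
on_resonance = gyro_output(still_camera, 27_000, 5.0, t)
print("false rotation, off resonance:", off_resonance.max())
print("false rotation, on resonance:", on_resonance.max())
```

Off resonance, the tone barely registers; at the resonant frequency, the same sound level produces a large, entirely fictitious motion reading for the stabilizer to act on.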

Ghost images (and objects)

By determining the resonant frequencies of the materials of these sensors, which are typically in the ultrasonic range, the researcher and his team were able to transmit sound waves toward camera lenses, blurring images.

“You can then start creating fake silhouettes from these blurred images,” says Professor Fu. “So if there is machine learning, for example in a self-driving car, then the computer starts making mistakes when identifying objects.”
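Putting the two pieces together gives a rough, still purely illustrative picture of the downstream effect: if the stabilizer chases motion that was injected acoustically rather than real, the captured frame is smeared along that false trajectory, and a thin object in the scene becomes a much wider ghost silhouette, which is what the vehicle’s object detector is then asked to classify. The toy scene, trajectory, and width measure below are assumptions made for the sketch.

```python
# Illustrative sketch: false stabilizer motion smears the frame into "ghosts".
import numpy as np

def motion_blur(frame, shifts_px):
    """Average the frame along the (false) trajectory reported by the sensor."""
    blurred = np.zeros_like(frame, dtype=float)
    for dx in shifts_px:
        blurred += np.roll(frame, int(dx), axis=1)   # horizontal smear
    return blurred / len(shifts_px)

def silhouette_width(frame):
    """Width in pixels of the region a detector would treat as an object."""
    columns_with_content = np.flatnonzero(frame.max(axis=0) > 0)
    return columns_with_content[-1] - columns_with_content[0] + 1

# A 64x64 scene containing one thin vertical pole, 4 pixels wide.
scene = np.zeros((64, 64))
scene[20:44, 30:34] = 255

# The false shake injected through the resonating sensor, in pixels.
injected_trajectory = np.linspace(0, 15, 8)
ghost = motion_blur(scene, injected_trajectory)

print("real object width:", silhouette_width(scene))       # 4 pixels
print("ghost silhouette width:", silhouette_width(ghost))  # 19 with these numbers
```

Any machine-learning model consuming the blurred frame is now classifying a shape that does not match what is physically in front of the camera, which is the opening the attack exploits.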

By exploring this method, Professor Fu and his team were able to change the way self-driving cars and drones perceive their surroundings. To the human eye, the blurry images produced by a poltergeist attack might not look like much. But by subverting a driverless car’s object-detection algorithm, the silhouettes and ghosts conjured by these attacks can turn anything into people, stop signs, or whatever the attacker wants the vehicle to see, or not to see.

For a smartphone, the impact is significant, but for autonomous systems installed in fast-moving vehicles, the consequences could be particularly dramatic, says Professor Fu.

For example, the researcher says, it would be possible for a self-driving car to see a stop sign when there is nothing on the road, which could lead to a sudden stop on a busy road. A poltergeist attack could also cause an object, including a person or another car, to “disappear,” causing the vehicle to continue moving and drive into the invisible “object.”

“It depends on many other things, such as the software configuration, but it is starting to show cracks in the digital wall that usually leads us to trust machine learning,” says the professor.

He hopes that engineers will eliminate this type of vulnerability. If they do not, Fu warns, the threat to consumers, businesses and the entire tech world will only grow as machine learning and autonomous technologies become more widespread.

“Proponents of new technologies want consumers to adopt them, but if these technologies are not truly protected against this kind of cybersecurity threat, people will not trust them or use them,” the professor says. “We would then see a decades-long setback in which these technologies simply are not used.”
