Hackers forced a Tesla to enter the wrong lane


Hackers have demonstrated how they could trick a Tesla Model S into entering the wrong lane using a method called an “adversarial attack,” a technique for manipulating the inputs to a machine learning (ML) model so that it produces incorrect outputs.

Tesla’s Autopilot recognises lanes and assists steering by identifying road markings. Researchers from the Keen Security Lab of Chinese tech giant Tencent showed that by placing small interference stickers on the road, they could feed the Autopilot system misleading information, forcing it to make an abnormal judgement and steer the vehicle into the wrong lane.
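For readers unfamiliar with the term, below is a minimal sketch of a digital adversarial attack. The Tencent researchers perturbed the physical world (stickers on the road surface); the classic digital analogue is the Fast Gradient Sign Method (FGSM), which nudges each pixel of an input image in the direction that most increases the model’s loss. The PyTorch classifier, random image and labels here are illustrative stand-ins, not anything from the Keen Security Lab research, whose lane-detection target is proprietary.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# A pretrained classifier as a stand-in victim model; the actual
# Autopilot lane-detection network is not publicly available.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Fast Gradient Sign Method: shift every pixel by at most
    `epsilon` in the direction that increases the loss for `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb along the sign of the loss gradient, then keep the
    # result a valid image in [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Illustrative usage on a random stand-in image.
image = torch.rand(1, 3, 224, 224)
pred = model(image).argmax(dim=1)      # the model's current prediction
adv = fgsm_attack(image, pred)         # push the image away from it
print(pred.item(), model(adv).argmax(dim=1).item())
```

The Tencent attack rests on the same principle, except that the perturbation is realised physically, as markings the car’s camera sees, rather than as pixel edits to a digital image.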

“In this demonstration, the researchers adjusted the physical environment (e.g. placing tape on the road or altering lane lines) around the vehicle to make the car behave differently when autopilot is in use,” a Tesla spokesperson was quoted as saying in a Keen Security Lab blog post.

“This is not a real world concern given that a driver can easily override autopilot at any time by using the steering wheel or brakes and should be prepared to do so at all times,” the spokesperson said.

According to a report this month in The Download, MIT Technology Review’s newsletter, adversarial attacks could become more common as machine learning is deployed more widely, especially in areas like network security.
