Tesla Autopilot Duped By ‘Phantom’ Images

  • Researchers said that they were able to create “phantom” images purporting to be an obstacle, lane or road sign and trick systems into believing that they are legitimate.

  • On the scale of level 0 (no automation) to level 5 (full automation), these autopilot systems are considered “level 2” automation.

  • Configuring ADAS systems so that they take into account objects’ context, reflected light, and surface would help mitigate the issue.


Research has shown that projecting a two-dimensional image of a person in front of a Tesla Model X caused its autopilot system to slow the car down: the system, like those used by other popular cars, mistook the flat image for a real person.
 


Phantom images explained..


The researchers found that these autopilot systems can be fooled into detecting fake images projected onto the road or nearby billboards, for example by drones. They warn that this flaw could prove fatal, as attackers could exploit it to make the systems brake suddenly or steer cars into oncoming traffic. Similarly, projecting fake lane lines caused the Model X to temporarily ignore the road's physical lane markings: where the fake and real lines crossed, the car followed the projected markers into the wrong traffic lane, because the projections were brighter than the real-life markings.

 

“When projecting images on vertical surfaces (as we did in the case with the drone) the projection is very simple and does not require any specific effort. When projecting images on horizontal surfaces (e.g., the man projected on the road), we had to morph the image so it will look straight by the car’s camera since we projected the image from the side of the road. We also brightened the image in order to make it more detectable since a real road does not reflect light so well.”

- Ben Nassi, Researcher, Ben-Gurion University of the Negev
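To make concrete what Nassi describes, the sketch below shows one way a projected image could be perspective-warped (so it appears undistorted from the car camera's viewpoint) and brightened before projection onto a road surface. This is an illustrative sketch only, not code from the study; the file name, corner coordinates, and brightness gain are hypothetical placeholders.

```python
# Minimal sketch (not from the study): perspective-warp and brighten an image
# before projecting it onto a horizontal surface from an off-axis projector.
# Corner coordinates and brightness values are hypothetical placeholders.
import cv2
import numpy as np

def prepare_phantom(img, src_corners, dst_corners, gain=1.4, bias=30):
    """Warp `img` so it looks rectangular from the target viewpoint, then brighten it."""
    # Homography mapping the image corners to their keystone-corrected positions.
    H = cv2.getPerspectiveTransform(
        np.float32(src_corners), np.float32(dst_corners))
    warped = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
    # Brighten to compensate for the road surface reflecting light poorly.
    return cv2.convertScaleAbs(warped, alpha=gain, beta=bias)

img = cv2.imread("pedestrian.png")               # hypothetical input image
src = [(0, 0), (639, 0), (639, 479), (0, 479)]   # original image corners
dst = [(80, 0), (560, 0), (639, 479), (0, 479)]  # example keystone-corrected corners
phantom = prepare_phantom(img, src, dst)
cv2.imwrite("phantom_projection.png", phantom)
```

The homography simply remaps the image's corners to corrected positions for an off-axis projector, and the brightening step mirrors the researchers' point that a real road reflects light poorly, so the image had to be boosted to remain detectable.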



The issue originates in advanced driving assistance systems (ADAS), which semi-autonomous vehicles use to assist the driver while driving or parking. ADAS is designed to increase driver safety by detecting and reacting to obstacles on the road. However, the researchers said that they were able to create “phantom” images purporting to be an obstacle, lane or road sign; use a projector to transmit the phantom within the autopilot’s range of detection; and trick the systems into believing they are legitimate.

 

 


“The absence of deployed vehicular communication systems, which prevents the advanced driving assistance systems (ADASs) and autopilots of semi/fully autonomous cars to validate their virtual perception regarding the physical environment surrounding the car with a third party, has been exploited in various attacks suggested by researchers.”

- Research Team, Ben-Gurion University of the Negev


The researchers also showed how the Tesla Model X can be made to brake suddenly by projecting a phantom image, perceived as a person, in front of the car.
Mobileye's 630 Pro technology also proved easily confused by projected imagery. Notably, Mobileye's traffic sign recognition system was fooled into relaying incorrect speed limit information to the driver due to its inability to distinguish between projected and physical speed limit signage.
 



What does Tesla have to say?


Tesla, of course, was not available for comment.
 

The researchers, though, said that they are in touch with both Tesla and Mobileye regarding the issue. Mobileye maintained that the phantom attacks do not stem from an actual vulnerability in its system, the researchers revealed. Tesla, for its part, told the researchers, “We cannot provide any comment on the sort of behavior you would experience after doing manual modifications to the internal configuration – or any other characteristic, or physical part for that matter – of your vehicle.”


The researchers responded that they had made no modifications that influenced the behavior that led the car to steer into the lane of oncoming traffic or brake suddenly after detecting a phantom. They excluded that demonstration from the research paper, as Tesla’s stop sign recognition system is experimental and not considered deployed functionality.


The research team has suggested that configuring ADAS systems to take into account an object’s context, reflected light, and surface would help mitigate the issue, as it would allow the systems to better distinguish phantom images from real objects.
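As a rough illustration of what such a countermeasure might look like, the sketch below combines independent confidence scores for an object's context, surface, and reflected light into a single real-versus-phantom decision. The weights, threshold, and scoring interface are assumptions made for illustration; they are not the researchers' actual model.

```python
# Minimal sketch (assumptions, not the researchers' implementation): combine
# per-aspect confidence scores into a single real-vs-phantom decision.
from dataclasses import dataclass

@dataclass
class AspectScores:
    context: float   # does the object make sense where it appears? (0..1)
    surface: float   # does its surface texture look physical? (0..1)
    light: float     # does it reflect light plausibly? (0..1)

def is_real_object(scores: AspectScores,
                   weights=(0.4, 0.3, 0.3),
                   threshold=0.5) -> bool:
    """Weighted vote over the three aspects; weights and threshold are illustrative."""
    combined = (weights[0] * scores.context
                + weights[1] * scores.surface
                + weights[2] * scores.light)
    return combined >= threshold

# Example: a bright projection with an implausible surface and no realistic
# reflected light scores low on two aspects and is rejected as a phantom.
print(is_real_object(AspectScores(context=0.6, surface=0.2, light=0.1)))  # False
```

The point of the design is simply that a phantom can fool a single detector, but it is much harder to fake context, surface texture, and light reflection all at once.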





Truth be told..


The research findings are disturbing. On the scale of level 0 (no automation) to level 5 (full automation), these two systems are considered “level 2” automation. It is a clear indication that semi-autonomous autopilot systems still require human intervention.
 

Though the phantom attacks are not security vulnerabilities in the traditional sense, the researchers say they reflect a fundamental flaw in object-detection models that were never trained to distinguish between real and fake objects.
 

Connected cars have long been a target for hackers, with vehicle-related attacks ranging from keyless entry systems to in-vehicle infotainment systems. This study confirms that autopilot systems and other advanced driver-assist features can make the driving experience safer, but they do not replace the act of driving or the human behind the wheel.
