Nothing is more vital for an autonomous vehicle than being aware of the environment around it. Like human drivers, autonomous vehicles need the ability to make decisions instantly.
Today, most autonomous vehicles rely on several kinds of sensors to perceive the world. Most systems use a combination of cameras, radar and LiDAR (light detection and ranging) sensors. Onboard computers fuse this data into a comprehensive picture of what is happening around the vehicle. This information is what allows autonomous vehicles to navigate the streets safely. Vehicles that use multiple sensor systems are both more capable and safer (each system can serve as a check on the others), but no system is immune to attack.
Unfortunately, these systems are not entirely foolproof. Camera-based perception systems, for example, can be deceived simply by placing stickers on road signs, altering the signs’ meaning.
Our research, conducted with the RobustNet Research Group at the University of Michigan together with computer scientist Qi Alfred Chen of UC Irvine and colleagues from the SPQR lab, has demonstrated that LiDAR-based perception systems can be fooled, too.
By strategically spoofing the LiDAR sensor’s signals, the attack can trick the vehicle’s LiDAR perception system into “seeing” a nonexistent obstacle. If that happens, the vehicle could cause a crash by braking abruptly or by blocking traffic.
Spoofing LiDAR signals
LiDAR-based perception systems have two components: the sensor itself and the machine learning model that processes the sensor’s data. A LiDAR sensor measures the distance between itself and its surroundings by emitting a light pulse and measuring how long the pulse takes to bounce off an object and return to the sensor. The duration of this round trip is known as the “time of flight.”
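As a concrete illustration, the time-of-flight calculation is simple: the pulse travels to the object and back, so the one-way distance is half the round trip at the speed of light. A minimal sketch (the 200-nanosecond figure is an illustrative value, not from our experiments):

```python
# Converting a LiDAR time-of-flight measurement to distance.
# The pulse travels out and back, so divide the round trip by two.

C = 299_792_458.0  # speed of light, in meters per second

def distance_from_tof(tof_seconds: float) -> float:
    """Return the one-way distance in meters for a round-trip time."""
    return C * tof_seconds / 2.0

# A return pulse arriving about 200 nanoseconds after emission
# corresponds to an object roughly 30 meters away.
print(round(distance_from_tof(200e-9), 2))  # ~29.98
```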
A LiDAR unit can send out hundreds of thousands of light pulses each second. The machine learning model uses the returned pulses to build a map of the world around the vehicle, much the way bats use echolocation to locate obstacles in the dark.
The problem is that these pulses can be spoofed. To fool the sensor, an attacker can shine their own light signal at it. That alone is enough to confuse the sensor.
It is much harder, however, to make the LiDAR sensor “see” a vehicle that is not there. To succeed, the attacker must precisely time the signals fired at the victim’s LiDAR. This has to happen at the nanosecond level, because the signals travel at the speed of light. Small timing differences will stand out when the LiDAR calculates distance from the measured time of flight.
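The nanosecond requirement follows directly from the time-of-flight arithmetic. A small sketch of the sensitivity (the 10-nanosecond figure is just an example):

```python
# Why spoofing must be timed at the nanosecond level: any error in
# when the fake pulse arrives shifts the distance the LiDAR computes.

C = 299_792_458.0  # speed of light, in meters per second

def distance_error(timing_error_seconds: float) -> float:
    """Distance offset caused by an error in pulse arrival time."""
    return C * timing_error_seconds / 2.0

# Even a 10-nanosecond slip moves the spoofed point by about 1.5
# meters, enough to make the injected return look inconsistent.
print(round(distance_error(10e-9), 3))  # ~1.499
```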
If an attacker succeeds in fooling the LiDAR sensor, they must also fool the machine learning model behind it. Research conducted at the OpenAI research lab has shown that machine learning models are vulnerable to specially crafted inputs or signals, known as adversarial examples. Specially designed stickers on traffic signs, for instance, can deceive camera-based perception.
We found that an attacker could use the same technique to craft perturbations that work against LiDAR. These would not be visible stickers, but spoofed signals specially designed to fool the machine learning model into believing obstacles are present when in fact there are none. The LiDAR sensor feeds the hacker’s fake signals to the machine learning model, which identifies them as obstacles.
This adversarial example, the fake object, could be crafted to meet the expectations of the machine learning model. For instance, an attacker could create the signal of a vehicle that is not moving. Then, to carry out the attack, they might set it up at an intersection or place it on a vehicle driving ahead of the autonomous car.
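Conceptually, the spoofed returns simply become extra points in the point cloud the model consumes. A toy sketch of that data flow (the function names, cluster shape and point counts are illustrative assumptions, not our actual attack implementation; a real attack must also respect the spoofing hardware's physical limits on point count and angular range):

```python
# Toy sketch of point-cloud injection: genuine returns plus a dense
# fake cluster placed where the phantom obstacle should appear.
import random

def genuine_scan():
    """Stand-in for real LiDAR returns: points on flat ground ahead."""
    return [(x * 0.5, y * 0.5, 0.0) for x in range(1, 20) for y in range(-5, 6)]

def spoofed_cluster(center, n_points=60, spread=0.4):
    """Fake returns clustered around the phantom obstacle's position."""
    cx, cy, cz = center
    return [
        (cx + random.uniform(-spread, spread),
         cy + random.uniform(-spread, spread),
         cz + random.uniform(0.0, 1.5))  # give the cluster some height
        for _ in range(n_points)
    ]

# The perception model sees one combined cloud; the dense cluster
# around (8, 0) can be classified as an obstacle directly ahead.
cloud = genuine_scan() + spoofed_cluster(center=(8.0, 0.0, 0.0))
print(len(cloud))
```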
Two possible attacks
To demonstrate the designed attack, we chose an autonomous driving system used by many carmakers: Baidu Apollo. The product has more than 100 partners and has reached mass-production agreements with several manufacturers, including Volvo and Ford.
Using real-world sensor data collected by the Baidu Apollo team, we demonstrated two different attacks. In the first, an “emergency brake attack,” we showed how an attacker can suddenly halt a moving vehicle by tricking it into believing an obstacle has appeared in its path. In the second, an “AV freezing attack,” we used a spoofed obstacle to fool a vehicle stopped at a red light into remaining stopped after the light turned green.
By exploiting these vulnerabilities in autonomous driving perception systems, we hope to sound an alarm for the teams building autonomous technologies. Research into new types of security problems in self-driving systems is just beginning, and we hope to uncover more possible problems before they can be exploited by bad actors.