It’s that time of year again, which means the world gets a new iPhone. And in the case of the iPhone 12, we’re getting four new ones. This time, the iPhone has a small surprise in store: the Pro models carry an integrated Light Detection and Ranging (LiDAR) sensor. That means the iPhone got LiDAR before Tesla, or basically any other manufacturer outside of Audi.
Automakers are all sprinting toward the goal of self-driving cars, although each one is headed down a different path to achieve it. Some are choosing LiDAR to map out vehicle surroundings, while others have instead chosen to use vision-based systems to achieve the same result at a lower cost.
But Tesla CEO Elon Musk—effectively the king of consumer-grade partial automation right now—has historically insisted that vision-based automation is the future and that LiDAR is nothing more than a crutch for companies that can’t master the machine learning needed to process camera output.
“LiDAR is a fool’s errand,” Musk said during Tesla’s first Autonomy Day event in 2019. “Anyone relying on LiDAR is doomed. They are expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.”
LiDAR works by emitting directional beams of invisible light that bounce off nearby objects and return to the unit. The Time of Flight, meaning the time it takes a beam to leave the emitter, reflect off an object, and reach the receiver, tells the sensor how far away that object is. By processing all of those measurements together, the unit builds a three-dimensional point cloud of its surroundings.
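The math behind that description is straightforward: because the beam travels out and back, the one-way distance is the speed of light times half the round-trip time, and each beam’s firing direction turns that distance into a 3D point. Here is a minimal sketch of that calculation; the values and function names are illustrative, not any vendor’s actual implementation.

```python
# Hypothetical sketch of the time-of-flight math described above.
import math

C = 299_792_458.0  # speed of light in m/s

def tof_to_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting object: the beam travels out and back,
    so halve the round-trip time before multiplying by the speed of light."""
    return C * round_trip_seconds / 2.0

def beam_to_point(round_trip_seconds: float, azimuth_rad: float, elevation_rad: float):
    """Convert one beam's time of flight and firing direction into an
    (x, y, z) point; a full sweep of such points forms the point cloud."""
    r = tof_to_distance(round_trip_seconds)
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A beam returning after roughly 66.7 nanoseconds hit something about 10 m away.
print(round(tof_to_distance(66.7e-9), 2))
```

The nanosecond scale of these round trips is why LiDAR units need very fast, precise timing hardware, even at the modest ranges an iPhone sensor covers.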
For the iPhone 12, the LiDAR sensor is integrated into Apple’s ARKit to supplement its existing vision-based augmented reality features. LiDAR helps make camera autofocusing quick and accurate while also placing the Snapchat dancing hot dog on surfaces with better accuracy.
But for vehicles with partial automation, LiDAR can improve the precision of self-driving decisions. Unlike the iPhone’s static LiDAR unit, vehicles used for autonomy testing typically have large rotating units mounted to their roofs; Waymo calls its own solution the Laser Bear Honeycomb. These supplement vision-based systems by adding a layer of data on top of what the cameras can process. For example, if it’s too dark for a camera to pick up certain objects, the LiDAR unit can still recognize that something is in the vehicle’s path.
A prime example of this calls back to the fatal crash involving a partially automated Uber vehicle and a pedestrian in Tempe, Arizona. The LiDAR unit, which was supplied by Velodyne, picked up the presence of 49-year-old Elaine Herzberg six seconds before the accident. However, the software processing the LiDAR unit’s data failed to recognize that emergency braking was necessary to avoid a collision until 1.3 seconds before impact.
Likewise, LiDAR can reliably pick up stationary objects. Tesla’s Autopilot has drawn criticism for failing to recognize stopped vehicles, including emergency vehicles and overturned trucks. So while a camera-only approach is certainly more affordable, the risk of missing such objects because of lighting, weather conditions, or myriad other reasons seems inherently large for a company that supposedly plans to launch a million robotaxis by the end of the year.
And this isn’t the first time Tesla’s Autopilot has been fooled by something seen on its cameras. One notable example is the Model S that was tricked into accelerating by black tape slapped onto a speed limit sign. You can read more about that here, but the point is the same: a visual cue caused the vision-based software to behave unexpectedly. Remember: if you can trick a human, chances are you can also trick a camera.
Musk has historically been anti-LiDAR; however, it would be naive to disregard the shortcomings of vision-based autonomy in its current state. Does this mean Tesla can’t solve those problems with more mature software? No, and the company aims to do exactly that by launching testing of its rewritten “Full Self-Driving” suite as early as next week. Still, as partial autonomy matures, a hybrid of LiDAR and vision-based systems may prove to be the key to rapid development.
At any rate, know that your next phone may have a key sensor that a Tesla doesn’t.