Apple may have just reignited rumors about its Apple Car ambitions by publishing a research paper online — describing how neural networks can be used to detect objects relevant to autonomous cars in 3D point clouds.
The results could help improve the accuracy of LiDAR technology, in which pulsed laser light is used to measure distances to objects.
The paper, titled “VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection,” is the work of Apple AI researcher Yin Zhou and machine learning expert Oncel Tuzel. It describes how 3D point clouds can make it difficult for technologies, such as autonomous vehicles, to detect objects with the necessary precision and speed.
In response, Zhou and Tuzel describe training a neural network — called VoxelNet — to learn complex features for recognizing 3D shapes. Their results outperformed current LiDAR-based detection algorithms and image-based approaches “by a large margin.”
The neural network was trained to recognize three basic object classes: cars, pedestrians, and cyclists.
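The “Voxel” in VoxelNet refers to dividing the 3D point cloud into a grid of small cubes (voxels) so that features can be learned per cell rather than per raw point. As a rough illustration of that first grouping step — not Apple’s actual code, and with all names, parameters, and coordinates chosen purely for the example — a voxelization sketch might look like this:

```python
import numpy as np

def voxelize(points, voxel_size, grid_range):
    """Group 3D points into cubic voxels within a bounded region.

    points: (N, 3) array of x, y, z coordinates.
    voxel_size: edge length of each cubic voxel (illustrative value).
    grid_range: (min_xyz, max_xyz) bounds of the region of interest.
    Returns a dict mapping voxel index tuples to lists of points.
    """
    lo, hi = (np.asarray(b, dtype=float) for b in grid_range)
    # Keep only points inside the region of interest.
    mask = np.all((points >= lo) & (points < hi), axis=1)
    kept = points[mask]
    # Integer voxel coordinates for each remaining point.
    idx = np.floor((kept - lo) / voxel_size).astype(int)
    voxels = {}
    for coord, pt in zip(map(tuple, idx), kept):
        voxels.setdefault(coord, []).append(pt)
    return voxels

# Toy point cloud: two nearby points and one distant point.
cloud = np.array([[0.1, 0.1, 0.1], [0.2, 0.15, 0.05], [1.6, 1.7, 0.3]])
voxels = voxelize(cloud, voxel_size=0.5, grid_range=([0, 0, 0], [2, 2, 2]))
print(len(voxels))  # → 2 (the two nearby points share one voxel)
```

In the paper, each occupied voxel is then passed through a small network that summarizes its points into a feature vector, which is what lets the detector work directly on sparse LiDAR data.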
Apple’s autonomous ambitions
Apple’s autonomous car project recently turned up in the news again after the company’s self-driving car was spotted by the CEO of a self-driving car start-up called Voyage.
Apple’s revised autonomous vehicle looks to be packing six Velodyne-made LiDAR sensors, several radar units, and a number of cameras, all encased in a white plastic shell. While it’s not as streamlined as some of the rival technologies out there, it’s definitely looking more serious than the last rig we saw out of Cupertino.
Interestingly, the technology described in Apple’s new research paper isn’t dissimilar from the way that the iPhone X recognizes faces for its Face ID facial recognition technology — which also relies on a depth-sensing point cloud using lasers.
Via: Apple Insider