Toyota's Woven Planet is Now ‘Training’ its Self-Driving Development Vehicles Using Data From Low-Cost Cameras, an Approach Used by Tesla


【Summary】Toyota’s Woven Planet Holdings, which is working on advanced technology for the automaker including autonomous driving, is taking a new camera-based approach in advancing its self-driving development without the use of expensive vehicle sensors such as lidar, Reuters reported on Wednesday. The team at Woven Planet views it as a "breakthrough" that may help drive down costs and better scale Toyota’s autonomous driving technology.

Eric Walz    May 13, 2022 10:45 AM PT
A data-collection camera pod developed by Toyota subsidiary Woven Planet is seen atop a test vehicle in the San Francisco Bay Area on April 5, 2022. (Photo: Woven Planet/Handout via REUTERS)

Toyota's Woven Planet Holdings, which is working on advanced technology for the automaker including autonomous driving, is taking a new camera-based approach to advancing its self-driving development without the use of expensive vehicle sensors such as lidar, Reuters reported on Wednesday.

Woven Planet is joining Tesla in refining its autonomous driving systems with low-cost cameras mounted on its vehicles rather than more expensive sensors such as lidar.

Tesla Chief Executive Elon Musk has been an outspoken critic of lidar for automated driving. Musk referred to the use of lidar as a "fool's errand" despite its widespread use by automakers and dozens of autonomous driving startups. 

Musk views lidar as "unnecessary" for autonomous driving, and the company's camera-based Full Self-Driving (FSD) and Autopilot systems do not use lidar data to navigate. Now Toyota's Woven Planet will take the same approach to refine its software.

Woven Planet told Reuters in an interview that it can use low-cost cameras to collect data and train its self-driving system as effectively as it can with data from expensive sensors. The team at Woven Planet views it as a "breakthrough" that may help drive down costs and better scale Toyota's autonomous driving technology.

Autonomous driving deep-learning algorithms rely on massive amounts of training data, which help them improve over time. The data is collected from a sensor suite on the vehicles, which generally includes cameras, radar and lidar. These systems can be further refined in computer simulation environments built using real-world data collected from vehicles.

The data collected from the individual sensors is stitched together in a process known as "sensor fusion".
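
To give a rough sense of what that stitching can look like, the minimal Python sketch below pairs each camera frame with the radar and lidar readings captured closest to it in time. The Reading class, its field names and the nearest-timestamp pairing are illustrative assumptions, not a description of Woven Planet's actual pipeline.

from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class Reading:
    timestamp: float   # seconds since the start of the drive (hypothetical)
    sensor: str        # "camera", "radar" or "lidar"
    payload: object    # image, radar return or lidar point cloud

def nearest(readings, t):
    # Find the reading whose timestamp is closest to t; readings must be sorted by time.
    times = [r.timestamp for r in readings]
    i = bisect_left(times, t)
    candidates = readings[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda r: abs(r.timestamp - t))

def fuse(camera_frames, radar_readings, lidar_scans):
    # Build one fused training sample per camera frame.
    return [
        {
            "camera": frame,
            "radar": nearest(radar_readings, frame.timestamp),
            "lidar": nearest(lidar_scans, frame.timestamp),
        }
        for frame in camera_frames
    ]

In a real system the sensors also have to be aligned spatially through calibration of each sensor's position on the car; the sketch only covers the time-alignment step.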

The problem is that collecting all of this data is a time-consuming effort. Tesla, for example, gathers camera data from a networked fleet of over 1 million vehicles on the road to improve its autonomous driving technology. That's a lot of data.

For Toyota, gathering diverse driving data from a massive fleet of cars is critical to developing a robust self-driving system, but testing and refining autonomous vehicles with expensive sensors is costly and does not scale, Woven Planet told Reuters.

"We need a lot of data. And it's not sufficient to just have a small amount of data that can be collected from a small fleet of very expensive autonomous vehicles," Michael Benisch, vice president of Engineering at Woven Planet, said to Reuters in an interview.

"Rather, we're trying to demonstrate that we can unlock the advantage that Toyota and a large automaker would have, which is access to a huge corpus of data, but with a much lower fidelity," he added. 

Benisch is a former engineering director at ride-hailing company Lyft Inc's Level 5 self-driving division, which Woven Planet Holdings acquired in April of last year for $550 million.

Before Toyota acquired the assets of Lyft's Level 5 division, the company was collecting data from low-cost cameras installed in the vehicles of its driver partners. The camera data also helped Lyft improve the HD maps that autonomous vehicles use to navigate. Lyft discovered that simple dash cams used by its drivers were an ideal way to collect additional data while the vehicles were operating in urban areas.

The camera data included footage of intersections, bicyclists and pedestrians, as well as the behavior of other drivers, captured while Lyft drivers were out picking up passengers.

Lyft used a combination of 3D computer vision and machine learning to automatically identify traffic objects in the camera feed, such as other vehicles, pedestrians and road signs.
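
Lyft has not published the details of that pipeline, but the short Python sketch below shows the general idea using an off-the-shelf pretrained detector from torchvision standing in for Lyft's proprietary models. The model choice, the dash-cam file name and the confidence threshold are assumptions made for illustration.

import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Load a detector pretrained on the COCO dataset.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "dashcam_frame.jpg" is a hypothetical frame pulled from a driver's dash cam.
frame = to_tensor(Image.open("dashcam_frame.jpg").convert("RGB"))

with torch.no_grad():
    detections = model([frame])[0]

# Keep confident detections of road users (COCO class ids: 1 = person, 2 = bicycle, 3 = car).
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.7 and label.item() in {1, 2, 3}:
        print(label.item(), round(score.item(), 2), [round(v, 1) for v in box.tolist()])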

Woven Planet is using cameras that are 90% cheaper than the sensors it used before, according to Reuters. The cameras can be easily installed in fleets of passenger cars.

Woven Planet told Reuters that training primarily on data from the low-cost cameras boosted its system's performance to a level similar to when the system was trained exclusively on data from high-cost sensors such as lidar.

The camera data also helps Woven Planet better understand human driving patterns. Using visual localization technology, Woven Planet can more accurately track the real-world trajectories that human drivers follow when making turns or traveling in a lane.

Collecting data on human driving patterns from cameras will allow Toyota to create autonomous driving software that more closely mimics how human drivers navigate an urban environment.
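
The visual localization step itself is proprietary, but the Python sketch below shows one common way to recover a vehicle's trajectory from a single camera: matching features between successive frames and chaining the relative poses with OpenCV. The camera intrinsics matrix K, the ORB feature detector and the frame file names are assumptions, and monocular odometry like this cannot recover absolute scale on its own.

import cv2
import numpy as np

# Hypothetical camera intrinsics; a real system would use calibrated values.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(img_prev, img_curr):
    # Match ORB features between two frames and estimate the relative camera motion.
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Chain the frame-to-frame motions into a trajectory of camera positions.
frame_files = ["frame0.png", "frame1.png", "frame2.png"]  # hypothetical file names
frames = [cv2.imread(f, cv2.IMREAD_GRAYSCALE) for f in frame_files]
pose = np.eye(4)
trajectory = [pose[:3, 3].copy()]
for prev, curr in zip(frames, frames[1:]):
    R, t = relative_pose(prev, curr)
    step = np.eye(4)
    step[:3, :3] = R
    step[:3, 3] = t.ravel()
    pose = pose @ np.linalg.inv(step)  # accumulate motion in the first frame's coordinates
    trajectory.append(pose[:3, 3].copy())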

Using this method, Lyft built one of the world's largest datasets of real-world driving scenarios. With the acquisition of Lyft's Level 5 unit, the torch has now been passed to Toyota to improve upon it.

For commercial self-driving vehicles, such as robotaxis that will carry passengers, Toyota plans to use a comprehensive sensor suite of cameras, lidar and radar. Woven Planet determined that this is the best and safest approach to developing robotaxis at scale.

Benisch told Reuters that it's entirely possible that camera-based autonomous driving technology, the approach championed by Elon Musk for Tesla's FSD and Autopilot, could eventually catch up with and overtake systems that rely on lidar and radar sensors.

Current advanced driver assist systems offered on many new vehicles also rely on forward-facing cameras for features such as lane-keeping assistance (LKA), automatic emergency braking (AEB), automated valet parking and adaptive cruise control (ACC).

In May of 2021, Tesla announced it was replacing the radar sensor on Model 3 and Model Y vehicles sold in North America with cameras to better support Tesla's Autopilot and its latest FSD feature.

Toyota was awarded more U.S. patents than any other automaker in 2021, according to a recent annual ranking by the Intellectual Property Owners Association (IPO), an industry trade group for owners of patents, trademarks, copyrights and trade secrets.

A significant portion of these patents were awarded to the Woven Planet Group, which was founded in 2018 as an expansion of the operations of the Toyota Research Institute (TRI). The unit is tasked with developing autonomous driving and other advanced mobility technology for the automaker.  

The Woven Planet Group of Toyota includes four companies: Woven Planet Holdings, Woven Core, Woven Alpha and Woven Capital. The four companies are developing autonomous driving technology, robotics, smart city technology and advanced mobility solutions.


Source: Reuters
