Mobileye CEO Shashua expects more autonomous vehicles on the road in 2 years as tech moves ahead

DETROIT (AP) — Six years ago, automakers and tech companies thought they were on the cusp of putting thousands of self-driving robotaxis on the street to carry passengers without a human driver.

Then an Uber autonomous test vehicle hit and killed a pedestrian in Arizona, multiple problems arose with Tesla’s partially automated systems, and General Motors’ Cruise robotaxis ran into trouble in San Francisco.

Yet the technology is moving ahead, says Amnon Shashua, co-founder and CEO of Mobileye, an Israeli public company majority owned by Intel that has pioneered partially automated driver-assist systems and fully autonomous technology.

Already, Mobileye systems are at work in vehicles that take on some driving functions such as steering and braking, but a human still has to be ready to take over. Systems that let drivers take their eyes off the road and fully autonomous systems are coming in about two years.

Shashua talked with The Associated Press about the next steps toward autonomous vehicles. The interview has been edited for length and clarity.

A: When you talk about autonomous vehicles, what immediately comes to mind is Waymo, Cruise, robotaxis. But the story is much more nuanced. It really opens up how the future of the car industry is going to look. It’s not just robotaxis. I would frame it as three stories. The first one is about safety. Today you have a front-facing camera, sometimes a front-facing radar. There are functions that enable accident avoidance. You can take safety to a much higher level by having multiple cameras around the car. An accident would be very rare.

The second story is to add more redundant sensors, like a front-facing lidar (laser) and imaging radars, and start enabling a hands-free, eyes-off (the road) system. You are legally allowed not to pay attention and not to be responsible for driving on certain roads. It could start with highways and then add secondary roads. This is a value proposition of productivity, of buying back time. If you are driving from San Francisco to Los Angeles, 90% of the time you are on interstate highways. You can relax and legally do something else, like work on your smartphone.

Then comes the third story. This is the robotaxi, where there’s no driver and the car is utilized to a much higher level, moving people the way Uber and Lyft do but in a much more efficient, economical way because you don’t have a driver.

A: Mobileye’s SuperVision, which is now on about 200,000 vehicles in China and will start to expand to Europe and the U.S. this year, has 11 cameras around the car and provides a hands-free but eyes-on system. The second story, an eyes-off system on highways, is already in the works. Mobileye announced that we have a global Western OEM (original equipment manufacturer) as a customer. We call the system Chauffeur. It adds a front-facing lidar and imaging radars, with nine car models to be launched in 2026.

The third story: if you look at the success of Waymo, its challenge is not technological. It’s more about how to scale and build a business. Deployment of these kinds of robotaxis is slower than originally expected five years ago. But it is something that is really, really happening. Mobileye is working with Volkswagen on the ID. Buzz (van) to start deploying thousands of such vehicles in 2026.

A: If a driver works on a smartphone and there is an accident, you cannot come to the driver and say, “You are responsible,” because I allowed you to do something else. So the bar for the system’s performance, what we call mean time between failures, has to be very high, much higher than human statistics. Liability is handled between the supplier and the automaker.
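To make that “much higher than human statistics” bar concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder for illustration, not a figure from Mobileye or the interview:

```python
# Illustrative sketch only: all numbers are hypothetical placeholders,
# not figures from Mobileye or the interview.

HUMAN_HOURS_PER_CRASH = 500_000    # assumed human mean time between crashes (driving hours)
SAFETY_MARGIN = 10                 # require the system to be 10x better than human statistics

required_mtbf_hours = HUMAN_HOURS_PER_CRASH * SAFETY_MARGIN

# Expected failures scale with total fleet driving hours.
fleet_vehicles = 100_000
hours_per_vehicle_per_year = 400
fleet_hours_per_year = fleet_vehicles * hours_per_vehicle_per_year

expected_failures_per_year = fleet_hours_per_year / required_mtbf_hours

print(f"Required MTBF: {required_mtbf_hours:,} hours")
print(f"Expected fleet-wide failures per year: {expected_failures_per_year:.1f}")
```

The point of the arithmetic is that even a system many times safer than a human driver will still produce some failures once it is deployed at fleet scale, which is why the liability question sits between the supplier and the automaker rather than with the person in the seat.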

A: Tesla’s technical capabilities are very high. The question is whether this kind of system, powered only by cameras, can eventually be an eyes-off system. This is where we part ways. We believe that we need additional sensors for redundancy. It’s not just a matter of improving the algorithms and adding more compute. You need to create redundancies, both from a sensor point of view and from a compute point of view.
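Shashua’s redundancy argument can be illustrated with a simple failure model. Assuming, hypothetically, that camera, lidar, and radar subsystems miss an object independently of one another, a redundant stack only misses when all of them fail at once, so the miss probabilities multiply. The probabilities below are invented for illustration:

```python
# Hypothetical, independent per-subsystem miss probabilities; illustrative only.
p_camera = 1e-4   # probability the camera subsystem misses an object
p_lidar  = 1e-3   # probability the lidar subsystem misses it
p_radar  = 1e-3   # probability the imaging radar misses it

# Camera-only stack: a camera miss is a system miss.
p_camera_only = p_camera

# Redundant stack (detection by any one sensor is enough):
# a system miss requires all three to fail at once. The multiplication
# assumes the failure modes are independent, which is the rationale for
# mixing physical modalities (vision, laser, radio).
p_redundant = p_camera * p_lidar * p_radar

print(f"Camera-only miss probability:     {p_camera_only:.1e}")
print(f"Redundant-stack miss probability: {p_redundant:.1e}")
```

If the sensors shared a failure mode, say all blinded by the same glare, the independence assumption breaks and the product overstates the benefit, which is why combining different sensing modalities matters more than duplicating the same sensor.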