(iTers News) - Few players in the autonomous car ecosystem are better poised than MobilEye to tap into new market opportunities for self-driving. Well known for its powerful but energy-efficient EyeQ vision computing processor, the Israel-based fabless chip maker has so far secured chip design engagements with 27 car makers across the world and has shipped over 15 million EyeQ vision processor chips to power their ADAS, or advanced driver assistance systems.

With another 4.5 million EyeQ chips expected to ship this year alone, the company will have about 20 million EyeQ-equipped cars on the road, gathering data on streets and highways in almost every part of the world.

This fleet of more than 20 million EyeQ-equipped cars will help the Israeli chip maker and its ecosystem partners create a real-time, scalable, automated high-definition road mapping system as well as networked, self-reinforcing machine learning technology.

The two technologies are regarded as the last pieces of the puzzle in the global car industry's long-held ambition to bring Level 4 and Level 5 fully autonomous cars to the streets.

In 2016, MobilEye presented a crowd-sourced mapping technology called REM, or Road Experience Management, laying the foundation for Level 3 semi-autonomous cars. The technology captures details such as road landmarks and lanes, along with changes in the road environment, in real time to create a high-definition (HD) map. MobilEye has joined hands with ecosystem partners such as map makers HERE and Japan's Zenrin to create real-time, automated, and localized HD maps.

The company is also working with car maker BMW and microprocessor giant Intel Corp. to commercialize a networked, self-reinforcing machine learning technology called "Driving Policy", which the chip maker refers to as the last mile to Level 4 and 5 fully autonomous cars.

MobilEye's REM works by exchanging sensing data between cars and cloud computing servers.

Most Level 3 semi-autonomous cars shipping from 2018 onward will come embedded with REM technology.
MobilEye CTO Amnon Shashua

    "We presented a crowd-sourced mapping technology last year. The idea we want to leverage is the fact that almost all new cars coming today already have a front-facing camera. The front-facing camera is very sophisticated in terms of sensing capability, with a lot of AI going on in camera sensing. So the camera can understand the front-facing environment, identify landmarks, identify lanes, and harvest information to create maps. It also uses landmarks for localization. Because crowd-sourcing uses equipment that already exists in cars today, we are talking about very, very minimal cost. Cost is very important in this industry, because the motivation for going to autonomous driving must be based on economic value. Since maps are critical to achieving autonomous driving, one has to find a way to have this map done almost cost-free," said Amnon Shashua, CTO and co-founder of MobilEye.

The following are excerpts from his remarks at the CES 2017 press conference, held on Jan. 5 in Las Vegas.

3 key technological pillars   

MobilEye identifies three major areas, which the company calls its three technological pillars, that the industry needs to address before fully autonomous driving becomes a reality.

The first is an obvious one: sensing. Cars have sensors around them: cameras, LiDAR laser scanners, and radars. All of this sensing information is sent to high-performance computers in order to understand and model the surrounding environment. The other two elements are relatively new. One is mapping. To create the right redundancy, one has to have a very detailed map, which MobilEye calls a high-definition map.

To localize cars inside this map, the map has to have very high accuracy, on the order of 10 centimeters, which is far beyond GPS. It is not only a technological problem but also a logistical one: how to update the maps within seconds whenever the environment changes. The third pillar is "Driving Policy". CTO Amnon Shashua sees this as the Achilles' heel for the entire industry. Spanning from sensing to mapping to action, it is all about how to handle and navigate busy traffic.
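
To make the localization requirement concrete, here is a minimal, hypothetical sketch (not MobilEye's algorithm; the function and the landmark coordinates are invented for illustration) of how matching camera-observed landmarks against their HD-map positions can pull a coarse GPS fix down toward the 10-centimeter range.

```python
# Minimal sketch (not MobilEye's implementation): refining a coarse GPS fix
# against known HD-map landmarks. Assumes each observed landmark is already
# matched to its map counterpart; positions are in a local metric frame.
import numpy as np

def localize(gps_estimate, map_landmarks, observed_offsets):
    """
    gps_estimate     : rough (x, y) position from GPS, metres
    map_landmarks    : (N, 2) absolute landmark positions from the HD map
    observed_offsets : (N, 2) landmark positions measured by the camera,
                       relative to the car
    Returns a refined (x, y) estimate.
    """
    # Each matched landmark implies a candidate car position.
    candidates = np.asarray(map_landmarks) - np.asarray(observed_offsets)
    # Averaging the candidates is a crude least-squares fix; a production
    # system would fuse this with odometry in a filter (e.g. a Kalman filter).
    return candidates.mean(axis=0)

# Example: GPS is ~2-3 m off; two landmarks pull the estimate back near the truth.
gps = np.array([102.0, 51.0])
landmarks = [[120.0, 60.0], [90.0, 45.0]]
offsets = [[20.1, 10.0], [-9.9, -5.1]]   # as seen from the true position (100, 50)
print(localize(gps, landmarks, offsets)) # ~[99.9, 50.05]
```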

Roadmap for Level 3 and Level 4/5 rollout 

MobilEye has five Level 3 semi-autonomous car development engagements. The highlight among them is the Level 3 autonomous driving system from Audi, called zFAS.

Audi presented it back in 2015 in a keynote here at CES. The zFAS is the most sophisticated and ambitious driving assistance program coming out in 2017. It contains a centralized computing box, multiple sensors, and 360-degree sensing. It provides a takeover request window of 10 seconds, an industry first: if the car wants the driver to take control, the driver does not need to take over instantaneously. There is a grace period of 10 seconds in which to take control. That is why it is called Level 3.

Audi's zFAS will hit the road in the second quarter of 2017. The other Level 3 launches include one model from BMW in 2021, one from Nissan, two from Audi in 2019, and one from Volvo.

MobilEye also has design engagements with five car makers to roll out Level 4 and 5 autonomous cars by 2021: Nissan, BMW, Audi, and electric vehicle makers Lucid and NIO.

Criteria for Level 3 semi-autonomous cars

Negotiation for merging in double lanes



CTO Amnon Shashua sets forth strict criteria for Level 3 semi-autonomous cars, calling for them to build in REM. They also have to embed a trifocal set of front cameras with fields of view of 150, 52, and 28 degrees, as well as a 360-degree radar cocoon.

A 125-degree wide-angle, front-facing LiDAR laser scanner and a rear-facing camera are required, too. According to him, a 120-degree wide-angle 7.4-megapixel camera will go into production by 2019. He said that REM will be implemented across all of MobilEye's Level 3, 4, and 5 autonomous driving production programs.
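
For illustration, the stated Level 3 criteria can be captured as a simple configuration check. This is a hypothetical sketch; the dictionary keys and helper function are not a MobilEye API.

```python
# Illustrative only: the Level 3 sensor criteria described above captured as a
# plain configuration dict. Keys and structure are hypothetical.
LEVEL3_SENSOR_SUITE = {
    "rem_required": True,                     # REM crowd-sourced mapping must be built in
    "front_trifocal_fov_deg": [150, 52, 28],  # wide / main / narrow front cameras
    "radar_cocoon_deg": 360,                  # surround radar coverage
    "front_lidar_fov_deg": 125,               # wide-angle front-facing LiDAR
    "rear_camera": True,                      # rear-facing camera also required
}

def meets_level3_criteria(vehicle: dict) -> bool:
    """Check a (hypothetical) vehicle sensor description against the criteria."""
    return all(vehicle.get(k) == v for k, v in LEVEL3_SENSOR_SUITE.items())

print(meets_level3_criteria(dict(LEVEL3_SENSOR_SUITE)))  # True
```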

What’s REM

REM is a crowd-sourced mapping technology. Unlike a traditional mapping approach, in which a fleet of cars with special, expensive scanning equipment drives roads across the world, REM's crowd-sourced approach is more cost-effective and scalable.

For example, REM can leverage MobilEye's existing sensing and data-mining capability, harvesting data in real time from the roughly 15 million EyeQ-equipped cars already on the road, to create HD maps.

"The general idea is that we can leverage current driving assistant (AI software inside EyeQ chip and sensors). There is not special equipment. The cost is minimal, because we leverage cameras already in cars. All we need is additional software with the ability to communicate. We are also leveraging cloud computing because bandwidth of data is very and very small. I am talking about 10 kilobytes per kilometers of driving. You can drive 100 kilometers with one megabyte. Then comes an automation. The entire process is automatically. Sensing in the car and sending to back-end cloud software is done automatically. So, there is no manual intervention. Then, we have a density and data source. We have millions of cars that have already cameras. Those millions of cars send information and then scaling up is very and very natural. So we can update maps. RIM is highly scalable and creates high-definition and live map at a very and very low cost. The only cost here is communicating 10 kilobytes per kilometer,” CTO Amnon.

He added, "We have a car with our chips inside sensing the environment, and it sees and finds landmarks, pavement markings, traffic signs, billboards, anything stationary in the world. Then it sends the landmark and road-lane information to a cloud computing server, where back-end software pieces it together to create RoadBook, a high-definition map, and then transmits it back to the car."
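
The sketch below is a hypothetical, simplified rendering of the round trip Shashua describes: the car compresses landmark and lane observations to fit the roughly 10-kilobyte-per-kilometer budget, the cloud aggregates reports into a RoadBook layer, and segments are served back to cars. The class names and payload format are invented for illustration, not MobilEye's actual protocol.

```python
# Hypothetical sketch of a REM-style round trip; classes and payload format
# are illustrative, not MobilEye's API.
import json

BYTES_PER_KM_BUDGET = 10 * 1024   # "10 kilobytes per kilometer of driving"

def build_segment_payload(segment_id, landmarks, lane_geometry):
    """Car side: compress what the camera harvested for one road segment."""
    payload = json.dumps({
        "segment": segment_id,
        "landmarks": landmarks,   # e.g. [{"type": "sign", "x": ..., "y": ...}]
        "lanes": lane_geometry,   # sparse polyline points, not raw video
    }).encode("utf-8")
    assert len(payload) <= BYTES_PER_KM_BUDGET, "segment exceeds the 10 KB/km budget"
    return payload

class RoadBookServer:
    """Cloud side: aggregate many cars' segments into a live HD-map layer."""
    def __init__(self):
        self.roadbook = {}

    def ingest(self, payload: bytes):
        seg = json.loads(payload.decode("utf-8"))
        # Real aggregation would align and average many observations; here we
        # simply keep the latest report per segment.
        self.roadbook[seg["segment"]] = {"landmarks": seg["landmarks"], "lanes": seg["lanes"]}

    def download(self, segment_id):
        return self.roadbook.get(segment_id)

# Round trip for one kilometre of driving.
server = RoadBookServer()
pkt = build_segment_payload("A1-km17", [{"type": "sign", "x": 12.3, "y": 1.1}], [[0, 0], [50, 0.2]])
server.ingest(pkt)
print(len(pkt), "bytes uploaded;", server.download("A1-km17") is not None)
```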

Ecosystem partnerships for HD mapping

MobilEye's REM is already well accepted across the industry. HERE, a live HD map maker, will embed REM in its HD Live Map, productizing MobilEye's RoadBook as one layer of the map.

MobilEye also has an ambitious plan for REM in Japan. The company has joined hands with Japanese map maker Zenrin to create an HD map. Covering all highway networks in Japan, the HD map will be ready in 2018 and fully available in 2020, extending to all city roads, with MobilEye, Zenrin, and other OEMs working together to deliver it.

Such partnerships are key to creating a worldwide HD map, which is a prerequisite for Level 3 and Level 4 automation. MobilEye is therefore in talks with other map makers, OEMs, and car makers to cooperate on a worldwide RoadBook, which would be a turning point for the car industry.

"REM is a low-cost and highly scalable enabler for supporting Level 3 to Level 5 autonomous driving. This is the reason why the industry has to cooperate," he stressed.




 Roadmap for EyeQ vision computing chips

The company also has a roadmap for its EyeQ image processor chips. The EyeQ3, which powers the roughly 15 million in-vehicle ADAS units shipped so far, delivers 300 billion operations per second (0.3 tera-operations per second, or TOPS). The next rollout, the EyeQ4 vision computing SoC, delivers 2.5 TOPS. By 2020, MobilEye will have the EyeQ5, which can process 15 trillion operations per second (15 TOPS) within a 3-4 watt power budget.
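
A quick back-of-the-envelope comparison using only the throughput and power figures quoted above (the resulting ratios are derived numbers, not official specifications):

```python
# Back-of-the-envelope comparison using only the figures quoted above
# (TOPS = tera-operations per second). Power is only stated for EyeQ5 (3-4 W).
EYEQ_TOPS = {"EyeQ3": 0.3, "EyeQ4": 2.5, "EyeQ5": 15.0}

print("EyeQ4 vs EyeQ3: %.0fx" % (EYEQ_TOPS["EyeQ4"] / EYEQ_TOPS["EyeQ3"]))   # ~8x
print("EyeQ5 vs EyeQ4: %.0fx" % (EYEQ_TOPS["EyeQ5"] / EYEQ_TOPS["EyeQ4"]))   # 6x
# EyeQ5 at a 3-4 W budget works out to roughly 3.75-5 TOPS per watt.
print("EyeQ5 efficiency: %.2f-%.2f TOPS/W" % (15.0 / 4, 15.0 / 3))
```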

“Driving policy”: Reinforcement learning  

The last piece of the self-driving puzzle is what the company calls a "Deep Network Reinforcement Learning" algorithm that can mimic true human driving capabilities while maintaining strict functional safety boundaries.

Termed "Driving Policy", the algorithm is now under development. The goal is to teach autonomous vehicles the human intuition for how to merge or squeeze into traffic.

"The autonomous cars we see today are very, very defensive. Driving is a multi-agent game. It is not only a game in which we sense the world and decide what to do; the world and the other agents also react to what we are doing. So we need to negotiate. As long as the other agents are human, robotic agents should adopt human-like negotiation skills in order to merge into traffic. At the same time, we have to guarantee safety: even though humans are involved in a large number of accidents, society will not accept large numbers of fatalities caused by robotic cars. How to combine this human level of negotiation with a safety guarantee is the challenge," he explained.

For example, when cars get caught in what is called a "double lane merge", where traffic merges from two sides, deadlocks can occur if traffic is heavy. Simply squeezing in is not sufficient, because you may interfere with the plans of other cars; other cars will not just slow down to let you in, since they have their own plans to merge. A simple squeeze-in follows rules, but a double merge has no rules. Instead, the car needs to plan many seconds ahead: with a 100-meter stretch before the merge point, it has to plan 5 or 10 seconds into the future. Planning is computing, and this becomes a major computational bottleneck. Driving Policy is all about endowing robotic cars with human-level negotiation skills, which explains why machine learning is needed.
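
To make the "planning is computing" point concrete, the toy calculation below (the action set and re-planning rate are hypothetical) shows how a naive exhaustive lookahead explodes with the planning horizon, which is why a learned driving policy is attractive compared with brute-force search.

```python
# Illustrative only: why planning several seconds ahead is computationally heavy.
# With a naive exhaustive search, the number of candidate plans grows
# exponentially with the planning horizon (numbers below are hypothetical).
ACTIONS_PER_STEP = 3        # e.g. accelerate / hold / yield
STEPS_PER_SECOND = 2        # re-plan every 0.5 s

for horizon_s in (1, 5, 10):
    steps = horizon_s * STEPS_PER_SECOND
    plans = ACTIONS_PER_STEP ** steps
    print(f"{horizon_s:>2} s horizon -> {plans:,} candidate plans")
# 1 s ->            9
# 5 s ->       59,049
# 10 s -> 3,486,784,401  -- which is why brute-force lookahead does not scale
```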

Today's machine learning is about learning by observing data rather than programming by rules. It is much easier to observe and collect data than to understand the underlying rules of the problem people want to solve. One prime example is image recognition: if you try to program by rules what constitutes a human face, it becomes very complicated to achieve highly accurate recognition, whereas if you learn from data you get much better performance. So today's machine learning is very good at observing data and figuring out the rules behind the problem to be solved. The downside is that machine learning is based on the statistical data you feed it, and optimization amounts to sampling that data, so it fails on corner cases. That is where the distinction between sensing and Driving Policy comes in.

Sensing deals with data about the present, while Driving Policy deals with decisions about the future. Accordingly, the algorithm for sensing is deep supervised learning, while Driving Policy relies on reinforcement learning. The key difference between the two is how they handle data.

"When we look at the supervised learning, the actions or predictions you take have no effects on environments. If I look at the images, whether I made mistakes, it doesn’t have effects on the environments. It means data and for training and validation can be obtained in advance. When I do update my software, I don’t need to recollect data from scratch. Data is fixed, and data is collected offline. When we are talking about “reinforcement learning. Our actions affect environments. I decided to accelerate, or slow down, and move to the left and the right, I am affecting environments and other road users. Therefore, data collection has to be done online. Every time when I need to upgrade software version, I need to collect data again. This is a big problem. I need to find corner cases. Every time I modifies software and gather data again, I am talking about huge amounts of data. That doesn't sound like attractive proposition. This is why machine learning is not used in the driving policy.  The question is how to guarantee a safety machine learning technology, ” explained he.

To test three critical dimensions of the reinforcement learning algorithm, MobilEye ran a simulation 100,000 times, in each session randomly placing eight simulated cars running the algorithm into a double-merge scenario. The results were striking. The simulation showed zero accidents or fatalities, and the model succeeded in all but 212 of the 100,000 sessions, a success rate of about 99.8%. A failure does not mean an accident; it simply means a car failed to reach its assigned lane. The tests also showed that each car needs about one millisecond per frame to run the policy on an EyeQ4 vision chip. At 10 Hz, or 10 frames per second, that is 10 milliseconds of computation per second of driving, meaning the driving policy uses just about 1% of the EyeQ4's processing power, leaving the rest for image sensing.
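
The figures above can be checked with a line or two of arithmetic, using only the numbers quoted in this article:

```python
# Reproducing the arithmetic above: 1 ms of policy computation per frame at
# 10 frames per second is 10 ms of work per second of wall-clock time, i.e. ~1%.
ms_per_frame = 1.0
frames_per_second = 10
busy_ms_per_second = ms_per_frame * frames_per_second
print(f"chip time spent on driving policy: {busy_ms_per_second / 1000:.0%}")  # 1%

# Likewise, 212 failed sessions out of 100,000 gives the quoted success rate.
print(f"success rate: {(100_000 - 212) / 100_000:.2%}")  # 99.79%
```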

Driving Policy is not only a technology problem but also a logistical one: how to mine the different driving behaviors and attitudes found in different places and countries. That, too, requires cooperation among car makers and other ecosystem partners.

MobilEye has signed a deal with Intel and BMW to jointly build this reinforcement learning "Driving Policy" model. The Driving Policy will be part of the architecture offered to the industry. The company is also cooperating on a Driving Policy model with Ottomatika, a subsidiary of Delphi.

The reinforcement learning algorithm is already in production.

Criteria for Level 4/5 fully autonomous cars     

Level 4/5 cars must be deployable and drivable on both highways and urban roads, though in some cases only in urban areas. Takeover requests are not instantaneous, and an HD map is required. As for cameras, they have to implement a cocoon of eight long-range field-of-view (FOV) cameras covering 360 degrees around the vehicle, plus front- and side-facing LiDARs as well as radar cocoons.

Cooperation between BMW and Intel for Level 3, 4, and 5

The partnership was announced on July 1, 2016. It covers two autonomous driving car models: one for Level 3 highly autonomous driving and one for Level 4/5 fully autonomous driving, both scheduled to hit the road in 2021. Test runs will begin in 2017 with 40 BMW 7 Series cars, and at the peak of development the test fleet will expand to 250 cars. EyeQ4 vision processing chips are embedded in the test cars and will later be replaced with the EyeQ5 when it rolls off the line by 2020. In addition, BMW and MobilEye will jointly develop the sensor fusion (MCU) and Driving Policy algorithms, which Intel will implement on its SoC.

Partnership with Delphi for Level 4/5

The partnership with Delphi was announced on August 3, 2016. MobilEye is responsible for implementing the surround vision system and all sensor fusion on the EyeQ5 chip. The company is also in charge of developing the reinforcement learning and Driving Policy, which will be implemented on an Intel SoC, while Ottomatika is responsible for the automated driving software algorithms.

The project also involves REM. The partnership aims to offer this self-driving solution to car makers. The development involves setting up eight FOV cameras and 3D sensing for 360-degree coverage.

Copyright © KIPOST(키포스트). Unauthorized reproduction and redistribution prohibited.