In January 2016, Toyota announced the creation of the Toyota Research Institute (TRI), a $1 billion investment in AI to develop autonomous driving capabilities as well as home-care robots. Jim Adler, the first head of data at TRI, has been on the job for just two months. Before that, he was an executive at Metanautix, a data analytics company that was acquired by Microsoft last year. Adler talked to TechRepublic about how Toyota is using data and simulation to teach cars to drive themselves.

How did you get started at Toyota?

It sounded like so much fun, it was so interesting, and it leveraged quite a bit of my experience. How do you say “no” to working on a self-driving car and a robot? The ten-year-old in me said, “What are you, a fool? Of course you have to do that!”

How do Toyota Research Institute and Toyota Connected fit into the bigger Toyota umbrella, in terms of how they use data?

We’re separate organizations, obviously, within the larger Toyota umbrella. Toyota Connected is only about four months old as well. This is a whole new approach and yes, of course, there have been data centers at Toyota, but nothing at this scale. This is all new.

From a technology and mission perspective, TRI is focused on the research and development of autonomous driving and robots. Our data needs are really in the service of those R&D efforts, and the Toyota Connected efforts are more customer-facing. Of course we’re working really closely together because, at these scales and speeds, it is a formidable challenge.

SEE: 10 big data insiders to follow on Twitter (TechRepublic)

Toyota Connected, which will be based in Plano, TX, will also house the Toyota Global Big Data Center. That will be sort of the big data hub, globally.

How are the branches starting to work together and approach data?

I always think in terms of customers. From a data perspective, at TRI, those customers are the researchers and engineers who are developing this autonomous driving and robotics technology. If you think about data, there are at least three different areas that are important. One is the technology, which tends to get a lot of the attention. But there’s also data governance and, of course, data policy. Those three elements need to be considered as one.

The researchers really care about the technology and the governance, but the company really cares about the policy: that we’re good stewards of the data and that we’re in compliance with good security and privacy standards. So, it is important that we actually know what data we have, the parties that are using that data, and that they’re using it in responsible ways. I’ve been splitting my strategy and architecture among those three areas.

What kind of data is Toyota collecting right now?

There’s on-vehicle data, received from sensors on the car. There are cameras, radar, and possibly LIDAR. There are inertial sensors that measure acceleration, rotational velocities, position, and so on. All of that really helps inform our autonomous driving goals.

We want off-vehicle data as well. We want to understand traffic patterns and what the roads look like. In autonomous driving, there’s this perception-mapping trade-off. If you think about it, as humans, we don’t really have maps. We have amazing perception. We don’t need to know where the street signs are, where the lane markings are. We can see them. So, we can be dropped anywhere, and we can just drive.

Autonomous vehicle systems focus quite a bit on really good maps, especially when the sensor systems aren’t as good. As the sensor systems get better and are more affordable, there may be more of a leaning on perception.

TRI looks at robots as well as autonomous cars. Are you splitting your time equally between those two, or do you focus on the driving part?

From a data perspective, I don’t care that much. The data feeds all of the efforts. The scale is going to be the same. I think cars are clearly an early priority, just because we’re Toyota. Robots are certainly in the plans; they’re just not as far along as the auto efforts.

In a way, an autonomous car and a robot are the same thing. They both are going to rely heavily on artificial intelligence. One of them inhabits the outdoor environment, say, driving across town. The other one needs to deal autonomously with the indoor environment, but in many ways they’re very similar. Cars are robots with wheels.

You mentioned the kind of data from cars. What kind of information do you get from the robots working with people?

Some of it is similar. You want to know what your environment looks like. You want to understand what the robot is doing right now and its position in space. You also need to figure out how robots might act socially within a close space with humans. Cars act socially, too. Pedestrians in San Francisco rule the traffic scene, but in Amsterdam bikers run the traffic scene. In Iran, it’s very much an interplay of pedestrians and automobiles, where pedestrians wind their way through traffic with subtle cues from the driver about when they might go or when they shouldn’t go. It’s very fluid, almost like ballet. Social interaction is well-choreographed and understood.

SEE: Going Deep on Big Data (ZDNet)

So, whether it’s a car in traffic with pedestrians or it’s a robot interacting with people in the home, those kinds of social cues and responses need to be learned and understood. First, they need to be perceived, which is an issue for sensors and the perception stack that deals with those sensors. Then, it’s how to derive policy on what the robot should do. In the case of a car, should that car let the pedestrian go or is the pedestrian telling the car to go? How are those cues perceived? It may be much more subtle in the home than it is on the road, but there are many commonalities.

Tesla’s approach to autonomous driving is incremental, releasing upgrades all the time and gathering a lot of real-world data. What is Toyota’s?

First of all, driving is a tough problem. Self-driving is a tough problem. That’s because humans are very good drivers. If you look at the fatality rate, it’s like one death in 100 million miles driven. That’s humbling. We need to learn from drivers.
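To put that rate in perspective, here is a quick back-of-envelope check in Python. It combines the roughly 35,000 annual US traffic deaths Adler cites later in the interview with an assumed figure of about 3 trillion vehicle-miles traveled per year (the mileage number is an assumption, not from the interview):

```python
# Back-of-envelope check of the "one death per 100 million miles" figure.
annual_fatalities = 35_000                    # roughly the US figure cited later
annual_vehicle_miles = 3_000_000_000_000      # ~3 trillion miles/year, assumed

deaths_per_100m_miles = annual_fatalities / (annual_vehicle_miles / 100_000_000)
print(f"{deaths_per_100m_miles:.2f} deaths per 100 million miles driven")
# -> about 1.17, i.e. roughly one death per 100 million miles
```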

If you look at how machines are trained, they’re not trained by rules. They’re trained by example. We need to gather examples. Drivers are a good place to gather those examples. We have efforts going on to gather that data in accordance with good privacy and security policies. Toyota, to its credit, has been doing a lot of driving tests that have gathered quite a bit of data over the years. We’re pulling a lot of that data together for some of TRI’s uses.

What will be the first autonomous-driving feature you see Toyota putting on the road?

There are a lot of driver-assistance technologies that are being delivered in the marketplace all the time. They’re making cars safer. And then there’s the level four, fully autonomous vehicle that many are working on, including us at Toyota. It could be quite incremental, where we see cars getting safer and safer.

To be honest, it’s hard to believe that we still have blind spots in cars. I was driving a winding road coming back from Santa Cruz last week, and I was thinking, “It would really be nice to have speed-limiting on these kinds of curves.” I don’t want to guess whether I’m under-driving or over-driving the road conditions. Under-driving angers the drivers around you. Over-driving is a safety concern. It would be nice for the car to have my back, go as fast as possible, but not faster than possible. Driver-assistance technologies, like blind-spot protection and lane-centering assistance, are getting deployed every day, within Toyota and across the industry. That’s all great. Then, at some point, things are going to tip and it’s going to be, “Hey, we now have level four autonomy.” I think that we’re on that road, so to speak.

But this is going to be incremental. We refer to it as higher and higher levels of driver-assist. Once you start getting into that area of level three or level four autonomy, I think it’s safe to assume that there would be some early conditional fully-autonomous driving. In other words, the vehicle would be able to be fully autonomous on certain special highways in perfect weather conditions. Getting to perfection is not easy. To be better than a human driver, you need to be pretty much perfect, and that’s a long way out.

When would Toyota be ready to say, “Okay, these cars can go out on the road?”

This gets to this incremental point. If you’re knocking out areas that have traditionally been dangerous and they’re no longer dangerous, I think that’s quantifiable. I’m a data guy. I like things you can quantify. If you take the roughly 35,000 deaths that happen in the United States every year and you make a histogram of what conditions incur the most fatalities and you start to focus your safety technology on those areas of highest concern, you can start saving lives very soon in a way that you can quantify. If you reduce the number of fatalities by an order of magnitude, that’s significant. I’d certainly be proud of that.
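A minimal sketch of the histogram idea Adler describes, using made-up crash records and condition categories (purely illustrative, not real crash data):

```python
from collections import Counter

# Group fatal-crash records by road/weather condition and rank conditions by
# count, so safety technology can target the largest buckets first.
# These records and category names are hypothetical.
crash_conditions = [
    "intersection, night", "rural two-lane, dark", "highway, rain",
    "intersection, night", "rural two-lane, dark", "rural two-lane, dark",
]

histogram = Counter(crash_conditions)
for condition, fatalities in histogram.most_common():
    print(f"{condition}: {fatalities}")
```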

SEE: Job description: Big data modeler (Tech Pro Research)

How are you using machine learning, and what are the challenges?

We’re trying to understand the environment we’re in. You want to understand where the road surface is. You want to understand where pedestrians are. You have a video feed of the area, or you might have a LIDAR point cloud that shows, to the human eye, pedestrians, bikes, other vehicles, and where the road surface is. You want to take that raw feed of video or LIDAR. LIDAR’s nice because it works when it’s dark; you don’t need headlights. Then, you have to identify the objects in that feed. That’s a machine learning problem.

How do you do that? Well, the first thing you do is label. As I said, machines learn by example, not by rules. So you take a video feed and have humans label it to say, for example, “This is a pedestrian, this is a bike, this is another vehicle, and this is the road surface.” Then you feed many, many different examples to these machine learners. The result is a trained model where video comes in and objects identified in the scene come out. That’s one component.
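As a rough illustration of that label-then-train loop, here is a minimal PyTorch sketch. The frame tensors, labels, and tiny network are all stand-ins; a production perception stack would use a far larger detection or segmentation model:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for labeled frames: random tensors in place of camera/LIDAR crops,
# with human-provided class ids (0 = road, 1 = pedestrian, 2 = bike, 3 = vehicle).
frames = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 4, (256,))

# A tiny classifier, only to show the shape of "examples in, trained model out."
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 4),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in DataLoader(TensorDataset(frames, labels), batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
# The trained model now maps an incoming frame to the objects it believes are in it.
```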

And then there’s driving policy. Car controls are pretty simple: brake, gas, and steering. An autonomous driver takes what’s in the scene and asks: What should I do with those objects in the scene? Is the pedestrian on a sidewalk next to the vehicle? What is going to influence the driving policy and what’s not going to influence the driving policy? These things are also learned.
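To make that interface concrete, here is a hypothetical sketch of a policy layer that maps detected objects to the three controls. The hand-written rule is only for illustration; as Adler notes, real driving policies are learned, not coded this way:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # "pedestrian", "bike", "vehicle", ...
    distance_m: float
    on_roadway: bool

def driving_policy(objects: list[DetectedObject]) -> dict[str, float]:
    """Map the perceived scene to brake, throttle, and steering commands."""
    hazard = any(o.kind == "pedestrian" and o.on_roadway and o.distance_m < 20
                 for o in objects)
    if hazard:
        return {"brake": 1.0, "throttle": 0.0, "steering": 0.0}
    return {"brake": 0.0, "throttle": 0.3, "steering": 0.0}

print(driving_policy([DetectedObject("pedestrian", 12.0, True)]))
```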

What’s fascinating and a research problem is what driving policies should be learned in a certain locale. You want to drive like the average driver in a certain area. If you drive in New York as though you’re from rural Kansas, you’re going to stick out like a sore thumb and vice versa. You want any autonomous driving system to not draw attention to itself — just be part of the flow of traffic. That’s a subtle machine learning problem. You might do the same thing for other locales. Driving in Amsterdam is going to be different than driving in Tehran. Toyota is a global company. We have to teach our cars to drive everywhere.

What are the biggest challenges with the data that you’ve come across so far?

Well, I think if you look at any organization, data is like water. When it’s in a glass, it’s very manageable, but when it’s in a flood, it’s overwhelming. Many organizations don’t really understand that data has mass. For example, if you look at Toyota’s market share, just new Toyota cars could throw off a million petabytes of data per year, which is a huge data volume. We’re talking about a flood of data. How do you grapple with that? What technology do you use to handle it? The key, having dealt with big data systems in the past, is that the technology is never done. It’s constantly being innovated. You have to build your data architecture so that you can iterate the technology underneath the applications that use that data.
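A rough sanity check of that scale, using assumed inputs (roughly 10 million new Toyota vehicles a year and on the order of 100 TB of sensor data per vehicle per year; neither figure comes from the interview):

```python
# Back-of-envelope estimate of fleet-wide data volume, with assumed inputs.
new_vehicles_per_year = 10_000_000
terabytes_per_vehicle_per_year = 100          # assumed sensor-data volume

petabytes_per_year = new_vehicles_per_year * terabytes_per_vehicle_per_year / 1_000
print(f"{petabytes_per_year:,.0f} PB/year")   # -> 1,000,000 PB/year, i.e. ~1 zettabyte
```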

The key is to avoid “breaking changes” in the application interface, or API. If those APIs don’t break, you can advance the technology underneath. That’s a hard problem and requires quite a bit of thought upfront, but it is vital to making this data usable over time. The automotive industry has been around for more than a century. You have to think long-term. Data collected five years ago might have some really interesting driving scenarios that shouldn’t be forgotten. You want to make sure that data stays usable and can be used for regression tests spanning decades. It’s a daunting, but not unsolvable, problem.
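One way to picture that separation (an assumption about the approach, not TRI’s actual architecture) is a small, stable data-access interface that applications code against, with swappable backends underneath:

```python
from abc import ABC, abstractmethod

# Applications depend only on this stable interface; the storage technology
# behind it can be re-implemented as it evolves without breaking callers.
class DriveLogStore(ABC):
    @abstractmethod
    def put(self, drive_id: str, payload: bytes) -> None: ...
    @abstractmethod
    def get(self, drive_id: str) -> bytes: ...

class InMemoryDriveLogStore(DriveLogStore):
    """Toy backend; a production one might sit on a distributed file system or object store."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}
    def put(self, drive_id: str, payload: bytes) -> None:
        self._data[drive_id] = payload
    def get(self, drive_id: str) -> bytes:
        return self._data[drive_id]

store: DriveLogStore = InMemoryDriveLogStore()
store.put("drive-0001", b"...sensor frames...")
```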

One figure that stands out to me with data is that Tesla says they have 780 million miles of driving data. Does Toyota have a similar figure?

We haven’t gone public with any of that. But, we believe that in order to get to the point where we can develop a truly autonomous car, we’re going to have to simulate a lot. You can’t just test physically on the road. Simulation is going to be a big part of our program going forward.

Just because an autonomous car is driven many millions of miles, it doesn’t mean it is safer if all those miles are on sunny days with nice clean roads. The reason simulations are so important is that you can simulate really dangerous conditions that are the “black swan” events you need to design for to make the car truly safe.
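A hypothetical sketch of that idea: a simulation batch sampler that deliberately over-weights rare, dangerous conditions rather than sampling them at their real-world frequency. The scenario names and weights are made up for illustration:

```python
import random

# Scenario weights are skewed toward rare, dangerous "black swan" conditions
# so that simulated miles exercise them far more often than real miles would.
scenarios = {
    "clear day, light traffic": 0.2,
    "heavy rain, poor lane markings": 0.3,
    "ice, stalled vehicle around blind curve": 0.25,
    "pedestrian darting from between parked cars": 0.25,
}

random.seed(0)
batch = random.choices(list(scenarios), weights=list(scenarios.values()), k=5)
print(batch)  # a simulation batch skewed toward dangerous edge cases
```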

I would hate to see that the acceptance criteria just boils down to, “How many miles has an autonomous car driven?” The Google car has driven millions of miles, but a lot have been on I-280 or around Palo Alto. The real question is: What conditions has it been driven in?
