Startling Views from CES

By David Benjamin

LAS VEGAS — Negotiating the CES show floor, bumping along among throngs of convention-goers, and weaving between display booths large and small is no walk in the park. But it rewards the visitor with sights, sounds, and the occasional human encounter that can be mystifying, gratifying, or amusing. For a photographer more interested in startling images than in the minutiae of technology, this hectic stroll can be fun. Here’s a collection of shots gathered during an hour or so in several exhibition halls at the Las Vegas Convention Center.

Quantum computing comes to CES

IBM chose CES to unveil IBM Q System One. IBM calls its Q System One “the world’s first integrated universal approximate quantum computing system.”

But here’s the thing: How do you explain quantum computing to the CES crowd? After all, the system is designed for scientific and commercial use.

Big Blue did its best to describe quantum computing in the context of future applications. The potential use cases listed by IBM include: “finding new ways to model financial data and isolating key global risk factors to make better investments, or finding the optimal path across global systems for ultra-efficient logistics and optimizing fleet operations for deliveries.”

CES 2020 Day 3 Recap

BRIAN SANTO: I’m Brian Santo, EE Times Editor in Chief, and you’re listening to EE Times on Air. This is day three of our special series of podcasts reporting live from the Consumer Electronics Show in the Mojave Desert. In the past couple of years, the automotive industry has dominated CES, and this year it’s happening again.

In today’s episode:

Qualcomm made some headline news, announcing it is burrowing deeper into the automotive market. We’ve got an interview with the Qualcomm vice president in charge of the company’s automotive operations.

…did we mention that Qualcomm is getting deeper into the automotive market? One of our colleagues has taken a ride in a new Qualcomm-powered autonomous vehicle. We’ll report on that.

…also, live interviews with executives from Infineon and Texas Instruments about adding autonomous functions to cars equipped with driver-assist capabilities, also referred to as ADAS.

…an analysis of a novel approach for autonomous vehicles from Intel’s Mobileye unit…

…and finally, Toyota surprised show-goers expecting the company to talk about cars. The company skipped right past that subject and on to planning entire smart cities.

Some of the biggest news in automotive at this year’s CES was Qualcomm’s formal announcement of its “Snapdragon Ride Platform.” The new platform got instant credibility with the announcement that GM is Qualcomm’s new partner on assisted driving technology.

Qualcomm long ago established itself as a supplier for connectivity and infotainment systems in the automotive sector. The market for vehicle automation seemed like an obvious next step, and investors have been wondering if the company was going to take that step.

Qualcomm put an end to that question by rolling out the new Ride Platform, which it described as “scalable.” By mentioning that GM is now working with Qualcomm on ADAS, Qualcomm is also hoping to let the world know that the world’s largest mobile chip company has a foot in the door in the ADAS market.

Junko caught up with Nakul Duggal, Qualcomm’s Senior Vice President & Head of Automotive Product & Program Management, right after the company’s press conference in Las Vegas. She asked him to break down what exactly the Snapdragon Ride Platform entails, what Qualcomm means by “scalability,” and whether the relationship with GM will have any impact on GM’s partnership with Cruise, which has a focus on fully autonomous driving.

NAKUL DUGGAL: We look at the automotive business in four ways. That is the telematics business and the CD direct business, which we’ve been in for a long time. That is a business that we understand quite well. We started the Snapdragon digital cockpit business about five years ago. And that is doing exceptionally well. Between the telematics and the digital cockpit business, we now have over $7 billion in our design pipeline. We have over 19 automakers. We are leading in many of these areas. So that business is going really well.

We announced two new areas to our portfolio today. The first one was the Snapdragon Ride platform, which is an autonomous driving– an ADAS– platform. It includes an SOC, an accelerator and the autonomy stack. And this is an area we’ve been working in for a number of years, and we announced it finally. What is going to differentiate us in this space compared to competitors is the scalability. So we will address everything from Level 1 to Level 5. Then there is the efficiency of the platform that we are building from a power perspective; these are much more efficient compared to competing solutions. And then the stack is a reference stack. You can use our stack, you can use components of the stack, or you can bring your own stack. If you don’t want to use ours, our platforms on the SOC and accelerator are completely open. So this scalability approach, to us, is a big differentiator. We saw a lot of success when we did the digital cockpit business with scalability. And General Motors has announced a partnership with us across all three of these domains: across telematics, infotainment and ADAS. We are very proud to see that relationship move forward.

And then finally, we announced a new service called Car-to-Cloud, which is essentially designed to be able to manage the vehicle from an automaker’s perspective after it has been deployed. So managing in terms of being able to make updates to the car, being able to unlock new capabilities, make flexible configurations so you don’t have to change the hardware. The only way that you change it is actually through the updates in software.

So these four areas I think put us in a very compelling position to have a very highly differentiated portfolio for our automaker and Tier One customers.

JUNKO YOSHIDA: All right. You know, keeping this conversation high level, what I’m gathering from various players in the automotive chip market is that, in the past, or even at present, a lot of car companies have been making what they call “band-aid” solutions. A new feature comes in, they do a chip solution from one chip company; then another new feature comes in: okay, we’re going to switch to another company or another Tier One who can provide us some new solutions. So this has been fraught with so many variables. How do you think your Snapdragon Ride platform can answer that question?

NAKUL DUGGAL: One thing that we have always done differently is that, when we engage with customers, we don’t engage transactionally for one generation. We look at the requirements for the business that we are getting in across the board, for every Tier, and then we design our portfolio accordingly. With the Ride platform, it is very similar to the cockpit platform or the telematics platform. We understood the requirements all the way from the entry to the most advanced systems. And we have built the portfolio to be very scalable. We start from 30 TOPS, which could address Level 1 requirements, all the way up to 700 TOPS for Level 5 and even beyond, if you need to get there. So we understand that there is no reason for the automaker to fragment their partner base, their supply chain base, to different suppliers for different Tiers. And the reason so far automakers have had to deal with this is because they don’t find scalable platforms that address the needs at the right cost point, at the right power efficiency point, and really in terms of being able to deliver that scalability. These are very difficult solutions to deliver through any one supplier across the board. At Qualcomm, our strategy has been to be scalable always, and that I think is what is going to get away from what you define as the “band-aids” that have so far been implemented.

JUNKO YOSHIDA: All right. Well, let me ask you this: You did mention your new relationship with GM. I mean, you’ve always had a relationship with GM, but you are now expanding the relationship all the way to ADAS. How does that square with GM’s strategy? You know, they’ve been working with Cruise on autonomous vehicles, and if indeed the Snapdragon Ride platform can address all the way to Level 4+ or 5, what’s going to happen?

NAKUL DUGGAL: So let me make sure that we explain this quite clearly. The partnership that we announced with GM: we’ve had a partnership on telematics for a long period of time. We announced a partnership on the digital cockpit SOCs, and now also on the ADAS SOCs. The stack that we are introducing for the Ride platform, that is a Level 2 stack. Level 2, Level 2+. And the GM partnership is really around making sure that we have the ability to work with a partner like GM across domains like telematics, but now also new domains like infotainment, as well as ADAS, to essentially get around some of the challenges that you were mentioning earlier, where you have to keep switching from one generation to another. Here our relationship is based upon the fact that we have scalable platforms. We deliver a level of capability and service to our customers, and as they have seen that we are a supplier that can be trusted, that they can rely on, that is meeting their commercial, their technical, their power requirements, they’re looking at our platform, and it’s something that makes sense. I cannot comment on the other programs that GM is obviously working on.

JUNKO YOSHIDA: All right. As Qualcomm, when do you make Level 3 and Level 4 stacks available to your potential customers?

NAKUL DUGGAL: So we have the Level 2+ stack available now, actually. And we are going to be continuously making updates to these Level 2 stacks, which will, in our belief, evolve to more capable stacks. We get asked the question, Do you support sensors like lidar, etc? We can support all sensors. The stack is capable of supporting all sensors.

We also keep in mind very carefully the cost of these sensors. So as the sensors get to a cost point where they become broadly available, we are happy to add any sensors. As far as when we get to the next level of stack capability, I think it will be a progression. We believe that the stack that we are delivering today is very compelling, and automakers and the Tier Ones who are interested in it can start to work on it, and we will work with them to evolve the capability along with their own teams.

JUNKO YOSHIDA: Does your platform actually include some of the network capabilities? In-vehicle network: we’re talking about bringing in more data inside a vehicle. We’re talking about whole-car software upgrades, so on and so forth. You probably need PCIe, all the gigabit Ethernet kind of stuff. What sort of network processors are part of your Snapdragon Ride Platform?

NAKUL DUGGAL: So we support Ethernet. We support PCIe on the Snapdragon Ride. We support all of the updates for all of these platforms. So really anything that is needed to run the platform on the automotive bus for local internetworking, all of that is available. And then of course on top of that, any software that is needed, any drivers that are needed, any update solutions, any OTA solutions, all of those are included as part of the platform.

JUNKO YOSHIDA: Okay. Very good. Thank you so much.

NAKUL DUGGAL: Thank you.

BRIAN SANTO: As if working with GM isn’t enough, Qualcomm burnished its ADAS bona fides by bringing its highly autonomous vehicle to Vegas. Jim McGregor, our friend at Tirias Research, got a chance to ride in it. And this is what Jim told us:

He said, “The ride was on a highway in Vegas. The car did well at merging, navigation and even avoiding an aggressive Camaro that cut us off and almost spun out. Overall, it was a comfortable ride, and the system appeared to operate very well.”

Of course, other chip giants in the autonomous vehicle segment – such as Nvidia and Intel/Mobileye – had already been there and done that. The market is getting crowded.

Leading up to CES 2020, veteran automotive chip suppliers such as NXP Semiconductors and Texas Instruments also announced new chips. Both claim their respective product families will help OEMs and Tier Ones modernize current vehicle architectures. The goal is to enable car companies to do over-the-air upgrades, for example, so that their cars become software-upgradeable vehicles a la Tesla.

NXP calls its chip the Vehicle Network Processor. TI also announced a similar gateway processor and ADAS processor. Both companies are known for their grounded views on highly autonomous vehicles.

Junko sat down with TI on the first day the CES show floor officially opened. She asked TI what it takes to be a trusted chip supplier to the automotive industry.

The first voice from TI you hear will be Sameer Wasson, vice president and general manager of TI’s processor business unit. The other voice belongs to Curt Moore, general manager and product line manager for the company’s Jacinto processors.

JUNKO YOSHIDA: What are the basic, unique requirements to be in the automotive market as a chip company?

SAMEER WASSON: You’ve got to understand the need from a safety, reliability, longevity perspective. Bringing that together, expressing that in semiconductors, understanding the system, getting to those unique corner cases, those problems, that makes automotive very unique. And transferring some technology from a different market over here without the correct part, without the correct architecture, without the correct time over target and R&D, often leads to bad results.

JUNKO YOSHIDA: Let me interject. Talk about bad results! Sameer, you and I were talking about this… you said this industry, meaning the automotive industry, has a long memory. Tell us a little bit about that.

SAMEER WASSON: It has a long memory because this technology stays in vehicles for a very long time.

JUNKO YOSHIDA: “A long time” like 15 years?

SAMEER WASSON: Easy. Things will change. Some components change faster, some components will change slower. But there is technology in cars which stays there for a very long time. So when you’re thinking of it, a mistake made in one step can impact you for multiple years. And that mistake doesn’t necessarily have to be catastrophic. It could be a poor systems choice. It could be a sub-optimized system where the cost is not really where it needs to be, and that prevents it from scaling to where it needs to go and being accessible for everyone. So our mission in life is to make sure that when you’re making this technology, it does something which is scalable and accessible, and to not make any of these sub-optimum decisions which prevent us from getting to that mission.

JUNKO YOSHIDA: And they are unforgiving. Once you make a mistake, can you crawl back into the same OEM?

SAMEER WASSON: It’s not easy.

JUNKO YOSHIDA: Not easy.

SAMEER WASSON: It’s not easy.

CURT MOORE: And I think, as we’re looking… as the automotive market is changing, as processors are moving into more and more mission-critical applications inside the car, it is becoming more and more important to really think about the system and how you implement the safety for these mission-critical systems, and not retrofit something.

JUNKO YOSHIDA: Not retrofitting it. You know, one of the things… I think, Sameer, you were the one who mentioned that there are two schools of thought on getting into this autonomy business. I mean, we’re talking about the Level 2 cars to the Level 4+ or Level 5 cars. There are two schools of thought: a top-down approach versus a bottom-up approach. Tell us the difference and where TI sits in that spectrum.

SAMEER WASSON: We definitely… The difference is, you can take a leap of faith and go build a general purpose computer and say, Let me just brute force and get to a high level of autonomy. We are not doing that. We are taking a more nuanced approach, using a heterogeneous architecture. So that we are using different components of processing to go solve specific problems. So if you need general purpose compute, we have general purpose compute. If you need specific vision accelerators, because they give you the lowest power, we’ve invested in that. If you want to do machine learning and signal processing, we’ve invested in that. So we definitely believe in a more bottom-up approach of building that technology and then letting it scale to as high as you want.

Bottom line is, safety is paramount. To make safety happen, you’ve got to have the technology accessible for the masses. And our technology enables you to do that because we don’t need fancy cooling, for example. We don’t need anything which requires you to then go say, How do I architect this differently because my machine is just bigger. We scale from really small to really big. Really big is important, but being able to scale is more important.

JUNKO YOSHIDA: Let me be a devil’s advocate. Some people say there’s a beauty to the top-down approach because, as you mentioned, the OEMs always want the next shiny object. So this year they might be focused on certain things, next year they’ll focus on something different, whether it’s a heads-up display or the audio thing or whatever. So the chip companies are put in a position to provide one solution at a time, which automotive companies want, but at the same time (according to the top-down advocates) that could end up being a band-aid solution. Band-aid after band-aid. So you eventually get a really unwieldy platform that you can’t really… The decision you made this year, five years from now, could turn out to have been the wrong decision. What do you say to that?

CURT MOORE: I think the key thing that we’re really looking at is, How do we deploy ADAS to the masses? Deploying ADAS to the masses has a huge element of cost associated with it, a huge cost component to it. If you just start from the high end down, you’re not going to be able to hit those system cost points that allow car OEMs to really deploy these ADAS features to their entire fleet. And that’s really where we can go impact safety, by getting these ADAS features into lower-cost cars. Because when you think about it, those are going to be the largest number of cars, often driven by people with less experience. You know, younger drivers, etc. And that ADAS functionality can really have a big safety impact. So attacking those low-cost systems, getting that ADAS functionality in, and worrying about it from a cost point of view is going to make the roads safer.

SAMEER WASSON: To your point, top-down has its advantages if you’re looking at scaling to the top end of that market. Absolutely. But our strategy is, we want to get all of the market. And scaling up sometimes is easier than scaling down. Scaling up means you’re putting in two of our solutions, and they’re software-compatible and they scale.

JUNKO YOSHIDA: I see.

SAMEER WASSON: Scaling down, once you have that infrastructure built into the chip, comes at the cost of either cost or power. Simple. It’s physics, right? So we have taken the approach of saying, Let’s take a Lego-block approach to this and build it up.

The other thing is, no two OEMs are the same.

JUNKO YOSHIDA: Ah! That’s a good…

SAMEER WASSON: No two car lines at the same OEM are the same. So there could be… Think of a very high-end OEM. They make really, really good cars. But they have some scale. They go from your economy car to the highest-end car. Your economy car may only have one SOC. And that is your front camera, that is your centralized compute sensor fusion, that is your automated parking. How do you get it to do that at that price point? It’s a very different price point. Secondly, the power: they don’t have the budget to port fully. The same OEM may actually have that very, very high-end vehicle. Now anyone catering to them, whether it is us or whether it is a Tier One, that software investment is the most expensive one. So now, we’ve got to come up… and that’s what, quite frankly, makes our job fun. How do you tackle this puzzle? And our strategy is, let’s go build Lego blocks. Let’s go build Lego blocks that, when put together, can get you the highest level of performance which the OEM needs. But make sure you’re catering to each of those segments in a thoughtful manner.

CURT MOORE: And the ones that are going to be most cost-sensitive are going to be the lower-end vehicles. So that’s where we want to make sure that we optimize the system BOM, because that’s the one that’s going to be critical.

JUNKO YOSHIDA: Very good. Thank you so much.

BRIAN SANTO: In yesterday’s podcast, Junko interviewed NXP’s CTO Lars Reger about Ultra-Wide Band technology. She also asked him how NXP would compete with new entrants to the autonomous vehicle market. Reger’s response was, “We think we can completely complement each other.”

Recall, too, that Qualcomm was once planning to acquire NXP in hopes of expanding its business into the automotive market. The deal did not get consummated, but Reger said NXP can work with a number of other chip companies, such as Nvidia, Kalray, or Qualcomm. “Their primary focus is on AI,” Reger said. “We’ve got the rest of the solutions, ranging from vehicle networks to security/safety.”

Next, Junko sat down with Peter Schiefer, Infineon’s division president responsible for automotive. The various ways to implement autonomy in vehicles actually occupy a spectrum. If robotaxis are at one end of the spectrum, Schiefer sees a growing trend of carmakers bringing down “some use cases” of Level 3 and Level 4 cars into what used to be more run-of-the-mill ADAS vehicles.

JUNKO YOSHIDA: All right. I’m here with Peter at the Infineon booth, and we were just talking about how we’ve been hearing about this whole autonomous vehicle market every year at CES. And at CES 2020 we see the air has changed. I mean, thank goodness, all the full-fledged hype around autonomous vehicles has kind of subsided, right? And we’re talking about two different approaches to autonomy. Explain how you see the different industry players approaching autonomy differently.

PETER SCHIEFER: Yeah, basically what I see is that the market is building up in two approaches, where one approach is more for the fully automated kind of Level 5 cars, which will be starting, from my perspective, mainly in the commercial use case, where you can, for example, replace drivers. And with that you can also justify the technology which is needed in order to enable such a function. However, this number of cars will be limited to those commercial use cases. Then on the other hand, what I see now, with the complexity and complication about approvals of a Level 3 car, is a lot of companies taking single use cases out of a Level 3 or Level 4 car and pulling them into a Level 2 car. There is then the technology which is needed incrementally to add the function that delivers a benefit to the car owner.

JUNKO YOSHIDA: Okay. Give me some specific examples of use cases of a Level 3 and Level 4.

PETER SCHIEFER: That use case could be, for example, a parking support. So if you want to park the car, then this parking function can be one. The second one could be a highway pilot.

JUNKO YOSHIDA: Oh, okay. So it’s totally autonomous driving.

PETER SCHIEFER: Yes. For a certain situation, for a certain period of time.

JUNKO YOSHIDA: Okay. And these did not use to be considered as part of Level 2, right?

PETER SCHIEFER: Right. In the classical definition, this was not considered. And I see it changing now, and you may even call it a Level 2+ when you add these kinds of use cases. And here it’s a lot about the sensors and also the dependable compute power to enable these functions.

JUNKO YOSHIDA: All right. We’re talking about the general trend in the automotive industry. Where does Infineon sit? How do you enable it?

PETER SCHIEFER: Basically, in the autonomous car you are replacing the eyes, the ears and the brain of the human driver. And that’s why the semiconductor technologies are enabling that. And it’s all about very precise sensing. And we are a leader in the 77-gigahertz radar system. It’s about dependable compute power. And this is all about our functional safety microcontrollers. And as these cars are connected to the outside world, it’s also about cybersecurity. There is no safety without cybersecurity, and defining hardware anchors for protection against cyber attacks is key at Infineon.

JUNKO YOSHIDA: Tell me a little about dependable compute power. What’s the downside? And how do you mitigate it?

PETER SCHIEFER: On the one hand, most people, when they talk about autonomous cars, are talking about the big number cruncher. But there’s more than that. You need to have a fully fail-operational and fail-safe system. That’s why you need to have a functional safety microcontroller next to the number cruncher. And this is what Infineon provides. But it’s more than that. The whole system needs to be dependable. So for example, if you are driving very fast on a highway and the power supply gets disconnected, then the computer cannot calculate. So therefore, a very reliable and dependable power supply is key for the overall safety and security of the system.

JUNKO YOSHIDA: All right. Very good. Thank you so much.

BRIAN SANTO: Of course, for many of us covering the automotive sector at every CES, we don’t feel complete if we don’t attend Mobileye’s press conference. It’s traditionally a one-hour lecture on the latest technology advancements from Professor Amnon Shashua, president and CEO of Mobileye, which is now an Intel company.

At a time when most automotive companies have been striving to achieve “redundancy” by fusing data from a variety of sensors – vision, radar, lidar and others – Mobileye this year discussed a way to create redundancy by using only photographic cameras, but running the incoming data through different types of neural networks.
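
In software terms, the trick is algorithmic diversity: run the same frames through independently designed perception pipelines and treat agreement between them as redundancy. The short Python sketch below illustrates the principle only; the function names and the agreement rule are invented stand-ins, not Mobileye’s actual stack.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # object class, e.g. "car"
    x: float     # lateral position, meters
    z: float     # longitudinal distance, meters

def same_object(a: Detection, b: Detection, tol_m: float = 1.0) -> bool:
    # Two detections count as the same object if class and position agree.
    return a.label == b.label and abs(a.x - b.x) < tol_m and abs(a.z - b.z) < tol_m

def cross_checked(frame, appearance_net, geometry_net):
    """Run two diverse networks on the same pixels (say, an
    appearance-based detector and a depth/pseudo-lidar network)
    and keep only the detections both pipelines confirm."""
    dets_a = appearance_net(frame)  # -> list[Detection]
    dets_b = geometry_net(frame)    # -> list[Detection]
    confirmed = [a for a in dets_a if any(same_object(a, b) for b in dets_b)]
    disputed = [a for a in dets_a if not any(same_object(a, b) for b in dets_b)]
    # Disagreements are not silently dropped; a planner can respond
    # to them with a more conservative driving policy.
    return confirmed, disputed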

Junko got help from Phil Magney, founder and principal at VSI Labs, to break down Mobileye’s new proposal.

JUNKO YOSHIDA: We just came out of Mobileye’s press conference, and Phil and I were talking about, it’s almost like having been in a classroom for one hour! So what was your biggest takeaway, Phil?

PHIL MAGNEY: Well, it’s hard to come out of a press event like that and not feel impressed. And I feel like it’s very authentic, and I feel like there’s a lot of science, a lot of very pragmatic information presented in this announcement, in this release.

JUNKO YOSHIDA: That’s true. There was less fluff, fewer marketing one-liners, and more substance. Okay, let’s break it down, because there are a few things that surprised us, right? One thing was Professor Shashua talking about how the use of cameras can actually go a long way in terms of driving towards autonomy. Tell me a little bit about what you saw in terms of their heavy use of cameras. How it has been evolving.

PHIL MAGNEY: Yeah. I completely agree with you. Obviously, it’s a very camera-centric approach. It uses many cameras. But what I like about it is, we’re starting to see a diversification of AI algorithms used to go after a problem in multiple different ways. And that’s how you’re able to create a little bit of redundancy through the camera solution. So it was pretty impressive that even though they are still using cameras, through a couple of different neural networks they’re able to really kind of simulate what you could do with lidar.

JUNKO YOSHIDA: Yeah, some of the pictures they showed were interesting. So in one stream they make heavy use of cameras. But there is a separate stream: if companies choose to do so, they can bring in radar and lidar, and that could also provide the redundancy. Is that what they said?

PHIL MAGNEY: Yeah. I think basically it’s going to be up to the customer, really. What’s the customer going to be comfortable with? I think the fact of the matter is that you can do a terrific job of creating Level 2+ automation with a camera-centric solution. I think the proof is there that it can be done. Obviously another company that’s very successful with that is Tesla. But honestly, not every OEM is going to be 100% with that, and so they’re going to still probably want to use certainly radar and possibly even lidar as well if the prices come down.

JUNKO YOSHIDA: Let’s talk a little bit about a very impressive YouTube video he shared with the audience today. He was saying that driving in Jerusalem is Boston-plus, which sounds pretty deadly to me. But tell me, what did we see in that video? What was so impressive about it?

PHIL MAGNEY: Well, I think basically they were showing the ability to be agile when faced with a lot of situations… Like in that video, it’s showing a tandem bus, which is very, very big, so it occludes a lot. So you have to take that into consideration. And they’re showing how it’s coping with the pedestrian. But it’s also showing a little bit of assertiveness, its ability to be agile. Because the fact is that if you are overly cautious with your AV stack, you’re going to be painfully slow. And that’s going to be bad for everyone.

JUNKO YOSHIDA: Who would pay for a robotaxi when it’s too slow to get there, right?

PHIL MAGNEY: Exactly. Exactly. Time is money and valuable, rather.

JUNKO YOSHIDA: So what was the timeline that Mobileye talked about today? He was saying that we won’t get to the consumer AV until we nail down the robotaxi, right?

PHIL MAGNEY: One thing he made perfectly clear is that the robotaxi is really going to be coming in as transportation as a service, rather than any kind of consumer Level 4 or 4+ vehicles. It’s going to be led by a handful of companies that are going to deploy it as a commercial business, which has always kind of been my expectation as well. Again, I can’t recall all the names off the top of my head, but they’re working with several companies on that, and some new ones. NIO in China, exactly. I think they’re making great progress.

And then of course the other segment is the evolution of ADAS into Level 2+, and they seem to have a very solid plan for that and a lot of customers lined up and programs in place for that, too.

JUNKO YOSHIDA: All right. Very good. Thank you so much.

BRIAN SANTO: During the briefing, Shashua talked about the importance of executing on the robotaxi business. He claimed that a fully autonomous “commercial” robotaxi today could cost between $10,000 and $15,000. But with enough experience and insights gained from the robotaxi business and Mobileye’s own development of new hardware, a “consumer” automated vehicle might be possible at less than – get this – $5,000 by 2025, he said.

Toyota traditionally has a splashy presentation every year at CES, and it traditionally talks about cars. But things change. Frequent EE Times contributor David Benjamin – Benji – attended the presentation.

So, Benji, you walked into a press conference expecting to hear all about cars and a smart cities press conference broke out. Is that right?

DAVID BENJAMIN: Yeah. It was the Toyota press conference. Akio Toyoda, the chairman of Toyota, was the speaker. And in the last few years, of course, Toyota presented concept cars and talked about automated driving. And usually featured Gil Pratt, who is one of the really smart guys talking about autonomous driving. And one of the few who warned that it’s going to be a longer haul to get to the Level 5 family car than the industry really wants. But instead, Toyoda-san talked about building a woven city, which would be a smart city that interweaves all the elements of artificial intelligence from scratch in the area around Mt. Fuji in Japan, close to where my wife’s family lives. And I’m encouraging them to buy up real estate.

BRIAN SANTO: Good advice. So did he present this as a concept community? Or is Toyota actually building it?

DAVID BENJAMIN: That’s a good question. He introduced Bjarke Ingels, who’s head of the Bjarke Ingels Group, BIG, from Copenhagen, who is the architect who has drawn up these spectacular artist’s renderings of the place. I think that, in some respect or another, Toyota must have control over the turf, because they were presenting it as a future reality. But they also said that they’re going to build the whole thing virtually before they build it physically. And they were inviting investors and innovators and anyone inspired by living a better life in the future to contribute to the project. So I think that it’s not necessarily going to come around in the next six months or so.

BRIAN SANTO: So this isn’t exactly a complete and utter departure from vehicle technology and electronic vehicle technology. So they’re going to be talking about a smart city with all that entails – parking meters and home management and that sort of thing – but they’re also talking about some of the infrastructure for delivering things – delivering groceries, delivering people. So it looks like there is a vehicle component to this smart city. Tell us what Toyota discussed about that.

DAVID BENJAMIN: One of the things that Gil Pratt was talking about the last couple of years was the real immediate future for autonomous vehicles. It will be things like delivery vehicles and shuttles and trams that follow a fixed route, that essentially go in a loop from Point A to Point B, back to Point A again. And deliver things, move people around, pick up nannies, drop them off, things like that. And my impression of this woven city is that virtually all the vehicles on the streets where vehicles will be allowed will be these sorts of autonomous, loop-driven vehicles delivering things. I don’t think there will be any parking meters. And I think people’s private cars– assuming they have them– will be parked beneath or around these George Jetson condos that are going to go up. And if you’re commuting someplace else, going to Tokyo or Chigasaki or Yokohama, you’re going to be picking up your car in a garage and leaving the woven city and driving on regular roads. And probably not using an autonomous car.

BRIAN SANTO: We should mention first that this is the initial concept. “City” might be too grand a word to describe it, right?

DAVID BENJAMIN: Well, the initial population is going to be 2,000. So we’re talking about a smart village. Who knows how big it could get? Again, I talked about the fact that it’s close to Mt. Fuji, and if it gets big enough, they’ll probably have to tear down Mt. Fuji and build suburbs.

BRIAN SANTO: Or drill into it, one or the other, right?

DAVID BENJAMIN: Well, if you could get rid of those tourists who go up and down the mountain all the time, it wouldn’t necessarily be a bad thing.

BRIAN SANTO: Thanks, Benji.

And so we conclude our third day of coverage of the 2020 Consumer Electronics Show. This is our final podcast live from CES. We invite you to listen to our first two as well.

Our regularly scheduled podcast is our Weekly Briefing, which we usually present every Friday. Well, not this Friday. We’ll resume our regular schedule the Friday after next.

This podcast is produced by AspenCore Studio. It was engineered by Taylor Marvin and Greg McRae at Coupe Studios. The segment producer was Kaitie Huss. The transcript of this podcast can be found on EETimes.com. Find our podcasts on iTunes, Spotify, Stitcher, Blubrry or the EE Times web site. Signing off from CES in Las Vegas, I’m Brian Santo.

T&M Solutions for Automotive: Visions into Reality

By Maurizio Di Paolo Emilio

LAS VEGAS — Keysight is supporting the automotive industry through its latest innovations shown at the Consumer Electronics Show 2020. The technological transformation driven by IoT, 5G, and vehicle-to-everything (V2X) communications poses challenges that require sophisticated test and measurement solutions to maintain automotive safety in the era of autonomous driving.

Next-generation vehicles must support applications in multiple areas, such as infotainment, telematics, driver assistance, and autonomous driving, with maximum reliability, safety, and privacy.

V2X can be used in many different ways to improve road safety while leveraging the existing smart traffic infrastructure. 5G is a significant challenge for all players in the wireless market. It will take time for it to be fully deployed worldwide and will not be delivered in a single major release — with a significant network deployment program planned for 2020 and beyond.

Engineers are working on autonomous driving technologies to make our roads much safer. Their “weapons of choice” are accurate test and measurement solutions — essential tools to make sure their designs work perfectly. The technology that will enable vehicles to make the leap to standalone driving and V2X platforms (implementing solutions to leverage big data) will improve the driver experience and the transport system as a whole. The 5G standard will also support next-generation infotainment systems, with over-the-air updates and multimedia downloads.

Keysight provides automotive designers and manufacturers with the latest innovations in design and test solutions to help create high-quality, high-performance products while mitigating safety risks with comprehensive solutions for e-mobility charging and interoperability testing, inverter efficiency, radar sensor technology, and safety.

“At CES, we are seeing automotive innovation orders of magnitude smaller than just a few years ago — smaller, more capable electronics with higher levels of integration,” said Jeff Harris, vice president of global marketing at Keysight. “At Keysight, we are excited to see how fast our customers innovate once they have the right design and test capabilities.”

Applications of these technologies in the automotive market are proliferating. In addition to sensor technology, connection technologies have also seen developments. It all started with elite vehicles, but more and more trucks and utility cars are now using radar sensors, for example, mainly for increased safety and convenience. Keysight’s automotive radar research and development solution offers full coverage at 77/79 GHz, analysis and signal-generation bandwidth of over 5 GHz, and excellent displayed average noise level (DANL) performance.

The aim, first and foremost, will be a reliable connection between vehicle and vehicle and between vehicle and infrastructure. 5G is ready for this revolution and for the growing demands of the market. The 5G Automotive Association (5GAA) consortium is working to define the standards that will govern this revolution.

Keysight is partnering with leading global wireless regulators, companies, and universities to enable the next generation of wireless communication systems. Many of its solutions allow testing of safety systems and various traffic scenarios, exercising the vehicle according to the type of driving, the person, and the operating environment. Keysight’s 5G network emulation solutions enable the device ecosystem to simplify workflows, share information, and accelerate time to market.

The data rate of in-vehicle systems continues to increase, and the signal integrity of interconnections dramatically affects system performance. Analyses of interconnection performance in both time and frequency domains are therefore critical to ensure a reliable system.

Toyota Touts AI-Driven Dream City

By David Benjamin

LAS VEGAS — In announcing plans by Toyota Motor Corporation to build a new city in Japan, fueled entirely by renewable resources and operated by an intricate web of artificial intelligence (AI), company chairman Akio Toyoda told a first-day media audience at the Consumer Electronics Show (CES), “You must be thinking, ‘Has this guy lost his mind?’

“Are we looking at a Japanese version of Willy Wonka?”

Standing small against a towering screen that showed artist’s renderings of a futuristic city of 2,000 people, nestled in the shadow of Mount Fuji, Toyoda might have indeed resembled Willy Wonka, cloistered from the world in his shuttered chocolate factory. But what the Toyota chief did not say was that, in the movie, Willy Wonka was always a step ahead of everyone else.

By leapfrogging Toyota’s focus on AI into smart-city concepts, Toyoda was tacitly confirming the widespread consensus that AI in vehicles — the dream of a fully autonomous family sedan in the immediate future — is stuck in neutral.

Indeed, almost completely absent from Toyoda’s presentation was any mention of automobiles. Toyota concept cars and the Toyota Research Institute — the company’s pride and joy at CES in previous years — were off the agenda.

And the Woven City is going to be th-i-s big. (Image: EE Times)

Mounting safety concerns and technology issues have led to an industry-wide retreat from full autonomy. Instead, carmakers and their technology allies are touting refinements in advanced driver-assistance systems (ADAS) and the development of safety standards that might make autonomous vehicles (AVs) acceptable to a dubious public afraid of seeing their kids run over by Robbie the Robot.

Meanwhile, suggested chairman Toyoda on Monday, why not build George Jetson’s condo?

Toyoda introduced Bjarke Ingels, founder of the Bjarke Ingels Group (BIG), a Copenhagen-based architectural enterprise chosen to partner with Toyota in the creation of a future urban model called a “Woven City” by Toyoda and Ingels. Ingels explained that three forms of mobility would interweave in this city.

Ingels said that the Woven City would have some traditional streets, where pedestrians, bicyclists, cars, and other vehicles share space. But the dominant avenues would be an urban promenade, wide and exclusive to pedestrians, and a “linear park” designed for walking, strolling, picnicking, and — Ingels suggested — a greater level of social interaction than is common in most cities today.

Taking over from Ingels, Toyoda sketched a vision of high-rise “blocks” surrounded by greenery, each roofed with photovoltaic tiles to convert sunlight to energy. Underground, said Toyoda, there will be a hydrogen power plant to provide additional energy. Toyota, without getting much traction on the idea, has been a leader in promoting hydrogen as a fuel source.

Each unit in the Woven City’s residences, of course, would feature lots of robotics, “with people, buildings, and vehicles all connected.” The key indoor technology would be sensor-based AI that intuits the household’s needs before any human notices: stocking the fridge, adjusting the heat, collecting the trash, monitoring the baby, housebreaking the puppy, and polishing the doorknobs.

And every unit, he added, “will have a spectacular view of Mount Fuji.”

The streets, said Toyoda, will be filled with automated Toyota vehicles delivering goods; shuttling babies, nannies, and senior citizens; sweeping the streets; and facilitating more human contact — which, said Toyoda, must be the objective of AI in its practical applications.

However, anyone who hears about Akio Toyoda’s “personal field of dreams” and starts hoarding real estate in Kanagawa and Yamanashi prefectures near Mount Fuji will be jumping the gun. All these Woven City plans, cautioned Toyoda, will be worked out first in virtual reality.

He said Toyota is inviting like-minded companies and individuals “and anyone inspired to improve the way we live in the future” to help test the possibilities of the Woven City “in both the virtual and physical worlds.”

The Toyota chairman bowed out without providing a timetable for his dream.

New TI Processors Target ‘Practical’ ADAS

By Junko Yoshida

LAS VEGAS — Texas Instruments is introducing at the Consumer Electronics Show this week ADAS and gateway processors — TDA4VM and DRA829V — built on TI’s latest Jacinto platform and designed to enable mass-market ADAS vehicles.

This move underscores the decision by several leading car OEMs to scale back from an original commitment to pioneer fully autonomous vehicles.

In a recent interview with EE Times, Curt Moore, general manager and product line manager for Jacinto processors, acknowledged that TI, too, faced the dilemma of “where we want to invest our time” for its next-generation automotive processors. TI’s emphatic answer was to design auto-grade processors that can address “edge, safety, and security” but zero in on “semiconductor affordability and accessibility.”

“We wanted to develop automotive processors that are scalable and applicable to a wider set of vehicles, including low-cost and affordable cars for younger drivers and those with low income,” explained Moore.

ADAS and gateway processors

TDA4VM processors are for ADAS, while DRA829V processors are developed for gateway systems “meeting with all the plumbing requirements,” noted Moore. They include specialized on-chip accelerators, according to TI, to expedite data-intensive tasks.

Both TDA4VM and DRA829V processors also incorporate a functional safety microcontroller so that OEMs and Tier One suppliers can “support both ASIL-D safety-critical tasks and convenience features with one chip,” said TI.
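
As a rough illustration of what that single-chip split means in software (a toy Python sketch with invented names and task ratings, not TI’s API): safety-rated work is pinned to the lockstep safety MCU, while unrated convenience code runs on the application cores.

from dataclasses import dataclass
from enum import Enum

class Asil(Enum):
    QM = "QM"       # quality-managed, no safety rating
    B = "ASIL-B"
    D = "ASIL-D"    # highest automotive safety integrity level

@dataclass
class Task:
    name: str
    rating: Asil

def place(task: Task) -> str:
    # Mixed-criticality placement: ASIL-rated tasks go to the
    # lockstep safety MCU; everything else to the application cores.
    return "safety-mcu" if task.rating is not Asil.QM else "application-cores"

for t in [Task("emergency-brake-monitor", Asil.D), Task("surround-view-ui", Asil.QM)]:
    print(t.name, "->", place(t))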

Perhaps most importantly, both the ADAS and gateway processors share one software platform. Moore said, “Developers can use the same software base from high-end to low-end vehicles.”

Asked about TI’s two new processors, Phil Magney, founder and principal at VSI Labs, told EE Times, “I see them as great companions, as both are necessary to support the latest trends in software-defined architectures. Together, these processors can take care of the heavy processing requirements of automated driving.”

Magney explained, “The environmental modeling gets very processor-intensive when you consider all the inputs necessary to support the task in real time. Furthermore, you need the data capacities, timing, and synchronization of all the sensor data. On top of this, you need safety and security, which are built into these chips.”

The right level of autonomy?

With the new processors, TI hopes to enable the right level of autonomy in new vehicles.

Calling Level 4 and Level 5 cars “still in the development stage,” Moore pointed out “corner cases” that fully autonomous vehicles have yet to solve and “well-defined use cases” [and operational design domains] that must be spelled out for higher-level autonomous vehicles. Given these challenges to full autonomy, Moore said, “This will be a slow journey” from the current Level 2 and Level 2+ vehicles.

TI, however, isn’t swearing off higher-level ADAS functions. Indeed, TI’s TDA4VM is designed to achieve much better visibility and the speeds necessary for on-chip analytics.

Specifically, the TDA4VM supports high-resolution 8-megapixel (MP) cameras that see farther, possibly even in fog and rain. TDA4VM processors can also simultaneously operate four to six 3-MP cameras.
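
Those camera counts translate directly into bandwidth pressure on the chip. A back-of-the-envelope calculation (our own assumptions of 12 bits per pixel and 30 frames/s, not TI’s figures) shows the raw pixel rates involved:

# Rough, illustrative arithmetic: raw (uncompressed) camera data rate.
def raw_rate_gbps(megapixels: float, bits_per_pixel: int = 12, fps: int = 30) -> float:
    return megapixels * 1e6 * bits_per_pixel * fps / 1e9

print(f"one 8-MP front camera: {raw_rate_gbps(8):.1f} Gbit/s")     # ~2.9 Gbit/s
print(f"six 3-MP cameras:      {6 * raw_rate_gbps(3):.1f} Gbit/s") # ~6.5 Gbit/s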

Sameer Wasson, vice president and general manager of TI’s processor business unit, told EE Times that the new ADAS processors are also capable of fusing other sensors — including radar, LiDAR, and ultrasonic. “Our goal is to enable carmakers and Tier Ones to develop scalable but practical cars.”

TI’s new ADAS processor TDA4VM is not only highly integrated but also capable of fusing a variety of sensory data. (Source: TI)

Magney believes that the TDA4VM is scalable in the sense that it can “handle full 360° situational awareness for high-end ADAS or automated driving applications.” 

Beyond the ADAS processor’s ability to efficiently manage multilevel processing in real time, the key is that it can do the job within the system’s power budget. “Our new processors execute high-performance ADAS operations using just 5 to 20 W of power, eliminating the need for active cooling,” TI claimed.

Deep learning

TI also claimed that the latest Jacinto platform brings enhanced deep-learning capabilities. Noting that the platform offers full programmability, Moore said that if OEMs or Tier Ones plan to set up their own vision/camera/sensor fusion, the SoC lets them run their own perception stack.

A few analysts, however, are frustrated with the scant details that TI has provided for its ADAS processors. “Now TI says the TDA4VM can handle deep learning, but they don’t disclose any specs or details, let alone its performance,” said Mike Demler, a senior analyst at The Linley Group. Asked how TDA4VM might fare against Intel/Mobileye’s EyeQ chips, he said, “Now TI mentions AEB [automatic emergency braking] and self-parking, which require at least [Mobileye’s] EyeQ3 capabilities. But again, how much performance? We don’t know.”

VSI Labs’s Magney also noted that it won’t be easy to compare TDA4VM with Mobileye’s EyeQ chips. He noted, “Mobileye’s tight integration of processor and algorithms makes them a strong incumbent in the field.” TI’s edge might be that “as the industry moves from ADAS to automated driving, OEMs will desire more freedom to develop their own algorithms.”

Software-defined car

TI, too, is keeping in step with carmakers’ desire to enable over-the-air (OTA) updates — with a goal to make software-defined cars possible.

“OTA generally isn’t possible without architecture upgrades inside a car,” observed Moore. Given the criticality of secure connectivity necessary for software updates, “I don’t see car OEMs going for OTA without a gateway processor or with just a legacy dumb MCU,” he added.

To that end, Moore described TI’s DRA829V processor as offering carmakers “a huge step function in the beginning of their journey to secure OTA.”
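
Stripped to its essence, the gateway’s job in secure OTA is to refuse any image whose integrity it cannot prove before a target ECU ever sees it. Here is a minimal Python sketch of that gate, assuming the expected digest comes from an update manifest whose signature has already been checked against the OEM’s public key (key handling, rollback protection, and A/B flashing are all omitted; this is illustrative, not TI’s implementation):

import hashlib
import hmac

def verify_ota_image(image: bytes, expected_sha256_hex: str) -> bool:
    # Hash the downloaded image and compare against the manifest value
    # in constant time to avoid timing side channels.
    digest = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(digest, expected_sha256_hex)

def apply_update(image: bytes, expected_sha256_hex: str) -> None:
    if not verify_ota_image(image, expected_sha256_hex):
        raise RuntimeError("OTA image rejected: digest mismatch")
    # ...write to the inactive flash bank and switch over only after
    # a successful boot, so a bad update can be rolled back.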

TI noted that its new gateway processors “manage higher volumes of data and support evolving requirements for autonomy and enhanced connectivity.”

TI claims that it is the first to integrate the PCIe and GbE/TSN into its gateway processor. (Source: TI)

TI also touted the DRA829V processor as “the first in the industry to incorporate a PCIe switch on-chip in addition to integrating an eight-port gigabit TSN-enabled Ethernet switch for faster high-performance computing functions and communications throughout the car.”

So how big a deal is it for TI to integrate the PCIe and GbE/TSN into its gateway processor DRA829V?

Demler said, “Looks like it has an eight-port switch, which is more than the 2× switch on NXP’s recently announced S32G.” But, he added, the DRA829V processors don’t exactly match up with NXP’s S32G, which was designed as a full-fledged network processor.

But on a higher level, both NXP and TI are addressing the same trends in automotive architecture, Magney summed up. “You have massive amounts of data to handle and you need the plumbing to support that.”

Availability

TI’s Moore noted that both TDA4VM and DRA829V samples have already been in the hands of a large number of customers since May.

According to TI, “Developers can get started immediately with Jacinto 7 processor development kits and buy the new TDA4VMXEVM and DRA829VXEVM evaluation modules on TI.com for $1,900 each.”

Pre-production TDA4VM and DRA829V processors are available now, only from TI, at $97 in 1,000-unit quantities. Volume production is expected in the second half of 2020.

Siemens-Arm Lets Car OEMs Envision 2025 Auto Architecture

By Junko Yoshida

LAS VEGAS — Bolstered by its new partnership with Arm, Siemens is ready to ask car OEMs some tough questions at the Consumer Electronics Show. Namely: 1) Do you already know what the architecture of your 2025 automotive platform looks like? 2) If so, have you verified your whole vehicle?

In an interview with EE Times, David Fritz, global technology manager, Autonomous and ADAS at Siemens, explained that if carmakers are still dithering with issues like which CPU, GPU, or MPU they should use in next-generation cars, “they’ve already missed the boat.”

Fritz described recent automotive history as making “a series of incremental changes to advanced driver-assistance systems (ADAS).” While carmakers might feel compelled to add hot new features that pop up in the market, too many have resorted to “one Band-Aid solution after another” for new models, he said, arguing that they’ve been doing so without really thinking about the architecture of 2025 vehicles. That’s one reason, he said, why car OEMs have failed to chart a single migration path from ADAS to autonomous vehicles (AVs).

Envision your vehicle platform of 2025
Siemens’s new partnership with Arm seeks to alter the ad hoc — and often siloed — design choices that automakers have been making for vehicle development.

What do carmakers want in their auto architecture of 2025? (Source: Siemens)

The partnership will combine pre-built and pre-verified Arm IPs with Siemens’s PAVE360 — billed as an all-encompassing validation and simulation system designed for autonomous vehicle hardware and software development. The goal is to enable automakers and suppliers to “envision their next-generation automotive platform,” said Fritz.

He explained that Siemens’s PAVE360 extends “digital twin simulation beyond processors to include automotive hardware and software subsystems, full vehicle models, fusion of sensor data, traffic flows, and even the safety of autonomous vehicles.”

If a carmaker chooses a certain processor for specific applications in, say, a 2020 vehicle, what could happen is that the new chip might not fit the enclosure size required in a 2025 car platform. Similarly, it might not meet the heat dissipation threshold demanded in the 2025 architecture. “You make one decision now, which turns out to make a bang-bang-bang domino effect in the rest of the whole 2025 car model,” said Fritz.

Put more bluntly, Fritz said that a carmaker’s bad decision six years ago could end up killing the entire new model for 2025.

It’s important for carmakers and Tier Ones to be able to “simulate and verify subsystem and system-on-chip designs and to understand how they perform within a vehicle design from the silicon level up, long before the vehicle is built,” he explained.
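
Fritz’s warning reduces to a simple discipline: encode the 2025 platform’s budgets as explicit constraints and test every present-day component choice against them. A toy Python illustration of the idea, with invented numbers and nothing to do with PAVE360’s actual interface:

from dataclasses import dataclass

@dataclass
class PlatformBudget:           # targets for the 2025 vehicle platform
    max_power_w: float          # electrical budget for the module
    max_heat_w: float           # what the enclosure can dissipate passively
    max_board_area_mm2: float   # enclosure/board space available

@dataclass
class CandidateSoC:
    name: str
    power_w: float
    heat_w: float
    board_area_mm2: float

def violations(soc: CandidateSoC, budget: PlatformBudget) -> list[str]:
    """List the 2025 constraints that a present-day chip choice breaks."""
    out = []
    if soc.power_w > budget.max_power_w:
        out.append("power")
    if soc.heat_w > budget.max_heat_w:
        out.append("thermal")
    if soc.board_area_mm2 > budget.max_board_area_mm2:
        out.append("enclosure size")
    return out

budget_2025 = PlatformBudget(max_power_w=20, max_heat_w=15, max_board_area_mm2=4000)
chip_2020 = CandidateSoC("vision SoC", power_w=35, heat_w=28, board_area_mm2=3500)
print(violations(chip_2020, budget_2025))  # ['power', 'thermal'] -> the domino effect starts here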

Siemens’s Fritz last year told EE Times, “I have actually seen a block diagram of an AV SoC internally designed by every major car OEM.” He stressed, “Tesla isn’t alone. Every carmaker wants to control its own destiny.” If so, with how many of those major car OEMs is Siemens already working today — to simulate and verify their whole car architecture of 2025?

Fritz said, “We’ve been working with a few,” without naming names. When asked to describe a 2025 vehicle architecture, he said, “While they all come from different directions, surprisingly, they appear to come to a similar platform.”

Why PAVE360?
The power of PAVE360 lies in its ability to simulate certain functions, SoCs, subsystems, or software in the context of an entire vehicle. Phil Magney, founder and principal advisor of VSI Labs, last year told EE Times that PAVE360 is “pretty unique.” He explained that the foundation of Siemens’s PAVE360 is a concept called the “digital twin.” Noting that a digital twin is a duplicate (simulated) version of the real world, Magney said, “For developers of vehicles or components, this literally means they can fully simulate their targets at any scale, whether it is a chip, software component, ECU, or complete vehicle.”

In short, as Fritz claimed, the advantage of PAVE360’s methodology is its ability to correlate simulation with the physical platform.

Fritz pointed out a huge difference between simulating an SoC on a PC, for example, and simulating it in the context of a whole vehicle. “The two [approaches and their results] are so far apart.”

What’s in it for Arm?
The advantage for Siemens in working with Arm is clear. Given the ubiquity of Arm cores in a host of automotive chips, Siemens’s deepened relationship with Arm only broadens PAVE360’s appeal.

But what’s in it for Arm?

First, associating Arm core designs with PAVE360 enhances the credibility of Arm as a provider of IP cores for the automotive market.

Mark Fitzgerald, associate director, Automotive Practice at Strategy Analytics, added, “Arm gains the ability for easier, faster design of custom chips for ADAS and autonomous applications.” He said that “automakers are moving to custom silicon (i.e., Tesla’s Full Self-Driving chips) rather than relying on off-the-shelf solutions, with some OEMs working on in-house solutions.” He noted, “The teaming up allows chip designers to use Arm architecture and IP along with Siemens’s PAVE360 to create custom ADAS and autonomous driving SoCs in a virtual environment.”

PAVE360 and Arm automotive IP: digital twin from concept to end of life (Source: Siemens)

Asked how long Siemens has been working with Arm, Siemens’s Fritz told us “almost a year.” But the relationship has grown much closer since last summer, when the legal arrangement between the two companies was sorted out; they have since been engaged in detailed, weekly engineering calls. Under the agreement, Siemens today has access to all of Arm’s wide-ranging IP cores. Siemens is now in a position to discuss with Arm specific core designs, such as Arm’s Split-Lock logic or big.LITTLE architecture, as measured against particular real-world applications.

So what sort of changes might the industry expect from Arm in its future cores or processor architecture for the automotive market?

Strategy Analytics’ Fitzgerald told EE Times, “The likely trend would be to produce a single, very powerful chip for an ADAS or autonomous driving domain controller rather than distributed computing that is used today. OEMs will choose the best mix of centralized versus distributed computing based on application.”

What about other processor IP guys?
To be clear, Arm isn’t the only supplier of processor core IP. Mobileye, now an Intel company, has used MIPS cores for years. Ceva and Imagination might also be seeking design wins in chips for digital cockpits or automotive perception, for example.

Asked whether Siemens plans to work with other IP suppliers, Fritz said, “The beauty of our system is that they can plug their cores in the cloud” tied to PAVE360, so those cores can be evaluated in the context of a whole vehicle.

Fitzgerald noted that the biggest risk lies in missing out on the collaboration PAVE360 enables among OEMs, semiconductor vendors, software providers, and Tier One suppliers.

He said, “Chip vendors can gain by working with Arm IP and PAVE360 tools to quickly validate and verify chip design more efficiently and cost-effectively.” However, he cautioned, “Chip vendors can also be threatened if an OEM or even a large-tier supplier decides to use Arm IP and the PAVE360 tools to design chip solutions — taking the chip vendors out of the design/validation loop.”

Competitors to Siemens
Other vendors are also offering similar verification tools, according to Fitzgerald.

Cadence, for example, provides design tools spanning PCB, system-in-package (SiP), and SoC fabrics, making coherent, integrated ECU design and analysis possible.

ANSYS, on the other hand, enables customers to do “multi-physics simulations to simultaneously solve power, thermal, variability, timing, electromagnetics, and reliability challenges across the spectrum of chip, package, and system to promote first-time silicon and system success.”

Last fall, Synopsys launched native automotive solutions “optimized for efficient design of autonomous driving and ADAS SoCs.”

Vector, meanwhile, claims to offer “comprehensive solutions” for developing ADAS systems in the form of software and hardware tools and embedded components. These include measuring instruments for acquiring sensor data, tools for checking and optimizing ECU functions, software components, and algorithm design.

Siemens’s Fritz, however, made it clear that, overall, nobody does the job as comprehensively as PAVE360.

NXP Launching Auto Network Processor

By Junko Yoshida

LAS VEGAS — NXP Semiconductors is coming to the Consumer Electronics Show to launch a new “Automotive Network Processor.”

NXP’s S32G is “a single-chip version” of two processors — an automotive microprocessor and an enterprise network processor — combined, said Ray Cornyn, vice president and general manager, Vehicle Dynamics Products. The S32G functions as a gateway processor for connected vehicles, as it offers enterprise-level networking capabilities. It also enables the latest data-intensive ADAS applications while providing vehicles with secure communication capabilities, he explained.

What NXP S32G entails (Source: NXP Semiconductors)

A closer look inside the S32G reveals a car OEM wish list for next-generation vehicles in 2021 and beyond.

Among the wishes are: over-the-air software updates — à la Tesla — to make vehicles “software upgradeable,” a shift to new domain-based vehicle architectures (i.e., consolidation of ECUs), beefed-up security features (including intrusion detection/monitoring), the vehicle’s ability to analyze data on the edge without constantly depending on the cloud, and upgraded safety to ASIL D.

In “connected vehicles,” car OEMs are looking for new business opportunities, including subscription models and usage-based insurance.

“It is a worldwide trend among car OEMs to bring all these new business opportunities and capabilities to next-generation vehicles,” said Brian Carlson, director, product line management for vehicle network processors at NXP.

If a software-upgradeable car is the automotive industry’s objective, the S32G seems designed to bring car OEMs a step closer.

Phil Magney, Founder and Principal at VSI Labs, observed that S32G “is designed to serve as the gateway to centralized domain processing, which is the supporting architecture of the software-defined car. Furthermore, new vehicle architectures must support tremendous volumes of data through multiple interfaces.”

He noted, “Up until this point, networking has been a bit of an afterthought. But in reality, it is quite critical since there is so much data moving around the vehicle. The S32G can handle all the plumbing and associated security, timing, and safety requirements.” He added that there are many network controllers designed by major chip suppliers and Tier Ones. But among existing network processors, “I have not seen anything that aggregates everything into one chip like the S32G.”

The new processor is already sampling, and car OEMs are currently testing S32G, said Carlson. To demonstrate the appeal of S32G among key automotive players, NXP, in its press release, shared a quote from Bernhard Augustin, Audi’s director of ECU Development Autonomous Driving: “We found the unique combination of networking, performance, and safety features of the S32G processor to be ideal for use in our next-generation ADAS domain controller.”

S32 family of processors
S32G is part of NXP’s S32 family of processors based on a unified architecture of high-performance MCUs, MPUs, application-specific acceleration, and interfaces.

The S32 family, designed to be scalable, allows developers to create software in a uniform environment across application platforms.

The goal is to let developers reuse their expensive R&D work, shortening time to market as the automotive industry copes with rapid changes in vehicle architectures over the next several years.
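In practice, that kind of reuse usually means writing application code once against a common interface, with each S32 part supplying its own backend. The sketch below is a hypothetical illustration of the pattern; the class names are invented for this example and are not NXP’s actual SDK.

    # Hypothetical pattern sketch; class names are invented, not NXP's SDK.
    from abc import ABC, abstractmethod

    class CanDriver(ABC):
        @abstractmethod
        def send(self, can_id: int, payload: bytes) -> None: ...

    class S32GCan(CanDriver):
        def send(self, can_id, payload):
            print(f"S32G backend: id=0x{can_id:X} data={payload.hex()}")

    class S32KCan(CanDriver):
        def send(self, can_id, payload):
            print(f"S32K backend: id=0x{can_id:X} data={payload.hex()}")

    def application(drv: CanDriver):
        # The (expensive) application logic is written once...
        drv.send(0x123, b"\x01\x02")

    application(S32GCan())   # ...and reused across platforms.
    application(S32KCan())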

NXP noted that the platform maintains “automotive quality, reliability, and ASIL D performance across multiple application spaces throughout vehicles.”

Vehicle network processor
First and foremost, the S32G provides what NXP characterizes as an unprecedented level of networking and processing capability.

As shown in NXP’s block diagram, the S32G incorporates lock-step Arm Cortex-M7 microcontroller cores and an industry-first ability to lock-step clusters of Arm Cortex-A53 application cores.

As the amount of data collected and transported inside a vehicle grows exponentially, the processor’s ability to accelerate automotive networks and Ethernet packets becomes increasingly critical, Carlson explained.

It’s one thing to tout a networking processor’s ability to handle large volumes of data. It’s quite another if the chip can actually accelerate data processing. Without acceleration, said Carlson, the vehicle network can easily bog down, making it impossible for the vehicle to deliver critical services with the deterministic network performance that car OEMs demand.

The S32G is designed to offload transport-layer processing so that its communication engine can achieve low latency, he noted. The chip features “network acceleration blocks” tailored to automotive and Ethernet networks.

Included in the S32G’s network features are 20 CAN/CAN FD interfaces, four Gigabit Ethernet interfaces, and a PCI Express Gen 3 interface.

As a comparison, Magney noted that Tesla “supports six CAN channels, four Ethernet channels, and eight serial lines for the cameras.” Calling Tesla “a proxy for future vehicle architectures,” Magney said, “Not surprisingly, NXP supplies Ethernet and CAN controllers to Tesla.”
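Those interfaces imply the classic gateway job: aggregating frames from many CAN buses and forwarding them over Ethernet toward a domain controller. The sketch below models that role conceptually; the encapsulation format is made up for illustration and is not an NXP or automotive standard.

    # Conceptual gateway sketch; the framing format is invented here.
    import struct

    def route_can_to_ethernet(bus_id: int, can_id: int, payload: bytes) -> bytes:
        """Wrap a CAN frame in a made-up Ethernet-payload encapsulation."""
        # header: source CAN bus index, CAN ID, payload length
        return struct.pack(">BIB", bus_id, can_id, len(payload)) + payload

    # A frame from CAN bus 7 headed for the domain controller:
    frame = route_can_to_ethernet(bus_id=7, can_id=0x1A0, payload=b"\x10\x20")
    print(frame.hex())   # 07000001a0021020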

Other key features integrated inside the S32G are security and safety.

The S32G, like all other S32 platform processors, embeds high-performance hardware security acceleration, along with public key infrastructure (PKI) support for trusted key management, enabled by its Hardware Security Engine (HSE). The firewalled HSE is the root of trust, supporting secure boot, providing system security services, and protecting against side-channel attacks.
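Conceptually, the HSE’s secure-boot role looks like the sketch below: each boot stage is measured against a provisioned reference before control passes on. The digests and stage names here are hypothetical; real HSE firmware keeps its keys and references in hardware-isolated storage.

    # Conceptual secure-boot sketch; stages and images are hypothetical.
    import hashlib

    TRUSTED_DIGESTS = {   # provisioned at manufacture (invented values)
        "bootloader": hashlib.sha256(b"bootloader-image-v1").hexdigest(),
        "application": hashlib.sha256(b"app-image-v1").hexdigest(),
    }

    def verify_stage(name: str, image: bytes) -> bool:
        """Measure the stage and compare against the trusted reference."""
        return hashlib.sha256(image).hexdigest() == TRUSTED_DIGESTS[name]

    # Boot proceeds only while each stage verifies against the root of trust.
    for stage, image in [("bootloader", b"bootloader-image-v1"),
                         ("application", b"app-image-v1")]:
        assert verify_stage(stage, image), f"{stage} failed verification: halt"
        print(f"{stage}: verified, handing off")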

As for safety, the S32G offers full ASIL D capability. Its lock-step Cortex-M7 cores and lock-stepped Cortex-A53 clusters enable new levels of safety performance with high-level operating systems and larger memory support.
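The lock-step idea itself is simple to illustrate: two identical cores execute the same inputs, and a checker compares their outputs at every step, trapping to a safe state on any divergence. The sketch below models the concept only, not NXP’s silicon.

    # Conceptual lock-step sketch; this models the idea, not real hardware.
    def core(x: int) -> int:
        return (x * 3 + 1) & 0xFFFFFFFF   # stand-in for one compute step

    def lockstep_step(x: int, inject_fault: bool = False) -> int:
        a = core(x)                                 # primary core
        b = core(x) ^ (1 if inject_fault else 0)    # redundant shadow core
        if a != b:
            raise RuntimeError("lock-step mismatch: enter safe state")
        return a

    print(lockstep_step(41))   # both cores agree: 124
    # lockstep_step(41, inject_fault=True) would raise and trap to safe state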

Versatility of S32G
NXP’s Carlson made the point that the beauty of the S32G lies in its versatility. It can be used in many different places inside a vehicle, ranging from a gateway processor to a domain controller to an ADAS safety processor.

Where in a vehicle S32G can be used (Source: NXP)

VSI Labs’ Magney observed, “The S32G appears complementary to many of the AV or ADAS domain controllers because it consolidates a handful of chips into one.” He added, “Otherwise, the functionality of the S32G would be scattered with multiple transceivers and controllers to handle all the data traffic. The S32G also contains all the critical timing elements, memory, security, and network accelerators necessary to support all the data being passed around inside the vehicle.”
