
Weekly Briefing: Friday, January 24, 2020

BRIAN SANTO: I’m Brian Santo, EE Times Editor in Chief, and you’re listening to EE Times on Air. This is your Briefing for the week ending January 24th.

In this episode…

During its first few decades, AMD acted like Intel’s kid brother, tagging along and mimicking whatever Intel did. Recently, however, the company has established a reputation for independent innovation. This week we’ve got an interview with AMD CTO Mark Papermaster, one of the architects of the bold new AMD…

…also – a conversation with Ron Black, the CEO of Imagination Technologies, which seems to have its fingers in nearly every emerging technological trend out there…

…and our editorial director, Bolaji Ojo, checks in with the key question for the electronics industry in 2020: Where should everyone spend their money?

AMD was among the early wave of semiconductor companies born in the late ’60s and early ’70s, many founded by former employees of Fairchild Semiconductor. Other companies from that era – National Semiconductor, LSI Logic, Mostek – are gone (as is Fairchild itself), but AMD has been remarkably resilient across the decades.

Over the years, AMD has mostly prospered, largely by offering itself as an alternative to Intel. But compared to Intel, AMD’s corporate strategies and management never appeared quite as steady. In 2006, AMD bought into the graphics processing segment of the market with the acquisition of ATI. In the summer of 2014, the company reorganized into two business groups: one was Computing, the other was Graphics. At roughly the same time, AMD hired Lisa Su as CEO. She encouraged the company to be far more ambitious. And just as AMD began delivering on those ambitions, Intel appeared to start stumbling.

At the Consumer Electronics Show a couple of weeks ago, Intel offered some details about previously announced next-generation products, but was vague about when they’ll all hit the market. In contrast, AMD introduced a slate of bold new products, most of them with specific launch dates. They included the Ryzen 4000 mobile CPU, the Threadripper 3990X high-end desktop chip (or HEDT) and the Radeon RX 5600 XT graphics card.

Mark Papermaster is AMD’s chief technology officer. He’s been helping to guide AMD’s technological roadmap for years. International correspondent Nitin Dahad caught up with Papermaster shortly after AMD’s presentation at CES. Nitin asked him about AMD’s new products.

MARK D. PAPERMASTER: It’s really an exciting time in the industry, because what you’re seeing is a confluence where we’re all in overload mode from the kind of data we’re being bombarded with every day. You know, you have IoT devices literally everywhere. It used to be that the big change was all of us being connected with cell phones, but now all the devices around us are smart, and they’re generating a ton of data. And in industrial applications as well: we put sensors on, and we’re creating a really inexorable demand for more and more high-performance computing. So that’s what we’re focused on at AMD.

That trend in the industry has driven us to put together a roadmap of a family of CPUs around our Zen architecture and a family of GPUs around our Radeon DNA architecture, and we’re rolling out extensions of our Navi product line. And it’s focused on keeping AMD on or ahead of a Moore’s Law pace of doubling performance at a given cost point every 18 to 24 months, even while the traditional factors that helped Moore’s Law, like new foundry technology nodes, are changing. We don’t get all the same benefit we used to from those technology nodes.

So that’s why it’s so exciting for us. It’s driving innovation. It’s driving us to take new approaches like chiplets: putting different components in different technologies and assembling systems with a chiplet approach, like we did on our new second-generation server parts and, as you saw, on the 64-core Threadripper that we just announced this week. That’s what excites the engineers at AMD: how to bring innovation to deliver enhanced experiences and computing capabilities for our customers.

NITIN DAHAD: Your CEO, Lisa, talked about the gains in the processor, being both from the 7 nanometer process, but also design architecture improvements. Are you able to say anything about that?

MARK D. PAPERMASTER: Yeah. Going forward, it is so important to, obviously, reap the benefit of each new technology node. And that gives us more density at each new node. So you saw in our Ryzen 4000 we were able to pack in eight second-generation Zen cores, eight graphics compute units, and optimize it and bring it all the way down to a 15-watt, ultrathin laptop form factor. So it is about marrying the architectures, the CPU and GPU architectures, with the process technology. But equally, it’s about that whole system optimization. So we work closely with OEMs, delivering the right form factors. And then, of course, the software stack, so they can deliver the end experience.

NITIN DAHAD: Is there room for new architectures do you think as you go forward?

MARK D. PAPERMASTER: Nitin, that’s a great question. With this just huge demand for more computing, what you’ll see is no abatement in the need for traditional CPUs and GPUs. You have all of the coding standards and the legacy of code that will demand more and more compute capability, and emerging applications that can leverage that economy of scale of our X86 CPUs and of our GPUs. Yet we are seeing workloads that can benefit from specialized architectures. And so what we’re doing at AMD is making sure that we can easily attach to startups and new architectures that are coming out there. For instance, to accelerate AI workloads that work in tandem with traditional computing architectures.

NITIN DAHAD: And when you say “new startups,” you’re talking about startups using the alternative architectures as well?

MARK D. PAPERMASTER: Sure. First of all, you have programmable devices like FPGAs that allow people to really experiment and try new approaches and new architectures. And then as those prove out, they’re often hardened into an application-specific IC. And so we would like to see a robust innovation ecosystem that can create these type of accelerator devices with our CPU and GPU products.

NITIN DAHAD: And does that mean like X86, ARM and RISC-V sitting together?

MARK D. PAPERMASTER: You have that today. You look at our security processor that we have across our chips. We partner with ARM. We leverage ARM TrustZone. And we have many microcontrollers on our devices, so you actually already have a robust mixed-architecture approach in devices today.

NITIN DAHAD: I’m going to ask you a question. Probably don’t expect an answer, but what can we see more of from AMD this year?

MARK D. PAPERMASTER: What you’re going to see is very exciting. We started a rollout last year of our 7 nanometer across our portfolio. We’re so excited this year to continue that. It starts with the Ryzen 4000, and extending our Navi product line. And you’ll see us really complete that portfolio top to bottom at the close of ’20. And we’re right on track on our next generation. So we’ve been very public on our Zen roadmap, and we’ve said that here we are shipping Zen 2 today, and our Zen 3 is right on track. And likewise on our graphics roadmap. We’re incredibly focused on our execution and getting each generation out to the market exactly when we promised it.

NITIN DAHAD: Mark, thank you very much.

MARK D. PAPERMASTER: Thanks, Nitin.

BRIAN SANTO: AMD certainly impressed people with the products it announced this month. Some reviewers went hyperbolic, using terms like “monstrous,” “insane” and “stupid fast” to praise the new processors. Moving forward, EE Times will stay on top of how AMD’s competitors – especially Intel – try to respond.

Imagination Technologies got an immense boost years ago when it became a key supplier of electronic components for Apple’s iPhone. Imagination took a commensurate hit in 2017 when Apple said it was going to wean itself from Imagination’s products over the ensuing two years. Well, those two years have elapsed, and then a funny thing happened: Apple came back and renewed the relationship at the beginning of January. The press release lacked details, however.

Nitin Dahad caught up with Imagination’s CEO Ron Black, and asked about the resumption of the relationship and how it happened. This was Black’s response:

RON BLACK: We said everything we could say in the press release.

DAVID HAROLD: Yeah, Siri is always listening. You know that, right? We can’t add any color.

BRIAN SANTO: That second voice you just heard belongs to David Harold, Imagination’s VP of marketing. Since it was clear that Apple had closed that conversational avenue, Nitin shifted gears, asking Black about the rest of Imagination’s business. The response was an interesting overview of how many of the other major technology trends are developing today.

RON BLACK: I’m particularly excited about the automotive industry and the advanced things that they’re doing. We have an outstanding position in automotive, and we’re looking to expand that. I think there’s a very interesting thing in driving those pixels, but also in the autonomous driving and ADAS functionalities. So it’s just making the car safer. And that’s the type of thing that we really like to do. Technology that benefits humanity. Makes a difference in people’s lives. And the automotive industry is doing that.

Now, we had a tough year last year. There were a lot of redundancies announced, but we’re quite confident it’s going to rebound, and it’s going to be a great opportunity for Imagination.

Besides automotive, we’re very interested, especially with the A Class GPUs, in looking towards the edge and data centers. The edge in a 5G context, we think, can change the computing paradigms, because you have high bandwidth and low latency everyplace. So some of the functionality that you think about doing today only in the data center or only in a mobile device, you can partition.

One of those is ray tracing. We have an outstanding position in ray tracing. We’ve been doing it for a decade or so. But we were early; we were too early. We actually had it on hold a little bit a few years ago, but we’ve been bringing it back over the last couple of years. We’ve opened it up for architectural licenses, for any GPU. We’re willing to license to anybody. I think it’s a great thing to take a license, because we have an incredible patent portfolio. That’s another area I’m investing in heavily: patents. (Or “pay-tents,” as the British say.) So ray tracing is interesting.

It’s not just ray tracing for gaming. It’s ray tracing like in automotive. The automotive vendors are really interested in ray tracing, because as you render the cars on the screen, that’s their brand. That’s associated with how they look at things.

So those are kind of the major vectors. We have a smaller initiative today in RISC-V. We’re replacing all of our MIPS and Meta cores, smaller microcontrollers that are in the GPUs. But I’m really interested in this space. Customers have been asking us to look at doing higher-end parts, so that’s an area we’re investigating. Watch that space.

NITIN DAHAD: I’ve seen a couple of different answers from you and a couple of other players, which I wrote about last year, behind the RISC-V ecosystem. So I’m guessing, if you’re going towards the edge and you need to do a lot of that stuff, is the legacy stuff too heavy in terms of the silicon real estate and things like that?

RON BLACK: I think RISC-V is a modern architecture, and the ecosystem has just gained a lot of momentum over the last three to four years. I’ve been looking at it closely for five years. Five years ago, I thought it was a little soon. Today, I think there’s just so much momentum behind it. And customers are very interested in looking at different aspects of it. All the way up into very high-end application processors.

DAVID HAROLD: I think it’s safe to say that the firmware-type processors that were in our earlier GPUs were pretty simple. But actually now we have automotive-grade products, and the requirements in automotive in areas like self-test and some of the security are more advanced, and you need a bit more processing in there. RISC-V really sort of fits the bill for that.

RON BLACK: Yeah. One of the things, Nitin, that you’re probably going to see the industry talk about a lot more, and certainly us, is secure, heterogeneous computing. Secure all the way down from the device. It’s a hardware root of trust. All the way up to the network, to the cloud, to the applications. And binding applications to specific devices. So that’s a big area of focus for us. And heterogeneous in the sense of CPU, GPU and then various accelerators, many for AI applications. We have a very powerful neural network accelerator that’s gaining a lot of traction.

And interestingly, at first I thought it was going to be mainly a hardware play, but it’s the software. The software’s incredibly complex. The ecosystems are still developing. And when customers come to us, it’s usually because they love the fact that we have an API that transcends the GPUs and the neural network accelerator. So if you have an algorithm that’s evolving, you can run some of it on the GPU. If it’s not evolving, you can really focus in on the neural network accelerator and get absolutely dominant PPA.

NITIN DAHAD: I think what I’ve heard from you now is that automotive is a big play for you. One doesn’t normally associate automotive with the traditional graphics pedigree that you’ve come from, but now it’s becoming clear.

DAVID HAROLD: You know what a phone and a car have as a similarity? It’s the heat. In a mobile phone, you don’t want something burning through your pocket. In the car, you can’t have hot spots. You can’t have risk of failure caused by heat, you can’t have a risk of fire caused by heat. So actually a car manufacturer is very interested in a lot of the same things that have made us so successful in mobile.

RON BLACK: I would say that most of the cars are moving away from the clusters with the normal dials, and they’re moving toward screens. And you look at some of the advanced cars. They have screens everyplace. Consoles, clusters, just the entire car. And so you are driving a lot of pixels in the cars. And then the functionality of course gets used for all the way up through autonomous driving-type functionality.

DAVID HAROLD: This isn’t a niche thing at all. The top three cars selling today use our technology.

NITIN DAHAD: I saw an amazing display yesterday. Three screens on a car. And pretty high-performance graphics. And that was being driven by one system-on-module.

DAVID HAROLD: It is. So we have this thing called “hyperlane” in the A Series, which basically lets you drive multiple displays. It treats your GPU as if it’s multiple GPUs. Up to eight, in fact. Which is basically us saying to car manufacturers, “How many screens do you need?” And they go, “Four.” We’re like, “Okay, we’d better double that.” So we have that ability to drive multiple displays, and I think that’s actually going to get more and more prevalent in a car. Some of the manufacturers we’re talking to think they’re going to put 400% more electronics into their cars over the next few years. It’s a massive growth opportunity.

NITIN DAHAD: I quite liked the hyperlane when I rode in it. It’s sort of like parallel processing gone crazy.

DAVID HAROLD: The heart of that isn’t the ability to have multiple workloads. It’s the ability to guarantee quality of service. So in the car, there are certain things that always have to work. Your speedo, for instance. If that isn’t telling you the correct speed, you have a serious problem. So you need to guarantee that workload will always be fulfilled.

But something else may be a lower priority, and it can once in a while drop its frame rate; the demands are different elsewhere in the system. And our ability to guarantee and to partition that is really what hyperlaning is all about.

NITIN DAHAD: Automotive is a big thing, but connectivity is also a big thing. You touched on the opportunity in 5G with the edge processing. Maybe ray tracing at the edge. So tell me a little bit about the vision for connectivity this coming year.

RON BLACK: We changed the strategy when I joined the company. They had previously wanted to divest the Ensigma product line. I didn’t think that was the right idea. There were a lot of premier accounts interested in it. And connected microcontrollers are now all in vogue. And clearly we’re focused on a very specific set of applications. That’s mostly around power-efficient WiFi and Bluetooth Low Energy. We’re not trying to do kind of mobile device things. It’s really about the broader market and IoT connectivity. We’ve narrowed the focus of the team, and the customers are just responding incredibly well. There are a lot of innovative people looking at us. And that’s probably the biggest theme about Imagination. I’ve known the company for many, many years, but I didn’t really appreciate how innovative it is.

And there’s so much innovation. We just created IMG Labs. So it’s a labs function whose focus… Today, a lot of it is on ray tracing, extensions of ray tracing. But there’s a security thrust, there’s the heterogeneous computing thrust. We’re really looking to hire all of these big-brained PhD types to complement all of the staff that we have. So it’s about innovation. And in many ways, which is nice about it, it’s fun and socially responsible innovation. Because the product sets that we have… Think gaming. It’s fun. But then we can use the same GPUs to make the cars smarter so that they avoid collisions. So it’s socially responsible. That combination is truly unique.

NITIN DAHAD: In terms of business, is it still very much still third-third-third US-Europe-Asia? Or are you sort of like steering towards Asia now because of a lot of what’s happening?

RON BLACK: Asia’s actually a smaller part. The US is still the largest, then Europe, then Asia. China, in general, is a very interesting area for everybody, just because of the investment that they’re putting into it. And when you’re building all these fabs, and all sorts of semiconductor companies are popping up, you need IP. And most of the time, things like GPUs are just so hard. We have upwards of a thousand people working on these things. It just doesn’t make sense for every company to do that itself. It makes sense for companies to license it.

So we see China as a very big area. We’re investing heavily in China to grow, both in an R&D standpoint and commercial customer support across all of the product lines.

NITIN DAHAD: I know ARM and SiFive, they’ve both set up separate trading entities now in China. Have you done that?

RON BLACK: We’ve always just had a China headquarters there. So it’s not a separate entity. I don’t really think that that’s the appropriate way to set it up. In fact, several customers have told us they prefer it not to be that way. Because you don’t want to have two independent companies trying to accomplish the same thing. You really want to bring the full bulk of the company. We are very decentralized in management style. We push decision-making down. There’s more autonomy. We agree on the objectives, and then everybody executes ruthlessly in that direction. So there’s a lot of autonomy in the regions, but we really put the whole company behind it.

NITIN DAHAD: Okay, lastly. New decade. Everybody’s made lots of predictions. What’s your vision of the coming decade? And then maybe you can relate it back to Imagination if you like as well.

RON BLACK: I hate the kind of “vision” type of statements. I always found that with CEOs who pontificate about vision or get their faces on magazines, you should probably short those companies, because they are spending too much time on that. Our focus is more on ruthless execution around where we have demonstrable value to customers. So it’s a lot of customer focus, listening to the customers, and then just ruthlessly executing. I think we’re proving that we’re doing that. So that’s really exciting.

I think clearly, as Dennard scaling and Moore’s Law fall off, there’s going to be a change. A five-nanometer ASIC will cost half a billion dollars. There aren’t a lot of companies that can do that. So I think chiplets are the way to go. I’ve been working on chiplets in one form or another for six years, since I believed that was the way to go. We have internal designs that we’re undertaking now in a chiplet context. So I think that’s going to be a major trend.

You know, the other part that I think is super important (I’m not ready to announce anything yet), but you’re going to see us do a lot more in software. It’s a very natural thing when you think about it: our customers are semiconductor companies, but a lot of times they ask us to work with them, say in the automotive space, to help the car manufacturers tailor their ECUs to leverage our GPUs. So that’s a very natural extension, to offer tools and technical support to the customer’s customers. And that naturally evolves into software businesses.

NITIN DAHAD: Interesting. I’ve heard this quite a lot: chiplets. I just interviewed another CTO yesterday who was talking about the heterogeneous architectures we’re going to see a lot more of, especially in the data center as well as at the edge.

RON BLACK: Absolutely. We see that across the board.

NITIN DAHAD: Well, Ron, thank you very much.

BRIAN SANTO: You just heard Ron Black say that Imagination is going to replace all of its MIPS IP with RISC-V cores. What does that mean, exactly?

Recall that Imagination Technologies acquired MIPS in 2013. Imagination’s big ambition then was to become an IP powerhouse – similar to Arm – having both graphics and CPU cores under one roof. But that dream never came true, and only four years after acquiring MIPS, Imagination turned around and sold it to Tallwood Venture Capital.

In 2018, Wave Computing bought MIPS from Tallwood, with a plan for MIPS to go open-source. Wave itself went under last fall.

Black was referring to “MIPS cores used as firmware processors in PowerVR GPU designs.” So Imagination will be replacing those with RISC-V cores developed by Imagination in house.

Earlier in the podcast, we mentioned a handful of the once-mighty semiconductor companies that disappeared over the years. Black’s comment to Nitin means that another famous name in semiconductors – MIPS Technologies – is about to join those others by the wayside.

Earlier this week, our colleague Bolaji Ojo was named publisher and editor-in-chief of Aspencore. Aspencore is the global publishing company that owns EE Times. Bola is a veteran journalist who has covered the electronics industry for so long he won’t tell us precisely how long it’s been. Every once in a while, he comes and visits us here at our luxurious studios at EE Times Central, and every time he visits, he’s got some big ideas to unload. Here we are with International editor Junko Yoshida.

JUNKO YOSHIDA: Bola, what’s on your mind?

BOLAJI OJO: Well today, Junko, first I’d like to kind of celebrate Aspencore. I think that’s probably the first thing on my mind. You were at CES with Brian and with David Benjamin and Nitin Dahad and a whole host of teams from Aspencore. We were probably one of the largest technology media teams at that event. So kudos to the team! You guys really rock! You did a fantastic job out there.

Now: In the spirit of “What have you done for me today?”, I can tell you that what’s on my mind next, as your co-conspirator Editor-in-Chief, is where I’m sending my reporters next. So, Junko, you are off to Barcelona to attend the Mobile World Congress. Brian is going to be holding down the fort back home on the West Coast of the US. Nitin Dahad is going to be at Embedded World with Sally Ward-Foxton. Maurizio Di Paolo Emilio, our famous Italian engineer, is also going to be at the Embedded World event in Germany. So we are fanning out across the globe. And the reason we’re fanning out across the globe is that we need to know what’s happening, we need to represent our readers and we need to tell them all about the new things that are going on in the world of technology.

Now, in order for us to do that, though, we should go back in time. All right? Brian, I’ve got a question for you. How many decades can you count now, if you can just kind of approximate the timing of when the semiconductor industry could be said to have coalesced? How many decades now?

BRIAN SANTO: Now, my initial answer was six decades, going back to the early ’70s with AMD and Intel. But you’ve already told me that I’m wrong about that.

BOLAJI OJO: And the person who corrected me: Malcolm Penn, the CEO of Future Horizons, which is a research firm based in the UK. So I was in London last week attending that event, and Malcolm, he’s not in his 80s yet, but he has seen several decades of this industry, and he’s somebody worth listening to.

So one of the questions that he asked at that event had to do with what is driving the semiconductor industry today. So that’s part of what’s on my mind. And rather than I give you his answer, I’d like to ask Junko. Junko, what do you think is driving the semiconductor market today?

JUNKO YOSHIDA: You know, I think it’s new devices and new applications that are enabled by new devices. Brian said five, six decades ago. We didn’t even have a smartphone. Think about that. All the electronics that go into that little device. We didn’t even think the PC was going to be that big 60 years ago, right? So I think it’s the new generation of devices and the applications that those devices enable.

BOLAJI OJO: Apps. Apps, apps, apps. Apps are the ones driving the semiconductor market today. You nailed it, Junko! Of course I expected you to do that. You nailed it! And the reason why it’s different today: This is not your old market from the 1970s or the ’80s driven by the PCs or smartphones. It’s the apps. And the reason why… Malcolm’s conclusion was that the semiconductor market is more and more driven by the global economy nowadays. And the reason for that is the apps. Those apps are the ones that have now cut across all segments of the economy.

On your app, you can book a flight. You can buy insurance. You can buy a car. You can have Amazon ship you anything anywhere that you want. There’s a lot more that we do on these apps. They’ve basically cut a line right through the economy.

So now the question for me: When you said, “What’s on your mind?”, data. That’s what’s on my mind. Because it’s generated by all of these apps. And the problem… There are humongous problems arising from the usage of data. One of the things that Malcolm said, Malcolm Penn said at this event, I thought was kind of interesting, and it kind of sets the tone for what we’re going to have to deal with as an industry in the future. He said, and I’m quoting, “It is unrealistic to expect data to be perfect. If data is not perfect, we’ve got a problem on our hands.”

So what is the reasoning? Why is data not perfect? And what happens when data isn’t perfect? Are we being realistic in looking for perfect data? Junko, you write about automotive all the time.

JUNKO YOSHIDA: What do you mean by data not being perfect? Do you mean data not being perfectly captured? Or data has a lot of noise? What do you mean by that?

BOLAJI OJO: All of that. It’s not perfectly captured. So even the sensors. You write about autonomous vehicles all the time. The sensors. What kind of data are they getting? And has it been properly interpreted? Who’s processing it? How much of it can any device process at any point in time? How do you use it? And how do you use it across the economy? How do you use it at home? How do you use it in engineering? How do you use it in banking? How do you use it in finance? How do you use it at all, if it is not perfect? If you cannot be sure that the data you’ve collected can be relied upon, do we know if it can be relied upon?

BRIAN SANTO: It’s the old garbage in/garbage out conundrum. That’s been around for a long, long time. And just by virtue of the fact that we’re relying on even more data and collecting more data, the danger of having garbage in or garbage out, either way, I imagine that compounds the problem, right?

JUNKO YOSHIDA: Yeah.

BOLAJI OJO: But we also depend on the data. So here’s the problem: So let’s suppose that from the collection point, the data is clean. So you put it in storage. And then somebody decides to go in and tamper with the data. That’s a bigger problem. Right, Junko?

JUNKO YOSHIDA: Yeah. Exactly. I think you bring an interesting point here. One is that, How do we protect data? But also, How do we, I think you mentioned, interpret? But annotation data, it’s been known in the world of AI, it’s very difficult. Right? You annotate data, somebody has to look at each data perceived by sensors and say, Oh, this is a man; this is relevant; this is not relevant. Some human has to go in to make that decision. And if that decision is not perfect, again, data is not perfect.

But one of the things that I was talking about with Brian is that when Brian and I were at CES, I was totally amazed at what a big presence Amazon Web Services had. AWS was everywhere! Amazon had its own huge booth, but AWS also had its own huge booth in the automotive segment that was in the North Hall. Right? It’s right in the middle. I mean, it goes to show what I was saying: Data has become the core of the business, and whoever hosts that data on the web has enormous power now.

BOLAJI OJO: Everything that we do, everything that engineers do now, of course hardware remains important. But I believe that the core of today’s economy is, How much data can the hardware capture? And how will it be used?

So just leaving the issue of data aside for now, this is going to be a beautiful year as far as semiconductor growth is concerned. Unit shipments are going to be up. Revenue is also projected to go up. Now, there are folks that are saying 9%. Malcolm Penn is saying 10% minimum. And he’s saying it could be as high as 15 to 20 percent. Wow! That’s a good year ahead. Where should your money be if you are in the semiconductor market? Are you supposed to focus on memory? By the way, average pricing, going up. Memory is back. And when memory is back, semiconductors will get a good year. I’m certain of that.

But where is memory going? Going back into data. This is our life today.

BRIAN SANTO: Well, I think that’s where you put your money. You’ve got to spread your money. Because you’ve got to put the technology where it’s being used, and it’s being used literally everywhere. A lot of it is the smartphones, but you still need data collection points, you need data transmission points, you need all sorts of infrastructure to make sure that the data that’s running the economy, or at least underpinning the economy, is able to be collected and moved and processed. So you’ve got a lot of different points all along that chain. So when you say “memory explodes,” wonderful. But I think RF front ends and processing, the whole shebang, should probably do well. It’s on an upward trajectory. If it’s this year, hallelujah. But it’s probably good for the next few years, too, I would imagine.

BOLAJI OJO: Personally, my money’s on power and energy. Because that’s the connecting wire for everything that you’ve talked about. Now, if I’m going to put my money on power, you know where I’m going to put it? In developing economies. They don’t have enough of it. It’s like the old story: Man lands in a place where they don’t wear shoes, and he’s like, “Please!” He calls headquarters and says, “Send me all the shoes you can! We need shoes here!” In developing economies, that’s where power is needed the most.

In the West, they already know how to generate power. So what is the argument? What’s the discussion? What’s the controversy in the West? In Germany, where I am today, the controversy is what type of power.

Okay, I’m going to bring my daughter in for a quick story. We were in Africa in December, and somebody said to her, “Well, everybody in Africa wants to move to the West. They want to move to Europe and America because they think the streets are paved with gold.” And this kid said to me, “Why would they want to move?” If you’re in a place where the streets are not even paved at all, you just want a paved road. Forget about it being paved with gold. You just want paved roads. Let’s start there.

It’s the same thing. If I have money to invest in power– which is going to drive data, which is the reason why we’re developing so much more silicon and hardware in electronics– I would want to put that money into developing economies. Because that’s where they need that power to run the next set of electronics in this industry. That’s what’s on my mind.

It’s a happy mind this week, by the way, Junko. Nothing to complain about. We’re looking at some good growth ahead. I’ve got my editors spread all over the world. We’ve got a great story at Aspencore. Watch this space, Junko. Watch this space.

BRIAN SANTO: And now the moment you’ve all been waiting for: our weekly celebration of the anniversaries of great moments in electronics history.

Here’s the first time we’ve had a pair of related anniversaries. On January 22nd in 1984, a little computer company with a frivolous name bought a commercial spot during the Super Bowl. In the ad, Apple announced it would introduce its Macintosh personal computer in two days. The voiceover said the introduction would ensure that the year 1984 would NOT be like the novel “1984.” The bleak images of resigned conformity to authoritarian control were instantly interpreted the way Apple wanted them to be: as a reference to the leading personal computer maker, IBM. Apple immediately established the anti-authoritarian cred it has maintained to this day. The ad went a very long way to help the company win the loyalty of people who work in the arts, a small but well-defined community that Apple deliberately catered to.

The ad was immediately recognized for what it would turn out to be: one of the most famous ads in history. It aired on national TV just once. Now, a 30-second spot during that Super Bowl sold for $368,000, according to one source. A 30-second ad in this year’s Super Bowl, by the way, will go for $5 million.

Anybody remember who played in that Super Bowl? I’m giving you a few seconds here. Yeah, I see only a few of you in the studio audience are raising your hands. So who was it? Yeah, that’s right. The Los Angeles Raiders and the defending Super Bowl champions, the Washington Redskins. The Raiders won.

The director of the ad was a fellow named Ridley Scott. You might have heard of him; he’s still making successful movies.

The paired anniversary, with the anniversary of that ad? The actual introduction of the Macintosh computer two days later, on January 24th.

(BEEP SOUND EFFECT)

That was the amazingly brief startup ping from the first Mac, which came in a beige cabinet that measured roughly 14 by 10 by 11 inches. It was built around a 7.8-megahertz Motorola 68000, and it came with 128K of RAM. It wasn’t the first PC built to showcase a graphical user interface, but it was the first successful one. Or… relatively successful. It was priced at about $2,500, which made it far more expensive than many other machines – five times more expensive than a Radio Shack TRS model. It also didn’t have a lot of software.

Subsequently, Apple tried more unconventional marketing campaigns, but it kept fumbling. Its missteps included an ill-advised follow-up to the “1984” commercial, referred to as the “Lemmings” ad, which most potential customers found insulting. The Macintosh sold well into the education and publishing markets, however, and though those were not the biggest markets for PCs to target, in terms of paving the way for the ongoing “cult of Mac,” the Macintosh was an unqualified success.

That’s your Weekly Briefing for the week ending January 24th. This podcast is Produced by Aspencore Studio. It was Engineered by Taylor Marvin and Greg McRae at Coupe Studios. The Segment Producer was Kaitie Huss.

The transcript of this podcast can be found on EETimes.com. You can find a new episode every Friday on our web site, or through any of the most popular places for podcasts. I’m Brian Santo. We’ll see you next week.

Weekly Briefing: Friday, January 17, 2020

BRIAN SANTO: I’m Brian Santo, EE Times Editor in Chief, and you’re listening to EE Times on Air. This is your Briefing for the week ending January 17th.

In this episode…

A company called Prophesee has developed a completely new way to capture video with what it calls an event-based sensor. At the recent CES show, we caught up with Prophesee’s CEO, Luca Verre. Today you’ll hear our interview with him.

Also, the Consumer Electronics Show. It’s vast. CES 2020 was last week. EE Times editors saw more products and technologies, and sat in on more sessions, than we had time to write about. We got together to discuss some of the most fascinating things we saw at the show, including the Prophesee event-based sensor, autonomous boats, data privacy chips, quantum computers, smart toilets, automated cocktail shakers, farm equipment, AI-powered toothbrushes… and more!

Prophesee is a startup based in France that is one of the companies pursuing an interesting new twist on video cameras. It’s unconventional enough to merit a brief explanation before we move on. The approach is called event-driven sensing, and it is fundamentally different from the way moving images have been captured since the beginning of motion pictures and television.

For more than a century, film and video cameras have captured a series of still images, one after another, at brief intervals of time. Displaying the video depends on flickering through each still frame, one after another; once you get past a certain minimum frame rate, human eyes perceive the progression of images as uninterrupted motion.

The basic idea behind event-driven sensing is to capture and record only what has changed in the scene in front of the image sensor from one moment to the next. EE Times has been interested in this company’s technology because event-driven cameras could offer one of the answers, if not all, to the fundamental problems this industry is facing: how to handle big data inside advanced vehicles or any other connected, embedded devices.
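The principle can be sketched in a few lines of Python. This is only an illustration of the idea (in a real event-based sensor, each pixel reacts asynchronously in hardware; it is not Prophesee’s actual design): a pixel reports an event only when its intensity changes by more than a threshold, so a mostly static scene produces very little data.

```python
# Illustrative sketch of event-based sensing: instead of emitting full frames,
# each pixel reports an (x, y, timestamp, polarity) event only when its
# intensity changes by more than a threshold since the last snapshot.

def frame_to_events(prev, curr, threshold=0.15, t=0):
    """Compare two intensity snapshots and emit (x, y, t, polarity) events."""
    events = []
    for y, row in enumerate(curr):
        for x, value in enumerate(row):
            delta = value - prev[y][x]
            if abs(delta) >= threshold:
                polarity = 1 if delta > 0 else -1
                events.append((x, y, t, polarity))
    return events

prev = [[0.5, 0.5], [0.5, 0.5]]
curr = [[0.5, 0.9], [0.1, 0.5]]   # only two pixels changed
print(frame_to_events(prev, curr))  # → [(1, 0, 0, 1), (0, 1, 0, -1)]
```

With only two changed pixels, only two events come out instead of four full pixel values; scaled up to a megapixel sensor watching a mostly static road scene, that is where the bandwidth savings come from.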

So – Prophesee was at last week’s Consumer Electronics Show, and international editor Junko Yoshida caught up with the company’s CEO, Luca Verre.

JUNKO YOSHIDA: You’re showing some interesting demonstrations here, so I just want to get the lowdown from you. One of the first things that I saw when I came in… What were you showing? That was an HD version of your event-based sensor. So tell me a little bit about that.

LUCA VERRE: We have a new sensor generation, which is an HD sensor, so one million pixels, 720p. This is the result of a joint cooperation with Sony, which will be published at ISSCC in February in San Francisco.

JUNKO YOSHIDA: I see. So right now what you have as a commercial product from Prophesee, that is VGA-based.

LUCA VERRE: Yes. The commercial product we have is a VGA sensor. It’s in mass production. We are currently shipping it for industrial applications.

JUNKO YOSHIDA: This is sort of an interesting development here. ISSCC is essentially taking your paper. So we’re not talking about commercial plans, I understand. But obviously you guys had worked together. It’s a joint development that has been going on for some time to produce this paper, right?

LUCA VERRE: Yes indeed. There has been some research work done together with Sony. Yes, Sony is indeed interested in event-based technology, but unfortunately I cannot tell you more than that.

JUNKO YOSHIDA: In terms of actual applications, they are keeping mum. They’re not telling us. All right. Or you can’t tell us. All right. So why do you think that HD is more important than VGA? I mean, in what sort of instances do people want HD?

LUCA VERRE: HD is important because, of course, increasing resolution enables us to open more doors to applications, to look farther, to farther distances, for automotive applications, for example, as well as for some industrial IoT applications. Also, one of the main challenges we have solved in moving from the VGA sensor to the HD sensor is the capability now to stack the sensor, to use a very advanced technology node that enables us to reduce the pixel pitch. So we can actually make the sensor much smaller and more cost-effective.

JUNKO YOSHIDA: So can I say automotive is one of your key markets that you’re gunning for?

LUCA VERRE: Yes indeed. Automotive remains one of the key verticals we are targeting, because our technology, event-based technology, shows clear benefit in that space with respect to low latency detection, low data rate and high dynamic range.

JUNKO YOSHIDA: And what are you hearing from OEMs about what problems they really want to solve when they decide to work with you?

LUCA VERRE: When both OEMs and Tier Ones work with Prophesee and event-based technology, the main pain point they’re trying to solve is reducing the amount of data generated in Level Two, Level Three, or Level Four and Five applications. Because in both cases they’re looking for redundant solutions or multi-modal systems that generate a huge amount of data. So by adding an event-based sensor, they see the benefit of reducing this bandwidth by a factor of 20 or 30.

JUNKO YOSHIDA: Yeah. You said something interesting. You say that the car companies can probably continue to use frame-based cameras, but your event-based camera can be used as sort of like an early-warning signal?

LUCA VERRE: Yes. Because our sensor is always on, there’s effectively no frame rate. And it’s capable, with low latency, of detecting relevant regions of interest. Then we can use this information to steer the attention of other technologies, maybe lidar or radar.

JUNKO YOSHIDA: Right. So the object of interest can be spotted much faster. Okay. So the one thing that I took away from one of the demonstrations you showed me was that comparison of the use of frame-based cameras versus event-based cameras in terms of AEB, automatic emergency braking. And I think you mentioned that… What was the… American Automotive…

LUCA VERRE: The American Automobile Association. They published some numbers showing that, in the case of AEB, the current systems fail to detect an adult in 69% of cases, and fail to detect a child 89% of the time. So it’s still a great challenge, despite the fact that AEB will become mandatory a few years from now. So we did some tests in controlled environments with one of the largest OEMs in Europe, and we compared side by side a frame-based sensor with an event-based sensor, showing that, while the frame-based camera system was failing even in fusion with a radar system, our system was capable of detecting pedestrians in both daylight and night conditions.

JUNKO YOSHIDA: Nice. What’s the cost comparison between event-based cameras versus frame-based cameras?

LUCA VERRE: The sensor itself is very similar, because it’s a CMOS process, so the cost of the silicon, sold in volume, is the same as a conventional image sensor. The benefit you bring is more at the processing level, because the reduction in the amount of data comes with the benefit of lower processing power.

JUNKO YOSHIDA: Right. So it has an impact on the rest of the system. Right. Very good.

BRIAN SANTO: Prophesee is one hot little startup. Back in October, the company raised another $28 million in funding, bringing its total to roughly $68 million. Among Prophesee’s strategic partners are Renault-Nissan and Huawei.

International correspondent Nitin Dahad, roving reporter David Benjamin, Junko, and I were all at the recent Consumer Electronics Show, roaming the halls, ferreting out the most interesting new technologies and stories. We all covered a lot of ground, and we saw a lot of things we didn’t get to write about for one reason or another. So Nitin, Junko, and I got together on a conference call to go over some of it.

So, Nitin, tell us what the experience of CES 2020 was like.

NITIN DAHAD: Wow. Okay, so it was my first CES after ten years, and boy was it different. It was big. And when I entered the hotel, I thought, where is the hotel lobby? This is just a casino.

BRIAN SANTO: Yeah. That’s Las Vegas, isn’t it? There’s always a casino in the lobby. So, Junko, you’re a veteran of CES. What was your CES 2020 like?

JUNKO YOSHIDA: I wouldn’t want to divulge how many years I’ve been to CES! Many, many years. Decades, actually. This CES was interesting. Every year, we see a whole bunch of new stuff, and the crowd was just as big as ever, but I think I picked up a few trends that are going to be important for the next ten years.

NITIN DAHAD: I would also like to say that I went straight in, obviously, on the Sunday to the Tech Trends and then the CES Unveiled. I have to say, there’s a whole display of what is possible and what is not possible, I guess, at CES. It was actually hard work and fun to look at all this. Because having been in technology for so many years, it’s really nice to see some of those end products.

BRIAN SANTO: I agree, but it is a grueling show. Because it’s just so immense. There are three huge halls in the Las Vegas Convention Center. There are usually three or four hotels, all filled with stuff. It’s a lot to take in. That said, it’s a lot of fun to see what the industry has come up with in a year. Junko, tell us what you saw.

JUNKO YOSHIDA: Okay. I would like to focus on two things, actually. One is, you may not think this is cool, but one I think is the protection of privacy. Another thing is big data.

Let’s start with the protection of privacy, because I think everybody wants to talk about big data. And everybody thinks big data is a given. But nobody is really doing anything to protect privacy, except for the EU coming up with GDPR, right? So how do you make sure that your products are not inadvertently violating GDPR? Or, if you’re a consumer, how do you protect yourself so that everything you’re saying to Alexa or Siri, and all your interactions with smartphones and wearables, your private data, are not going straight to the cloud and being shopped around by the people who manage data? Right? So it was interesting that I came across a company called Declok. The technology is a piece of hardware, actually, to de-identify yourself. This is the kind of technology that Silicon Valley startups would never think of doing, because they think their business is collecting data. But this company, a small company in Taiwan, thinks this is going to be big in the era of GDPR. Everybody wants to protect their privacy. So why don’t we put this little chip in a dongle and stick it into the smartphone, or, over the long term, put the chip inside any embedded system, so that the technology lets the data aggregators see the forest but not the trees.

BRIAN SANTO: And what does it do? Does it anonymize the data somehow before it gets sent?

JUNKO YOSHIDA: Yeah. They use a random number generator so that they can give the trend but they hide all the private information. Apparently someone from DARPA was interested in it and came by. And I said, well, I don’t know if it’s a good thing or a bad thing. They might want to reverse engineer this thing. Because if you’re in law enforcement, it’s also a kind of nightmare, right?
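The random-number approach Junko describes resembles well-known noise-injection techniques for de-identification. Here is a hedged sketch of the general idea, the “see the forest but not the tree” effect; it is an illustration only, not Declok’s actual algorithm:

```python
import random

# Sketch of noise-based de-identification (illustrative, NOT Declok's method):
# each device adds zero-mean random noise before reporting, so any individual
# reading is masked, while the aggregate trend stays close to the truth.

def deidentify(value, scale=10.0, rng=random):
    """Mask a single reading with zero-mean uniform noise in [-scale, scale]."""
    return value + rng.uniform(-scale, scale)

true_values = [70, 72, 68, 71, 69] * 200      # e.g. heart-rate samples
reported = [deidentify(v) for v in true_values]

true_mean = sum(true_values) / len(true_values)
reported_mean = sum(reported) / len(reported)
# Each individual report can be off by up to +/-10 units,
# but across 1,000 samples the two means land very close together.
print(round(true_mean, 1), round(reported_mean, 1))
```

A data aggregator working from the `reported` stream can still recover the population trend, but cannot trust any single masked reading, which is exactly the trade-off Junko describes.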

BRIAN SANTO: Okay, true. Yet in the US, we take it as a given, we assume we have certain privacy rights and protections. And those privacy protections and rights aren’t necessarily codified into law in other countries around the world. So this is certainly an issue around the world. But even in the US, it’s a bigger concern than most people might realize.

JUNKO YOSHIDA: Yeah. Big data. This also connects with big data, because as I walked around and talked to different people, I realized that everybody, without saying so in so many words, is struggling with big data, right? I cover automotive, and automotive uses a lot of different sensors, including computer vision, lidars and radars, and especially lidars. Oh my God! That point cloud that it generates, the amount of data is so huge. So the question is, if you have to deal with so much big data within the embedded system, how do you deal with it? You can put a big CPU or GPU inside the machine to process the data, or you could add AI engines to filter it. But the truth is, you really need to figure out a way to extract the information you need before it goes to, say, video compression or sensor fusion.

And there’s a little company in Berlin called Teraki. They were showing this technology. That was kind of interesting, because Teraki’s piece of software can run on safety microcontrollers like Infineon’s AURIX or NXP’s BlueBox. I thought that was pretty cool.

BRIAN SANTO: That is cool. So it’s doing some preprocessing. It’s pulling information out of the data before sending it along?

JUNKO YOSHIDA: Exactly. So it’s a piece of software that works at the very edge, but it has the ability to filter out the noise and extract the information that’s needed for later AI training or sensor fusion or whatnot.

BRIAN SANTO: Fascinating. So, Nitin, you were exploring all over CES, and you found a boat?

NITIN DAHAD: I certainly did. That’s one of my themes, autonomous is not just for cars, as we’ve been talking about over the last few years. There’s a company called Brunswick, which pressed me quite hard to go to their press conference, I guess to talk about the first-ever boat unveiling at CES. When you look at it, actually, it’s an electric boat– luxury, 40-foot electric boat, with obviously backup motors and everything. But it can run for eight hours with all the stuff that you use on there as well. And it’s got all the usual stuff that you’d see in the cars: the lidar, the sensors, the communication systems. So what I think the CEO of Brunswick was trying to impress on people at their briefing was, it’s really almost the same as your cars, and that’s why we’re at CES. We want to show off our boat.

Obviously there was other stuff as well in terms of autonomous. Autonomous features: there was an e-bike which had blind-spot assist, which may not sound new, but they’ve already got a production model. It’s an Indian startup, and I think they’re crowdfunding at the moment, if I’m right. Basically, when a truck or a car or even a cow comes up beside you, I guess this thing buzzes on your handlebar to say there’s something on the side.

And then there was the autonomous cannabis grower. Actually, what they had in the booth was growing cannabis, but it can grow any herbs or any plants. And it’s using a Raspberry Pi and Arduino camera module, some environmental sensors, and they’ve got some training modules. And they said it takes, in this cabinet, three months to grow cannabis, and probably anything else. I don’t know how long it takes to grow cannabis, but it is quite interesting the way they’re using basic computing, sensors and AI. And that’s kind of a theme I was seeing everywhere, actually. You use very simple stuff. And this company, by the way, was called Altifarm. Again, it’s a startup from India. I just happened on it because I was at Eureka Park, happened to go into…

BRIAN SANTO: Yeah, just happened.

NITIN DAHAD: And then the other thing I suppose was, is this really the best use of tech? When I was walking past the Amazon booth or space, whatever you call it, they had a Lamborghini, which they said was the first car with Alexa control built in. Do you really need it? I’m not sure.

JUNKO YOSHIDA: Well, there are a lot of chip companies like Qualcomm that actually demoed Alexa in a car last year. So I think it’s going to spread. It’s hands-free control, right? It’s going to make sense.

BRIAN SANTO: Yeah. You’re in your Lamborghini tooling around at a hundred kilometers per hour in Monaco. That’s when you need to ask Alexa to find you a restaurant that serves the best Portuguese bacalhau. Because you don’t want to take your eyes off the road because you’re screaming around corners in your sleek performance machine, right?

JUNKO YOSHIDA: Right.

NITIN DAHAD: The reason I sort of picked it up is because I had also interviewed a Formula One driver from McLaren, Lando Norris, and you’ll see the video online. But one of the things he said is, he wants that control to be able to sort of steer the car, move it around, whatever. So I’m not sure where the Alexa features come in. But I’m sure there’s got to be some real uses for it. It’s just that it seemed quite extravagant to have Alexa in a Lamborghini.

BRIAN SANTO: Well, speaking of extravagance and Alexa, didn’t one of the two of you see a bed equipped with Alexa? With a smart bed?

NITIN DAHAD: Yes.

JUNKO YOSHIDA: That was Nitin. Yes. Right?

NITIN DAHAD: So again, in the same space where they had the Lamborghini, Amazon was also showing smart beds.

BRIAN SANTO: Why?!?

NITIN DAHAD: You know what? I didn’t dig into it. Why would you want somebody snooping in whatever you’re doing in the bedroom? I’m not sure.

BRIAN SANTO: And supposedly there are sensors in the bed? They’ll record your sleep metrics like how much you toss and turn? That sort of thing?

NITIN DAHAD: Yes. Actually, when I walked through one of the booths, in The Sands, I think it was, they had the whole smart home section, which actually was quite big. And I walked past a smart bed, and out of curiosity I just said to the person demoing it, Okay, so if I lie on there now, will it measure my vital signs? And she said, No. It’ll do the processing and then it might give it to you in the morning. So I wasn’t sure what the use of that was. And that was my point of saying, Is this really the best use of tech? Or what’s the point of it? But I guess it’s got to evolve.

BRIAN SANTO: How many things that don’t require electricity now are we going to end up plugging in?

JUNKO YOSHIDA: I know.

BRIAN SANTO: So after the past few years, we’ve just seen company after company introduce things at CES for improving sleep or improving the quality of sleep. And they all cite statistics that sleep deficits are very common. And they’ve introduced headbands and helmets and nose plugs and all sorts of other devices aimed at helping you sleep better. Are you guys having that much trouble sleeping?

NITIN DAHAD: I do. But I don’t think I’ll be plugging into something to say, Put me to sleep. Although, actually going on the plane, I managed to find a piece of music that did actually help me sleep.

BRIAN SANTO: So maybe we’ve got an assignment for you next month, Nitin.

So we’ve all seen espresso makers, the ones that work with the pods. And those made sense, because if you want to make an espresso, you need a special machine anyway. And it’s a big process. But this year, one of those companies introduced a cocktail mixer.

JUNKO YOSHIDA: Ooooo! Really?

BRIAN SANTO: And as much as I was happy to get a cocktail– because I really needed a Moscow Mule at 4PM on the floor of CES that day. How hard is it to pour a cocktail that you need another machine to do it for you?

NITIN DAHAD: I actually walked past that and I avoided it.

BRIAN SANTO: The other thing I walked past and avoided completely were the smart toilets. I mean, I don’t want to plug in my toilet. And frankly, I don’t want it to be that intelligent either!

NITIN DAHAD: Well, in Japan, when I went to Tokyo a few years ago, I did see some what you would call “smart toilets” maybe.

JUNKO YOSHIDA: It’s the issue of cleanliness. And the Japanese are meticulous about how clean it must be. So it’s not just about toilets, but how you clean yourself. Everything to do with personal hygiene. The Japanese are really meticulous about that.

NITIN DAHAD: I didn’t see the one in CES, but I’m guessing if they’re smart toilets, they’ll probably analyze your excretions to see sort of how well you’re doing.

BRIAN SANTO: I saw something like that last year. It had a camera, and it was AI-based, and it looked inside baby diapers and analyzed what had come out so you can presumably make better decisions about what goes in.

JUNKO YOSHIDA: I think what it really comes down to is, just because you can, doesn’t mean it needs to have that technology.

BRIAN SANTO: Exactly.

JUNKO YOSHIDA: Yeah. That’s really too prevalent at CES. It’s just a little too much.

BRIAN SANTO: Yeah, I don’t know that I need to plug in a toothbrush just so that it can give me teeth brushing metrics.

NITIN DAHAD: I did actually try that. I went to the Procter & Gamble booth and tried it. And for me, it was kind of a revelation, because the week before I actually did go to the dentist, and he told me off for not brushing properly. So what this did was allow me to figure out where I wasn’t brushing right. It maps on the phone where I was and wasn’t brushing. Because it’s using accelerometers and position sensors to determine where in the mouth it is relative to your start point. So that’s how it measures where it’s going. Would I spend $200+ on that just to tell me if I was doing it right? Or should I go to the dentist and get him to tell me off? I don’t know.

BRIAN SANTO: Well, yeah, you’ve convinced me.

JUNKO YOSHIDA: Sold!

BRIAN SANTO: For the right price, maybe an electric toothbrush is worth it. So any other observations from your peregrinations around CES?

JUNKO YOSHIDA: You know, Brian, you should talk about what you found, actually, because it sounds like there’s a whole bunch of startups doing some interesting stuff. But a lot of times, do we need all these technologies crammed into The Sands? No offense to anybody. They are creative, but at the same time, there’s too much navel gazing going on. Too much information, in my opinion.

But there were all sorts of surprisingly non-consumerish technologies on display at CES, right, Brian?

BRIAN SANTO: Oh, yeah. I was amazed by this. Really great to see, but there’s no way that the IBM quantum computer can in any way, shape or form be considered consumer electronics. That said, for whatever reason it was at CES, and it was cool to see IBM’s Q quantum computer in the grand lobby there.

JUNKO YOSHIDA: That central hall.

BRIAN SANTO: It looks like something out of a sci-fi movie. I mean, it’s in this glass-enclosed case…

JUNKO YOSHIDA: Beautiful.

BRIAN SANTO: It’s beautiful! It’s cryogenically cooled; it’s vacuum sealed; it’s this gold, gleaming, tiered mechanism with pipes and connections, and they’re all symmetrically shaped and placed. Yeah, it’s just this wicked cool looking device.

As for what IBM’s Q does: there are different ways to do quantum computing, but IBM’s Q takes a particular approach where they look at electron spin. The electron is the quantum, right? And the way it was explained to me is, you think of the electron as a globe, and you assign values to points on the globe. So it’s spinning on its axis. And the top of the axis, you assign that a value of one, and the bottom of the axis, you assign the value of zero. And every other point around the globe represents some other value. So you’ve got an almost infinite number of values between one and zero.

So you physically add these electrons into the Q. And each electron is considered a qubit, a quantum bit. And you cool them down progressively from physical layer to physical layer, from top to bottom. The top layer in the machine is cooled to maybe a few kelvin. And as you drop from tier to tier, you’re cooling it more and more, down to millikelvins. And what happens is that the electrons get less active as they get cooled down. And they physically collapse as they move down through these tiers, and they collapse toward specific values. And IBM and the researchers they’ve been working with have developed sophisticated algorithms that predict the behavior of these electrons as they drop through the layers well enough to exploit the physical process as a kind of model that they can map algorithms onto. Frankly, the mathematics are way beyond me. But the upshot is that you can do amazing things with the IBM Q computer.
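The “globe” in that explanation is what physicists call the Bloch sphere: a qubit’s state is a point on it, with the poles as the classical 0 and 1 and every other point a superposition. A minimal sketch in Python, assuming only the standard textbook parameterization (not anything specific to IBM’s hardware):

```python
import cmath
import math

# Bloch-sphere sketch of a qubit (illustrative, not IBM-specific):
# |psi> = cos(theta/2)|0> + e^(i*phi) * sin(theta/2)|1>
# North pole (theta=0) is |0>, south pole (theta=pi) is |1>,
# and every other point on the sphere is a superposition of the two.

def qubit(theta, phi):
    """Amplitudes (a0, a1) for the Bloch-sphere point at angles (theta, phi)."""
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

def prob_zero(state):
    """Probability that a measurement collapses this state to 0."""
    return abs(state[0]) ** 2

print(prob_zero(qubit(0.0, 0.0)))          # north pole: reads 0 with certainty
print(prob_zero(qubit(math.pi, 0.0)))      # south pole: essentially never reads 0
print(prob_zero(qubit(math.pi / 2, 0.0)))  # equator: 50/50 between 0 and 1
```

The “collapse toward specific values” Brian describes corresponds to measurement: the continuum of points on the sphere gives a qubit its richness during computation, but reading it out always yields a 0 or a 1 with the probabilities above.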

The example that they’ve been using for a couple of years is a caffeine molecule. Now, on a scale of molecules, caffeine isn’t all that complex. But modeling a caffeine molecule is more than most regular supercomputers can do quickly. And that’s what a quantum computer can do. It can do these amazingly sophisticated, highly complex calculations pretty rapidly.

Now that was known. But what was new was that IBM is building a quantum computing farm up in Poughkeepsie. They’ve got 15 of these machines there so far. And they’re providing access to them, as they’ve been providing access individually to each individual quantum computer. It’s sort of like quantum as a service. And one of the interesting things was, I asked the researcher there whether you can gang these together. He wasn’t even sure why you might want to gang them up, but they’re exploring ways to make that possible. He was explaining there’s new research that allows them to take the quantum activity and switch it over to photonics, so that you end up with optical computing to gang the 15 Qs, or more as you go forward. It’s really fascinating stuff.

And then… Now for something completely different from a quantum computer. I wandered over to the John Deere booth.

JUNKO YOSHIDA: That’s a huge machine they have!

BRIAN SANTO: I know! It’s enormous!

JUNKO YOSHIDA: It’s a spray machine?

BRIAN SANTO: That’s it. So I’m mostly an urban kind of a guy. And walking into that John Deere booth, looking at all this farming equipment, I had to learn a whole new vocabulary. Like, What’s that big thing over there? And it’s like, Oh, that’s a combine.

JUNKO YOSHIDA: Let’s start from there, right?

BRIAN SANTO: Yeah. They had to train me from the basics just to talk to them about it. So what was there was a sprayer. And a sprayer is an enormous machine, roughly the size of the sanitation vehicle that picks up your garbage in cities. And it has on either side these two long booms, I’d guesstimate maybe 30 feet long. And set along these booms, at intervals of maybe eight or nine or ten inches, were these spray nozzles. And those are for applying fertilizer or herbicide. So they’ve had sprayers with cameras on the booms for a while now, but what they’re developing now– and they think they’re about two years away with this thing– is that they’ll have each nozzle with a camera, and each camera will have its own processor.

Now John Deere is becoming a tech company. They identify themselves as a technology company now. And they feel their strength is developing algorithms. So what’s coming is an AI-based sprayer. And the idea is, they’ll be able to drive these sprayers through the fields, and at the speed they’re going, each camera on each boom will be hovering over any given plant for roughly a third of a second. And in that time, this AI-based camera will determine whether it’s hovering over a soybean plant or a weed, or a corn stalk or a weed. It’ll have that third of a second to decide which it is and whether or not to hit it with an herbicide.
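The per-nozzle decision loop described above can be sketched as follows. This is a hedged illustration, not John Deere’s implementation; the classifier here is a trivial stand-in for whatever model runs on each nozzle’s processor, and the one-third-second budget is the dwell time from the text:

```python
import time

# Sketch of a per-nozzle spray decision (illustrative, NOT John Deere's code):
# each nozzle's camera gets roughly a third of a second over any given plant
# to classify it and decide whether to release herbicide.

TIME_BUDGET_S = 1 / 3   # approximate dwell time per plant at field speed

def classify(image):
    """Stand-in for the onboard model: returns 'crop' or 'weed'."""
    return "weed" if image.get("jagged_leaves") else "crop"

def nozzle_decision(image):
    """Return True to spray, False to hold, within the time budget."""
    start = time.monotonic()
    label = classify(image)
    elapsed = time.monotonic() - start
    if elapsed > TIME_BUDGET_S:
        return False          # missed the window; default to not spraying
    return label == "weed"    # spray herbicide only on weeds

print(nozzle_decision({"jagged_leaves": True}))   # weed detected: spray
print(nozzle_decision({"jagged_leaves": False}))  # crop detected: hold
```

Defaulting to “don’t spray” on a missed deadline is one plausible fail-safe choice; it is what makes the claimed 90% reduction in chemicals possible, since the nozzle only fires on positive weed identifications.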

Now the value of this is that they'll be able to target herbicides and fertilizers. With herbicides, instead of just spewing chemicals over the entire field, they think they might be able to cut the application down by 90%. And with fertilizer, similarly, by being able to target the plants, they'll reduce the overall amount of chemicals farmers have to use in the fields, and save money that way.
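For readers who want a concrete picture of the logic described above, here is a minimal, purely illustrative sketch (this is not John Deere's actual software): each per-nozzle camera gets roughly a third of a second to classify the plant below it and decide whether to spray. The class names, the stand-in classifier, the confidence threshold, and the time budget are all assumptions for illustration.

```python
"""Illustrative sketch of a per-nozzle spray decision under a time budget.
Not John Deere's implementation; names and thresholds are assumptions."""

import time
from dataclasses import dataclass

TIME_BUDGET_S = 1.0 / 3.0   # ~1/3 second per plant, per the interview
WEED_CLASSES = {"weed"}     # spray only what's classified as a weed


@dataclass
class Detection:
    label: str        # e.g. "soybean", "corn", "weed"
    confidence: float


def classify(frame) -> Detection:
    """Stand-in for the on-nozzle vision model; a real system would run
    a neural network on the camera frame here."""
    return Detection(label="weed", confidence=0.92)


def decide_spray(frame, min_confidence: float = 0.8) -> bool:
    """Return True if herbicide should be applied, within the time budget."""
    start = time.monotonic()
    det = classify(frame)
    elapsed = time.monotonic() - start
    if elapsed > TIME_BUDGET_S:
        return False  # missed the window; default to not spraying
    return det.label in WEED_CLASSES and det.confidence >= min_confidence


print(decide_spray(frame=None))  # with the stand-in classifier, prints True
```

The key design point the interview highlights is the hard real-time constraint: a decision that arrives after the nozzle has passed the plant is useless, so the safe default is to withhold the chemical.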

NITIN DAHAD: There’s one more thing about that, actually. Because it’s not just about the reduction in chemicals. Look at the bigger picture: a growing world population, how do you feed them? From my travels, I’ve seen quite a lot of work on precision agriculture. One of the startups I saw in Taiwan last year was doing exactly that, and they’ve got contracts in Europe to help people make food production more efficient, basically.

BRIAN SANTO: Right. Right. I sat in on a session about using technology to make people’s lives better. The speakers included someone from the World Bank, which is investing in this sort of thing, someone from the government of Bangladesh, and another person representing the government of Colombia. The idea is to find ways to use emerging technologies like AI and 5G to improve people’s lives. And it’s really inspiring.

JUNKO YOSHIDA: In theory, I guess.

NITIN DAHAD: Actually, no. There is practice now.

BRIAN SANTO: So the examples they gave in Colombia: People in Colombia are slapping up homes, in some cases, using whatever materials are at hand, and the problem is that many of these homes are substandard. It’s questionable whether they can stand up in an earthquake, a landslide, or even a hard rain. So the government’s goal is to spend money to help bring some of these homes up to standard. But the question is, Which ones? They want to target their limited funds on homes that really do need to be brought up to par.

So what they’ve done is, they’re sending out vehicles, sort of like the vehicles that do the mapping for Google Maps; it’s a similar idea. They take video, imagery of all the homes they pass by, and use that to build a map of the communities they’ve gone through. Then they apply AI to evaluate each individual home: what materials were used in building it, what condition it’s in. By doing that, they can take those maps and figure out where within those communities they should be targeting their investment money. That’s just one example.

Bangladesh, for instance, is dealing with climate change. It’s an exceptionally serious thing for Bangladesh, because if climate change is not checked relatively soon, roughly a third of the country could be underwater in a couple of decades. And as it is, natural disasters that used to come every 100 or 500 years are coming every year or every other year now. Floods, especially. So what they need is an early warning system. They’re trying to use modern communication systems, and they’re trying to use AI to predict where natural disasters might strike and how severe they might be, all for the purpose of setting up early warning systems.

So there are some really practical applications of all these high tech systems being employed right now, today.

NITIN DAHAD: The UK could possibly be underwater as well if it goes on like this. Some coastal areas.

BRIAN SANTO: Yeah. New York City, Los Angeles, Miami.

NITIN DAHAD: You were talking about the homes and analyzing the materials to make sure that they’re safe. That actually brings us back to some of the technologies we talk about in EE Times, because I’m seeing people using things like sound and the resonant frequencies of certain materials to identify them. And radar: Vayyar Imaging is doing a lot of that, getting the signatures of various things so that they can do fall detection and lots of other things. I don’t have any financial interest in them, by the way; that just came to mind.

It is about using all the different technologies we talk about in, say, things like autonomous vehicles, but they’re actually being used in lots of other areas as well.

BRIAN SANTO: I wrote a story a couple of years ago about the city of Pusan in South Korea. They’re on the Sea of Japan, at the confluence of two major rivers, up against some hills. So they’ve got flash floods, the potential for tidal waves, the potential for landslides. What they’ve done is install sensors all over the city so that they can get early warning of natural disasters and warn their citizens of whatever natural disaster might be occurring. These are modern applications of high technology. It’s incredibly useful, can actually save lives, and it’s invigorating and exciting to be able to report on all of these technological advances.

I encourage you to check out our reporting from CES, which includes articles from the entire EE Times staff, some great photojournalism from David Benjamin, and several podcasts. Peruse the web site at eetimes.com, or take a look at the handy-dandy list of links we have on our web page dedicated to this podcast.

Welcome, everyone, to the year 2020. And now let’s leave it. It’s time to enter the Wayback Machine to revisit some of the great moments in electronics history.

January 14th, 1914 saw the first product roll off the first industrial assembly line. It was, of course, a Model T automobile produced by the Ford Motor Company. The assembly line was one of the great innovations of the industrial age. A hundred years later, it’s one of the fundamental enablers of producing everything from computer chips to Pringles potato chips – which I deliberately mention because I only just recently found out one of those odd pieces of trivia that I live for: one of the people who helped create the machine used to make Pringles was author Gene Wolfe, perhaps best known for his four-volume “Book of the New Sun” series.

And okay, we absolutely have to include this one. On January 11th in 2001, Dave Winer, then the CEO of Userland Software, was the first to demonstrate a specific tag for RSS feeds that would pass the URL address of a media file to an RSS aggregator. He called the tag “Enclosure.” He created it at the request of former MTV VJ Adam Curry, who was experimenting with what – at the time – was called audioblogging. Audioblogging is now commonly referred to as podcasting. And here we are.

This month, we’re also celebrating the birthday of HAL the computer, which in the film “2001” reported that it became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. His instructor was Mr. Langley, and Langley taught HAL to sing a song. If you’d like to hear it, you’ll have to watch the movie, because we don’t have the rights.

Finally, on January 13th in 1910, the first public radio broadcast was transmitted. The engineer behind it was Lee de Forest, whose Radio Telephone Company set up the transmission live from the Metropolitan Opera House in New York City. It was a performance of an opera called Cavalleria Rusticana, and it featured a tenor by the name of Enrico Caruso. The sound quality was said to have been miserable, but the broadcast radius was several hundred miles, reaching into Connecticut and heard by ships far out at sea. Here’s a separate recording of Caruso, captured also in 1910, singing an aria from that opera.

(ENRICO CARUSO)

BRIAN SANTO: That’s your Weekly Briefing for the week ending January 24th.

This podcast is Produced by AspenCore Studio and was Engineered by Taylor Marvin and Greg McRae at Coupe Studios. The Segment Producer was Kaitie Huss.

The transcript of this podcast can be found on EETimes.com. You can find a new episode every Friday on our web site, or via your favorite app for podcasts. I’m Brian Santo. See you next week.

CES 2020 Day 3 Recap

BRIAN SANTO: I’m Brian Santo, EE Times Editor in Chief, and you’re listening to EE Times on Air. This is day three of our special series of podcasts reporting live from the Consumer Electronics Show in the Mojave Desert. In the past couple of years, the automotive industry has dominated CES, and this year it’s happening again.

In today’s episode:

Qualcomm made some headline news, announcing it is burrowing deeper into the automotive market. We’ve got an interview with the Qualcomm vice president in charge of the company’s automotive operations.

…did we mention that Qualcomm is getting deeper into the automotive market? One of our colleagues has test-driven a new Qualcomm-powered autonomous vehicle. We’ll report on that.

…also, live interviews with executives from Infineon and with Texas Instruments about adding autonomous functions to cars equipped with driver assist capabilities, also referred to as ADAS.

…an analysis of a novel approach for autonomous vehicles from Intel’s Mobileye unit…

…and finally, Toyota surprised show-goers expecting the company to talk about cars. The company skipped right past that subject and on to planning entire smart cities.

Some of the biggest news in automotive at this year’s CES was Qualcomm’s formal announcement of its “Snapdragon Ride Platform.” The new platform got instant credibility with the announcement that GM is Qualcomm’s new partner on assisted driving technology.

Qualcomm long ago established itself as a supplier for connectivity and infotainment systems in the automotive sector. The market for vehicle automation seemed like an obvious next step, and investors have been wondering if the company was going to take that step.

Qualcomm put an end to that question by rolling out the new Ride Platform, which it described as “scalable.” By noting that GM is now working with Qualcomm on ADAS, Qualcomm is also hoping to let the world know that the world’s largest mobile chip company has a foot in the door in the ADAS market.

Junko caught up with Nakul Duggal, Qualcomm’s Senior Vice President & Head of Automotive Product & Program Management, right after the company’s press conference in Las Vegas. She asked him to break down what exactly the Snapdragon Ride Platform entails, what Qualcomm means by “scalability,” and whether the relationship with GM will have any impact on GM’s partnership with Cruise, which has a focus on fully autonomous driving.

NAKUL DUGGAL: We look at the automotive business in four ways. There is the telematics and connectivity business, which we’ve been in for a long time. That is a business that we understand quite well. We started the Snapdragon digital cockpit business about five years ago, and that is doing exceptionally well. Between the telematics and the digital cockpit business, we now have over $7 billion in our design pipeline. We have over 19 automakers. We are leading in many of these areas. So that business is going really well.

We announced two new areas of our portfolio today. The first one was the Snapdragon Ride platform, which is an autonomous driving– an ADAS– platform. It includes an SoC, an accelerator and the autonomy stack. This is an area we’ve been working in for a number of years, and we finally announced it. What is going to differentiate us in this space compared to competitors is the scalability. We will address everything from Level 1 to Level 5. There’s the efficiency of the platform that we are building from a power perspective: these are much more efficient compared to competing solutions. And then the stack is a reference stack. You can use our stack, you can use components of the stack, or you can bring your own stack. If you don’t want to use ours, our platforms, the SoC and the accelerator, are completely open. So this scalability approach, to us, is a big differentiator. We saw a lot of success when we did the digital cockpit business with scalability. And General Motors has announced a partnership with us across all three of these domains: telematics, infotainment and ADAS. We are very proud to see that relationship move forward.

And then finally, we announced a new service called Car-to-Cloud, which is essentially designed to let an automaker manage the vehicle after it has been deployed. So managing in terms of being able to make updates to the car, being able to unlock new capabilities, and making flexible configurations so you don’t have to change the hardware. The only way that you change it is actually through updates in software.

So these four areas I think put us in a very compelling position to have a very highly differentiated portfolio for our automaker and Tier One customers.

JUNKO YOSHIDA: All right. You know, keeping this conversation high level: what I’m gathering from various players in the automotive chip market is that, in the past, and even at present, a lot of car companies have been making what they call “band-aid” solutions. A new feature comes in, they do a chip solution from one chip company; then another new feature comes in and, okay, they switch to another company or another Tier 1 who can provide some new solution. So this has been fraught with variability. How do you think your Snapdragon Ride platform can answer that question?

NAKUL DUGGAL: One thing that we have always done differently is that, when we engage with customers, we don’t engage transactionally for one generation. We look at the requirements for the business that we are getting in across the board, for every tier, and then we design our portfolio accordingly. The Ride platform is very similar to the cockpit platform or the telematics platform in that respect. We understood the requirements all the way from the entry level to the most advanced systems, and we have built the portfolio to be very scalable. We start from 30 TOPS, which could address Level 1 requirements, all the way up to 700 TOPS for Level 5 and even beyond, if you need to get there. We understand that there is no reason for the automaker to fragment their partner base, their supply chain, across different suppliers for different tiers. And the reason automakers have had to deal with this so far is that they don’t find scalable platforms that address their needs at the right cost point, at the right power-efficiency point, and that really deliver that scalability. These are very difficult solutions for any one supplier to deliver across the board. At Qualcomm, our strategy has always been to be scalable, and that, I think, is what is going to get away from what you describe as the “band-aids” that have been implemented so far.

JUNKO YOSHIDA: All right. Well, let me ask you this: You did mention your new relationship with GM. I mean, you’ve always had a relationship with GM, but you are now expanding the relationship all the way to ADAS. How does that square with GM’s strategy? They’ve been working with Cruise on autonomous vehicles, and if indeed the Snapdragon Ride platform can address all the way to Level 4+ or Level 5, what’s going to happen?

NAKUL DUGGAL: So let me make sure that we explain this quite clearly. On the partnership that we announced with GM: we’ve had a partnership on telematics for a long period of time. We announced a partnership on the digital cockpit SoCs, and now also on the ADAS SoCs. The stack that we are introducing for the Ride platform is a Level 2 stack: Level 2, Level 2+. And the GM partnership is really around making sure that we have the ability to work with a partner like GM across domains like telematics, but now also new domains like infotainment as well as ADAS, to essentially get around some of the challenges that you were mentioning earlier, where you have to keep switching from one generation to another. Here our relationship is based upon the fact that we have scalable platforms. We deliver a level of capability and service to our customers, and as they have seen that we are a supplier that can be trusted, that they can rely on, that is meeting their commercial, technical and power requirements, they’re looking at our platform, and it’s something that makes sense. I cannot comment on the other programs that GM is obviously working on.

JUNKO YOSHIDA: All right. And as Qualcomm, when will you make Level 3 and Level 4 stacks available to your potential customers?

NAKUL DUGGAL: So we have the Level 2+ stack available now, actually. And we are going to be continuously making updates to these Level 2 stacks, which will, in our belief, evolve to more capable stacks. We get asked the question, Do you support sensors like lidar, etc? We can support all sensors. The stack is capable of supporting all sensors.

We also keep the cost of these sensors very carefully in mind. So as the sensors get to a cost point where they become broadly available, we are happy to add any sensors. As far as when we get to the next level of stack capability, I think it will be a progression. We believe that the stack we are delivering today is very compelling; automakers and the Tier Ones who are interested in it can start to work on it, and we will work with them to evolve the capability along with their own teams.

JUNKO YOSHIDA: Does your platform actually include some of the network capabilities? In-vehicle networking: we’re talking about bringing more data inside a vehicle, we’re talking about whole-car software upgrades, and so on. You probably need PCIe, gigabit Ethernet, that kind of stuff. What sort of network processing is part of your Snapdragon Ride Platform?

NAKUL DUGGAL: So we support Ethernet. We support PCIe on the Snapdragon Ride. We support all of the updates for all of these platforms. So really anything that is needed to run the platform on the automotive bus for local internetworking, all of that is available. And then of course, on top of that, any software that is needed, any drivers, any update solutions, all of those are included as part of the platform.

JUNKO YOSHIDA: Okay. Very good. Thank you so much.

NAKUL DUGGAL: Thank you.

BRIAN SANTO: As if working with GM isn’t enough, Qualcomm burnished its ADAS bona fides by bringing its highly autonomous vehicle to Vegas. Jim McGregor, our friend at Tirias Research, got a chance to ride in it. And this is what Jim told us:

He said, “The ride was on a highway in Vegas. The car did well at merging, navigation and even avoiding an aggressive Camaro that cut us off and almost spun out. Overall, it was a comfortable ride, and the system appeared to operate very well,” he said.

Of course, other chip giants in the autonomous vehicle segment – such as Nvidia and Intel/MobilEye – had already been there and done that. The market is getting crowded.

Leading up to CES 2020, veteran automotive chip suppliers such as NXP Semiconductors and Texas Instruments also announced new chips. Both claim their respective product families will help OEMs and Tier Ones to modernize the current vehicle architecture. The goal is to enable car companies to do over-the-air upgrades, for example, to become software-upgradeable vehicles a la Tesla.

NXP calls its chip the Vehicle Network Processor. TI also announced a similar gateway processor and ADAS processor. Both companies are known for their grounded views on highly autonomous vehicles.

Junko sat down with TI on the first day of the official opening of the CES show floor. She asked TI what it takes to be a trusted chip supplier to the automotive industry.

The first voice from TI you hear will be Sameer Wasson, vice president and general manager of TI’s processor business unit. The other voice belongs to Curt Moore, general manager and product line manager for the company’s Jacinto processors.

JUNKO YOSHIDA: What are the basic, unique requirements to be in the automotive market as a chip company?

SAMEER WASSON: You’ve got to understand the need from a safety, reliability and longevity perspective. Bringing that together, expressing that in semiconductors, understanding the system, getting to those unique corner cases, those problems: that makes automotive very unique. And transferring technology from a different market over here without the correct part, without the correct architecture, without the correct time over target and R&D, often leads to bad results.

JUNKO YOSHIDA: Let me interject. Talk about bad results! Sameer, you and I were talking about this earlier; you said this industry, meaning the automotive industry, has a long memory. Tell us a little bit about that.

SAMEER WASSON: It has a long memory because this technology stays in vehicles for a very long time.

JUNKO YOSHIDA: “A long time” like 15 years?

SAMEER WASSON: Easily. Things will change. Some components change faster, some components change slower. But there is technology in cars that stays there for a very long time. So when you’re thinking of it, a mistake made in one step can impact you for multiple years. And that mistake doesn’t necessarily have to be catastrophic. It could be a poor systems choice. It could be a sub-optimized system where the cost is not really where it needs to be, and that prevents it from scaling to where it needs to go and being accessible for everyone. So our mission in life is to make sure that when we’re making this technology, it does something which is scalable and accessible, and to not make any of these sub-optimum decisions which would prevent us from getting to that mission.

JUNKO YOSHIDA: And they are unforgiving. Once you make a mistake, can you crawl back into the same OEM?

SAMEER WASSON: It’s not easy.

JUNKO YOSHIDA: Not easy.

SAMEER WASSON: It’s not easy.

CURT MOORE: And I think, as the automotive market is changing, as processors are moving into more and more mission-critical applications inside the car, it’s becoming more and more important to really think about the system and how you implement the safety for these mission-critical systems, and not retrofit something.

JUNKO YOSHIDA: Not retrofitting it. You know, Sameer, I think you were the one who mentioned that there are two schools of thought on getting into this autonomy business. I mean, we’re talking about everything from Level 2 cars to Level 4+ or Level 5 cars. There are two schools of thought: one is a top-down approach, the other a bottom-up approach. Tell us the difference and where TI sits on that spectrum.

SAMEER WASSON: The difference is, you can take a leap of faith, go build a general-purpose computer and say, Let me just brute-force my way to a high level of autonomy. We are not doing that. We are taking a more nuanced approach, using a heterogeneous architecture, so that we are using different components of processing to solve specific problems. If you need general-purpose compute, we have general-purpose compute. If you need specific vision accelerators, because they give you the lowest power, we’ve invested in that. If you want to do machine learning and signal processing, we’ve invested in that. So we definitely believe in a more bottom-up approach of building that technology and then letting it scale as high as you want.

Bottom line is, safety is paramount. To make safety happen, you’ve got to have the technology accessible for the masses. And our technology enables you to do that, because we don’t need fancy cooling, for example. We don’t need anything which requires you to say, How do I architect this differently because my machine is just bigger? We scale from really small to really big. Really big is important, but being able to scale is more important.

JUNKO YOSHIDA: Let me play devil’s advocate. Some people say there’s a beauty to the top-down approach because, as you mentioned, the OEMs always want the next shiny object. This year they might be focused on certain things; next year they’ll focus on something different, whether it’s a heads-up display or audio or whatever. So the chip companies are put in a position of providing one solution at a time, which the automotive companies want, but at the same time, according to the top-down advocates, that ends up being a band-aid solution. Band-aid after band-aid. So you eventually get a really unwieldy platform, and the decision you made this year could, five years from now, turn out to have been the wrong decision. What do you say to that?

CURT MOORE: I think the key thing that we’re really looking at is, How do we deploy ADAS to the masses? Deploying ADAS to the masses has a huge cost component to it. If you just start from the high end down, you’re not going to be able to hit those cost points, or enable the system cost points that allow car OEMs to deploy these ADAS features to their entire fleet. And that’s really where we can impact safety: by getting these ADAS features into lower-cost cars. Because when you think about it, those are going to be the largest number of cars, often driven by people with less experience, younger drivers and so on. And that ADAS functionality can really have a big safety impact. So attacking those low-cost systems and getting that ADAS functionality in, with cost in mind, is going to make the roads safer.

SAMEER WASSON: To your point, top-down has its advantages if you’re looking at scaling to the top end of that market. Absolutely. But our strategy is, we want to get all of the market. And scaling up is sometimes easier than scaling down. Scaling up means you put in two of our solutions, and they’re software-compatible and they scale.

JUNKO YOSHIDA: I see.

SAMEER WASSON: Scaling down, once you have that infrastructure built into the chip, comes at the cost of either cost or power. Simple. It’s physics, right? So we have taken the approach of saying, Let’s take a Lego-block approach to this and build it up.

The other thing is, no two OEMs are the same.

JUNKO YOSHIDA: Ah! That’s a good…

SAMEER WASSON: No two car lines in an OEM are the same. Think of a very high-end OEM. They make really, really good cars, but they have some scale. They go from your economy car to the highest-end car. Your economy car may only have one SoC, and that is your front camera, your centralized compute and sensor fusion, your automated parking. How do you get it to do all that at that price point? It’s a very different price point. Secondly, the power; they don’t have the budget to port fully. The same OEM may also have that very, very high-end vehicle. Now for anyone catering to them, whether it is us or a Tier One, that software investment is the most expensive one. So we’ve got to come up with… and that’s what, quite frankly, makes our job fun. How do you tackle this puzzle? Our strategy is, let’s go build Lego blocks. Let’s build Lego blocks that, when put together, can get you the highest level of performance the OEM needs, but make sure you’re catering to each of those segments in a thoughtful manner.

CURT MOORE: And the ones that are going to be most cost-sensitive are going to be the lower-end vehicles. So that’s where we want to make sure that we optimize the system BOM, because that’s the one that’s going to be critical.

JUNKO YOSHIDA: Very good. Thank you so much.

BRIAN SANTO: In yesterday’s podcast, Junko interviewed NXP’s CTO Lars Reger about Ultra-Wide Band technology. She also asked him how NXP would compete with new entrants to the autonomous vehicle market. Reger’s response was, “We think we can completely complement each other.”

Recall, too, that Qualcomm was once planning to acquire NXP in hopes of expanding its business into the automotive market. The deal was never consummated, but Reger said, “NXP can work with a number of other chip companies such as Nvidia, Kalray or Qualcomm. Their primary focus is on AI. We’ve got the rest of the solutions, ranging from vehicle networks to security and safety.”

Next, Junko sat down with Peter Schiefer, Infineon’s division president responsible for automotive. The various ways to implement autonomy in vehicles actually occupy a spectrum. If robotaxis are at one end of the spectrum, Schiefer sees a growing trend of carmakers bringing down “some use cases” of Level 3 and Level 4 cars into what used to be more run-of-the-mill ADAS vehicles.

JUNKO YOSHIDA: All right. I’m here with Peter at the Infineon booth, and we were just talking about this whole autonomous vehicle market we’ve been hearing about every year at CES. At CES 2020, we see the air has changed. I mean, thank goodness the full-fledged hype around autonomous vehicles has kind of subsided, right? And we’re talking about two different approaches to autonomy. Explain how you see the different industry players approaching autonomy differently.

PETER SCHIEFER: Yeah, basically what I see is that the market is building up in two approaches. One approach is the fully automated kind of Level 5 car, which from my perspective will start mainly in commercial use cases, where you can, for example, replace drivers. There you can also justify the technology which is needed in order to enable such a function. However, the number of these cars will be limited to those commercial use cases. Then on the other hand, with the complexity and complication around approvals of a Level 3 car, I see a lot of companies now taking single use cases out of the Level 3 or Level 4 car and pulling them into a Level 2 car. Then only incremental technology needs to be added to enable the function, and it delivers a benefit to the car owner.

JUNKO YOSHIDA: Okay. Give me some specific examples of use cases from Level 3 and Level 4.

PETER SCHIEFER: That use case could be, for example, a parking support. So if you want to park the car, then this parking function can be one. The second one could be a highway pilot.

JUNKO YOSHIDA: Oh, okay. So it’s totally autonomous driving.

PETER SCHIEFER: Yes. For a certain situation, for a certain period of time.

JUNKO YOSHIDA: Okay. And these did not use to be considered part of Level 2, right?

PETER SCHIEFER: Right. In the classical definition, this was not considered. And I see it changing now; you may even call it Level 2+ when you add these kinds of use cases. And here it’s a lot about the sensors, and also the dependable compute power to enable these functions.

JUNKO YOSHIDA: All right. We’re talking about the general trend in the automotive industry. Where does Infineon sit? How do you enable it?

PETER SCHIEFER: Basically, in the autonomous car you are replacing the eyes, the ears and the brain of the human driver, and that’s what semiconductor technologies are enabling. It’s all about very precise sensing, and we are a leader in 77 GHz radar. It’s about dependable compute power, and this is all about our functionally safe microcontrollers. And as these cars are connected to the outside world, it’s also about cybersecurity. There is no safety without cybersecurity, and hardware trust anchors for protection against cyber attacks are key. And this is a key focus at Infineon.

JUNKO YOSHIDA: Tell me a little about dependable compute power. What’s the downside? And how do you mitigate it?

PETER SCHIEFER: On the one hand, most people, when they talk about autonomous cars, are talking about the big number cruncher. But there’s more than that. You need to have a fully fail-operational and fail-safe system. That’s why you need a functionally safe microcontroller next to the number cruncher, and this is what Infineon provides. But it’s more than that: the whole system needs to be dependable. For example, if you are driving very fast on a highway and the power supply gets disconnected, then the computer cannot calculate. So a very reliable and dependable power supply is key to the overall safety and security of the system.

JUNKO YOSHIDA: All right. Very good. Thank you so much.

BRIAN SANTO: Of course, for many of us covering the automotive sector at every CES, we don’t feel complete if we don’t attend Mobileye’s press conference. It’s traditionally a one-hour lecture on the latest technology advancements from Professor Amnon Shashua, president and CEO of Mobileye, which is now an Intel company.

At a time when most automotive companies have been striving to achieve “redundancy” by fusing data from a variety of sensors – vision, radar, lidar and others – Mobileye this year discussed a way to create redundancy by using only photographic cameras, but running the incoming data through different types of neural networks.

Junko got help from Phil Magney, founder and principal at VSI Labs, to break down Mobileye’s new proposal.

JUNKO YOSHIDA: We just came out of Mobileye’s press conference, and Phil and I were talking about, it’s almost like having been in a classroom for one hour! So what was your biggest takeaway, Phil?

PHIL MAGNEY: Well, it’s hard to come out of a press event like that and not feel impressed. And I feel like it’s very authentic, and I feel like there’s a lot of science, a lot of very pragmatic information presented in this announcement, in this release.

JUNKO YOSHIDA: That’s true. It was less fluff, fewer marketing one-liners, and more substance. Okay, let’s break it down, because there are a few things that surprised us, right? One thing was Professor Shashua talking about how the use of cameras can actually go a long way in driving towards autonomy. Tell me a little bit about what you saw in terms of their heavy use of cameras. How has it been evolving?

PHIL MAGNEY: Yeah. I completely agree with you. Obviously, it’s a very camera-centric approach. It uses many cameras. But what I like about it is, we’re starting to see a diversification of AI algorithms used to go after a problem in multiple different ways. And that’s how you’re able to create a little bit of redundancy through the camera solution. So it was pretty impressive that even though they are still using cameras, through a couple of different neural networks they’re able to really kind of simulate what you could do with lidar.

JUNKO YOSHIDA: Yeah, some of the pictures they showed were interesting. So in one stream they make heavy use of cameras. But there is a separate stream: if companies choose to do so, they can bring in radar and lidar, and that could also provide the redundancy. Is that what they said?

PHIL MAGNEY: Yeah. I think basically it’s going to be up to the customer, really. What’s the customer going to be comfortable with? I think the fact of the matter is that you can do a terrific job of creating Level 2+ automation with a camera-centric solution. I think the proof is there that it can be done. Obviously, another company that’s very successful with that is Tesla. But honestly, not every OEM is going to be 100% with that, and so they’re still probably going to want to use certainly radar and possibly even lidar as well if the prices come down.

JUNKO YOSHIDA: Let’s talk a little bit about a very impressive YouTube video he shared with the audience today. He was saying that driving in Jerusalem is “Boston plus,” which sounds pretty deadly to me. But tell me, what did we see in that video? What was so impressive about it?

PHIL MAGNEY: Well, I think basically they were showing the ability to be agile when faced with a lot of situations… Like in that video, it’s showing a tandem bus, which is very, very big. So it occludes a lot. So you have to take that into consideration. And they’re showing how it’s coping with the pedestrian. But it’s also showing a little bit of assertiveness as far as its ability to be agile, because the fact is that if you are overly cautious with your AV stack, you’re going to be painfully slow. And that’s going to be bad for everyone.

JUNKO YOSHIDA: Who would pay for a robotaxi when it’s too slow to get there, right?

PHIL MAGNEY: Exactly. Exactly. Time is money and valuable, rather.

JUNKO YOSHIDA: So what was the timeline that Mobileye talked about today? He was saying that we won’t get to the consumer AV until we nail down the robotaxi, right?

PHIL MAGNEY: One thing he made perfectly clear is that the robotaxi is really going to be coming in as transportation as a service, rather than any kind of consumer Level 4 or 4+ vehicle. It’s going to be led by a handful of companies that are going to deploy it as a commercial business, which has always kind of been my expectation as well. Again, I can’t recall all the names off the top of my head, but they’re working with several companies on that, and some new ones. NIO in China. Exactly. I think they’re making great progress.

And then of course the other segment is the evolution of ADAS into Level 2+, and they seem to have a very solid plan for that and a lot of customers lined up and programs in place for that, too.

JUNKO YOSHIDA: All right. Very good. Thank you so much.

BRIAN SANTO: During the briefing, Shashua talked about the importance of executing on the robotaxi business. He claimed that a fully autonomous “commercial” robotaxi today could cost between $10,000 and $15,000. But with enough experience and insights gained from the robotaxi business and Mobileye’s own development of new hardware, a “consumer” automated vehicle might be possible at less than– get this– $5,000 by 2025, he said.

Toyota traditionally has a splashy presentation every year at CES, and it traditionally talks about cars. But things change. Frequent EE Times contributor David Benjamin – Benji – attended the presentation.

So, Benji, you walked into a press conference expecting to hear all about cars and a smart cities press conference broke out. Is that right?

DAVID BENJAMIN: Yeah. It was the Toyota press conference. Akio Toyoda, the president of Toyota, was the speaker. And in the last few years, of course, Toyota presented concept cars and talked about automated driving. And usually featured Gil Pratt, who is one of the really smart guys talking about autonomous driving. And one of the few who warned that it’s going to be a longer haul to get to the Level 5 family car than the industry really wants. But instead, Toyoda-san talked about building a woven city, which would be a smart city that interweaves all the elements of artificial intelligence from scratch in the area around Mt. Fuji in Japan, close to where my wife’s family lives. And I’m encouraging them to buy up real estate.

BRIAN SANTO: Good advice. So did he present this as a concept community? Or is Toyota actually building it?

DAVID BENJAMIN: That’s a good question. He introduced Bjarke Ingels, who’s head of the Bjarke Ingels Group, BIG, from Copenhagen, who is the architect who has drawn up these spectacular artist’s renderings of the place. I think that, in some respect or another, Toyota must have control over the turf, because they were presenting it as a future reality. But they also said that they’re going to build the whole thing virtually before they build it physically. And they were inviting investors and innovators and anyone inspired by living a better life in the future to contribute to the project. So I think that it’s not necessarily going to come around in the next six months or so.

BRIAN SANTO: So this isn’t exactly a complete and utter departure from vehicle technology and electric vehicle technology. They’re going to be talking about a smart city with all that entails– parking meters and home management and that sort of thing– but they’re also talking about some of the infrastructure for delivering things– delivering groceries, delivering people. So it looks like there is a vehicle component to this smart city. Tell us what Toyota discussed about that.

DAVID BENJAMIN: One of the things that Gil Pratt was talking about the last couple of years was that the real immediate future for autonomous vehicles will be things like delivery vehicles and shuttles and trams that follow a fixed route, that essentially go in a loop from Point A to Point B, back to Point A again. And deliver things, move people around, pick up nannies, drop them off, things like that. And my impression of this woven city is that virtually all the vehicles on the streets where vehicles will be allowed will be these sorts of autonomous, loop-driven vehicles delivering things. I don’t think there will be any parking meters. And I think people’s private cars– assuming they have them– will be parked beneath or around these George Jetson condos that are going to go up. And if you’re commuting someplace else, going to Tokyo or Chigasaki or Yokohama, you’re going to be picking up your car in a garage, leaving the woven city and driving on regular roads. And probably not using an autonomous car.

BRIAN SANTO: We should mention first that this is the initial concept. “City” might be too grand a word to describe it, right?

DAVID BENJAMIN: Well, the initial population is going to be 2,000. So we’re talking about a smart village. Who knows how big it could get? Again, I talked about the fact that it’s close to Mt. Fuji, and if it gets big enough, they’ll probably have to tear down Mt. Fuji and build suburbs.

BRIAN SANTO: Or drill into it, one or the other, right?

DAVID BENJAMIN: Well, if you could get rid of those tourists who go up and down the mountain all the time, it wouldn’t necessarily be a bad thing.

BRIAN SANTO: Thanks, Benji.

And so we conclude our third day of coverage of the 2020 Consumer Electronics Show. This is our final podcast live from CES. We invite you to listen to our first two as well.

Our regularly scheduled podcast is our Weekly Briefing, which we usually present every Friday. Well, not this Friday. We’ll resume our regular schedule the Friday after next.

This podcast is Produced by AspenCore Studio. It was Engineered by Taylor Marvin and Greg McRae at Coupe Studios. The Segment Producer was Kaitie Huss. The transcript of this podcast can be found on EETimes.com. Find our podcasts on iTunes, Spotify, Stitcher, Blubrry or the EE Times web site. Signing off from CES in Las Vegas, I’m Brian Santo.

CES 2020 Day 2 Recap

BRIAN SANTO: I’m Brian Santo, EE Times Editor in Chief, and you’re listening to EE Times on Air. This is Day Two of our special series of podcasts reporting live from the Consumer Electronics Show in beautiful Nevada.

In today’s episode:

We’ve got an interview with NXP CTO Lars Reger…

…also, another live interview from CES Unveiled with an executive of Atmosic, which has created a nifty new Bluetooth device that harvests energy from its environment to power – well – all sorts of things…

…we’ve also got another live interview with the developers of a squishable portable speaker…

…and we have a quick recap of the press events held by AMD, which wowed the crowd, and by Intel, which… didn’t.

At last year’s CES, international editor Junko Yoshida interviewed NXP Semiconductors’ Lars Reger, who had just been named CTO of the company. Twelve months have gone by, we’re all back in Las Vegas, and Junko caught up with Reger again.

Reger has a track record of promoting boldly unorthodox projects. He pushed the development of CMOS radar chips long before it was obvious that radar rendered in CMOS would succeed. More recently, he became an advocate for the use of Ultra-Wide Band – or UWB – reviving a technology that years ago had been abandoned.

Reger has a habit of dreaming about the technically impossible. More important, he loves talking about his technical dreams. Junko asked him about his experience as a CTO.

LARS REGER: Last year, I started in December bringing together all the technical roadmaps of the different business units of the entire company and trying to find a common ground of these portfolios that we have. And the early dream in those days was that we are the company for all smart-connected devices that can basically sense the environment, think of smart advice, connect to the cloud if needed and send the smart advice to the arms and legs of the robot or the smart-connected device that you want to build. Adding safety and security to that, and you’re done. So if NXP could only be that company that can excel in all of these six technical sectors, we are unbeatable.

In the meantime, a lot has materialized, and in each of these six packets, we can show how we move the needle for the industry. That is already partially a dream come true for me so that this story really resonates, that we have ingredients to show what we can do. That we can walk the talk.

JUNKO YOSHIDA: Okay. I want to go down to specifics, because, as I was telling you before, NXP’s announcement of UWB kind of surprised us. I used to think UWB was dead. Why is it coming back? What is it for? That’s something really unexpected to me, speaking as a layman. So tell me how you started to imagine, or reimagine, UWB. When was that? And for what occasion?

LARS REGER: The discussion on Ultra-Wide Band is already pretty old, technically– it goes back to the beginning of its standardization in the early 2000s as a communication technology. But then it lost against WiFi and was dead as a communication technology. What I continued discussing with a couple of industry leaders on the car key development side is how do we define the next generation of car key electronics. That always was a good discussion, but only recently did we come to the conclusion that, technically, it will work to use this technology to really remove the entire key ring in your pocket– not only use it for car access, but integrate it into smart portable devices– in your watch, in your phone– and access everything with an already standardized technology. So no one needs to redevelop it, redefine it again. You can really go with that technology, integrate it into your smart portable devices and access the entire world around you.

JUNKO YOSHIDA: Right. But the original impetus, as you started to imagine it, was: what if we use it for a car key? Right?

LARS REGER: You’re right. So the initial discussions were exactly on how do we make car keys more secure, how do we make them smaller, how can they be integrated better and so on. The next discussion came when we said, Yeah, but we can make car key handling much easier. You send basically a security certificate, like a banking transaction, to your mobile phone, and with that you have a two-day car key and so on. And then we discussed with a couple of technical dreamers as well, saying, Wouldn’t that work also for your hotel key? For your front door? For your garage door? Isn’t it annoying at the moment that you need to have one key per lock and you’re running around with 20 different keys? If you have the language for the key– Ultra-Wide Band, with the IEEE standardization already defined– and you use one of our secure elements, like those we’re using in passports or banking cards, as the key vault, and you combine those two and integrate them into your smartphone or your watch, couldn’t this be your universal key platform for all the things you use a key for today? And it was just basically a story-telling, dreaming activity.

Then of course a couple of hundred if not thousand people in our company, but also in the partnering companies, started getting active in this area and started innovating in that domain.

JUNKO YOSHIDA: Actually, I have never heard a CTO talking about story-telling. Story-telling is something that marketing or PR people always talk about. So tell us about the importance of story-telling within an organization as a CTO.

LARS REGER: Good point. We have 30,000 people in the company, 10,000 engineers, and of course the company will only be a world champion if most of these people are working in an aligned way as a team– if they know what the purpose is. So in other words, what you call story-telling is basically sharing a vision. And even if this vision is a bit far out and technically not realistic today, at least if you can bring a compelling story to the people, this will create thought leadership– or in other words, there will hopefully be 10,000 thought followers. And if you have thought leaders and thought followers, then suddenly you can start crafting a joint roadmap going forward. And people will tell you what is not do-able today or what could be do-able in the future and so on. But you are starting a conversation, and that is all I’m doing. So it’s just trying to put a vision out there. And people tell me where it’s working and where it’s not working. And then I’m using 10,000 brains, and not just my own, to realize this.

JUNKO YOSHIDA: But also, as a CTO, you have to have something to back it up. In other words, you used the word “dreamer.” A dreamer is something that I don’t really associate with a regular engineer. Regular engineers always tell you, Oh, that’s not possible. We’re not going to do that. That’s the typical engineering response to a dreamlike project. So tell me about yourself. When you were a kid, you were a little strange– you weren’t just dreaming, you actually started to sketch out, with pen and paper, the architecture of a submarine at the age of seven?

LARS REGER: Yeah. That is a funny childhood story. Indeed, I started dreaming of building my own tiny little research submarine to see what’s happening underwater. That drove me later into studying physics and doing my MBA, just trying to have a solid technical background. I mean, there is a difference between being overly pessimistic, merely realistic, every time telling people what is not technically do-able– and, on the other side, being such a dreamer that you are so unrealistic that you lose your technical followership. So there is a fine line in between: having a solid technical understanding, knowing what could be possible– you don’t know how to realize it in the last instance, but what could at least be possible– and then trying to get the followership supporting you. That is of course the key ingredient. And for that you need to have a pretty good technical understanding of what, at least physically, is do-able.

JUNKO YOSHIDA: Got it. Thank you very much.

LARS REGER: My pleasure. Thank you.

BRIAN SANTO: NXP is at CES showing cybersecurity solutions, and it will also demonstrate how Ultra-Wide Band can be useful in any number of IoT applications.

CES Unveiled is an event held before the official opening of CES, where many of the stand-out products at the show are highlighted for a media crowd. In yesterday’s podcast, we spoke with Jim McGregor of Tirias Research, who had sifted through some of the products presented at CES Unveiled. One that caught his eye was an energy-harvesting Bluetooth device from a company called Atmosic. I circled back around to talk to the company.

SRINIVAS PATTAMATTA: Hi, I’m Srinivas Pattamatta. I’m the VP of Marketing and Business Development for Atmosic Solutions. We are a Bluetooth chip company. Two unique things about us is our Bluetooth is very, very low power– five to ten times lower than anyone in the market– and then we also have added a unique thing, which is energy harvesting to the Bluetooth chip. And as a result of that, you can do one of two things: A) you can actually extend the battery life five to ten years, or in some special cases, you can run without any battery and you can use any energy like RF, thermal, photo or motion.

BRIAN SANTO: Okay, so you’re showing us a PC, a laptop, here on display. Is this specifically for PCs? Or can it be any battery-operated device?

SRINIVAS PATTAMATTA: Our Bluetooth solution can work with any battery-operated device in the IoT market. Specifically on the consumer side, think of remote controls, wearables, home automation devices. On the industrial IoT side, think of beacons and tracking devices and so on.

BRIAN SANTO: That’s fantastic. So can you go actually battery-free with any particular devices?

SRINIVAS PATTAMATTA: We can go battery-free in many applications where there is enough energy and you’re not transmitting every microsecond. For example, think of an asset-tracking device in a hospital. You have 50,000 assets in a 1,000-bed hospital. And those don’t need to be tracked every minute. If you just harvest enough energy and send a beacon every hour, that’s good enough. Or think of a door lock that you use your phone to power the door lock and then it unlocks itself. The phone powers the door lock. Or think of a keyboard that is sitting in front of the laptop, and the wireless energy coming from the laptop can actually hook up to the keyboard and then actually harvest energy.

BRIAN SANTO: Are there specific energy types you can harvest? Can you harvest any type of energy?

SRINIVAS PATTAMATTA: Today we can harvest RF, thermal, photo and also motion. In the future, we are going to add others as well.

BRIAN SANTO: Okay. Can you think of one of the coolest applications that your energy harvesting device is enabling?

SRINIVAS PATTAMATTA: Yeah. Think of a switch that is connected to a Bluetooth socket. Now you can place the switch anywhere you want, and it doesn’t require any battery. And just the motion of turning on and off the switch will power the Bluetooth solution inside the switch.

BRIAN SANTO: Very cool. Srinivas, thank you very much.

SRINIVAS PATTAMATTA: Thank you so much.

BRIAN SANTO: After talking to Atmosic, a guy in the booth next door caught our attention. He was expanding and collapsing something that looked like a portable speaker as if it were an accordion. It turns out that’s exactly what it was. A speaker, not an accordion. I asked the guy doing the squishing to introduce himself.

GREGG STEIN: Hi, I’m Gregg Stein. I’m the CEO of POW Audio.

BRIAN SANTO: Okay, Gregg is holding what looks to be a speaker. It’s white. It’s roughly the size of a man’s fist. And he just squished it. So tell us why you got away with squishing your speaker and why you did that.

GREGG STEIN: Squishy speaker. I love that. Basically, we have the patent on an audio expansion technology called WaveBloom. WaveBloom allows a speaker to expand and contract. The company was actually founded by a dad and his son. The son, a guy named Pam, was a professional hockey player, and he was going in and out of the locker room looking for a great audio speaker, right? He couldn’t find one, so he and his dad, who’s an amazing designer, developed this audio expansion technology. You want to know where it came from? The inspiration came from one of those collapsible doggy bowls. You ever see those?

BRIAN SANTO: Yeah.

GREGG STEIN: Pretty cool, right? Well anyway, they took that, they connected it to an audio speaker, and that basically became the impetus for WaveBloom technology, which we now have the patent on.

BRIAN SANTO: And when you pop it out, that gives you a bigger sound chamber, right?

GREGG STEIN: As we say, “It’s all in the air.” Right? So on the back, it’s actually magnetic as well on the mode that I’m showing you right here. And basically you can put that on a golf cart or a refrigerator or anything like that. When you do that, you’re going to get an even bigger sound. Why? Because the sound, it’s all in the air. It’s resonating right out of the back of the speaker, and it creates like a plenum. You get a much bigger sound yet again.

BRIAN SANTO: Very cool. Thanks, Gregg.

GREGG STEIN: Thank you!

BRIAN SANTO: We’ve got a roundup of even more things we saw at CES Unveiled on the web site at eetimes.com. That story is mirrored on our site dedicated to our CES coverage at ces.eetimes.com. The story is called, “CES Unveiled: Know Thyself, Groom Thyself.”

Yeah, while there was definitely some cool stuff on display, a lot of companies seem to be flailing for a reason to exist and are creating devices designed to measure stuff that does not need measuring. Some of those devices are harmless, like toothbrushes, but others seem to us invasive and/or creepy. Read the story and see if you agree.

CES is all about consumer electronics: TVs, toy robots, coffee makers, car stereos and the like. But years ago, companies that provide enabling technology started participating as well. CES is now an important platform for chip companies, who tend to make big announcements during the two days of press conferences prior to the official opening of the CES show floor.

Yesterday, AMD and Intel were prominent among the presenters. EE Times European correspondent Nitin Dahad is here in Las Vegas with me and Junko, and here he is covering AMD’s big announcements yesterday.

NITIN DAHAD: AMD CEO Lisa Su sounded ebullient in her press keynote at CES 2020, as she promised to deliver the best-ever experience to gamers and creators with the announcement of new desktop and mobile processors. These included the world’s first 64-core high-end desktop processor, the Threadripper 3990X, which, to use a very English expression, knocks the socks off anything else in terms of performance for high-definition video rendering without any tear or stutter. The new mobile processors, the AMD Ryzen 4000 series (again, on the 7 nanometer process), feature up to eight cores and 16 threads and, she said, deliver disruptive performance for ultra-thin laptops within a configurable 15 watt power envelope. Of course, there were also a number of laptops she announced from Dell and ASUS and others.

BRIAN SANTO: In general, the response to AMD’s announcements was positive. There’s general acknowledgement that the company is closing the gap with Intel.

Intel, on the other hand, had its EVPs, Navin Shenoy and Greg Bryant, who were enthusiastic, as were a few of their guest speakers, but after all was said and done, most of the excitement was generated by Intel’s canned playlist of ultrahip modern pop and almost painfully effervescent interstitial music. In terms of announcing new silicon though, Intel was underwhelming.

The company celebrated the integration of artificial intelligence directly in its Ice Lake processors, which had been previously announced, and it touted the integration of WiFi 6, which had been previously announced, and its integration of graphics processing, called Xe, in its new generation Tiger Lake processors, which had previously been announced. Bryant triumphantly held up examples of both the chip and a compact board featuring the chip, suggesting that Tiger Lake will be found in products commercialized in 2020.

We asked Tirias Research analyst Kevin Krewell what he thought about Intel’s presentation. He noted that Intel focused heavily on mobile and on AI in PCs– even the long-awaited Xe graphics was positioned for mobile, he noted. Both AI and Xe graphics are integrated in Tiger Lake, and Xe will also come in discrete form as a graphics processing unit the company has designated the DG1. The DG1, Krewell said, is going to disappoint some people who were looking for an Nvidia-killer GPU, however.

That said, the Tiger Lake processor was at the heart of a laptop demo that was pretty nifty. It was powering what Intel called the world’s first 17-inch foldable OLED PC. The two screen halves can operate independently, or they can be combo’d into one seamless 17-inch screen.

There was also a cool presentation on a technology that Intel first talked about two or three years ago. Intel VP James Carwana from Intel Sports showed off the company’s motion-capture technology, which captures motion not in 2D with pixels, but in 3D, with what Carwana referred to as “boxels.”

The idea is to capture a sporting event once; do some massive, furious processing; and then stream graphic reproductions from just about any viewpoint. Carwana showed a football game as an example – an actual game played earlier this year in which the Cardinals beat the Browns 38-24. He showed a stream from the end zone, one from overhead, one from the quarterback’s point of view, and one from the point of a safety – all reconstructed by computer. Intel worked on speed of reproduction first. It can deliver up to 60 frames per second pretty quickly. The next stage is to improve graphics quality, which currently falls far short of what gamers are used to in Madden Football, for example.

As tantalizing as all that might sound to sports fans, they won’t see this any time soon, however. Carwana said his group still needs six times more power to get the graphics right. Bryant promised to deliver it though, eventually.

And that’s where we conclude our second day of coverage of the 2020 Consumer Electronics Show.

EE Times On Air is doing a series of special podcasts live from the Consumer Electronics Show, with an episode yesterday, this one today, and another tomorrow. We’ve also got coverage on a special site set up specifically for the CES 2020 show. Find it at ces.eetimes.com.

Thanks for listening, and check back with us tomorrow for more from CES 2020 in Las Vegas.

This podcast is Produced by AspenCore Studio. It was Engineered by Taylor Marvin and Greg McRae at Coupe Studios. The Segment Producer was Kaitie Huss. The transcript of this podcast can be found on EETimes.com. Find our podcasts on iTunes, Spotify, Stitcher, Blubrry or the EE Times website. I’m Brian Santo.

CES 2020 Day 1 Recap

BRIAN SANTO: I’m Brian Santo, EE Times Editor-in-Chief, and you’re listening to EE Times On Air. This is a special edition of our podcast, with reporting live from the Consumer Electronics Show in fabulous, fabulous, fabulous Las Vegas!

The kickoff events every year for the Consumer Electronics Show include CES Unveiled, where the show organizers highlight technologies that they consider particularly noteworthy, and an overview of consumer market trends presented by the Consumer Technology Association, or CTA. Let’s get to it.

CES Unveiled is an odd distillation of the overall show. CES at large is spread out over multiple huge convention center halls and spills out into the convention facilities of nearby Vegas hotels. The sheer size of this show can be bewildering, but at least it’s organized. Audio systems here, video systems there, smartphones over yonder, agricultural electronics in the back forty. CES Unveiled, however, is a themeless grab bag of all of the above and more, smooshed together in one vast hotel meeting room which, year after year, is never properly air conditioned. On the plus side, the grub at the event is pretty high class, especially compared to the hash that conventions normally sling at the media. Reporters have no time to be picky about food at these events, so it’s notable when we’re being fed well. On the other hand, it’s also possible that reporters have no inclination to ever be picky about food, but that’s an issue for anthropologists.

Even a “best of” distillation of CES is enormous and impossible for one person to cover thoroughly. So we resolved to divide and conquer CES Unveiled with our buddies Jim McGregor and Kevin Krewell from TIRIAS Research.

Okay, so we just walked through CES Unveiled, where they show the hippest, coolest, newest stuff. And Jim has been crawling through it for two-plus hours, and he’s going to tell us about the coolest things he’s seen in there.

JIM McGREGOR: Well, it’s kind of interesting, because I’ve been through it two hours and I’ve only gotten through half of it. Some of the same old stuff you kind of expect: applications, you know, smartphone handles and stuff like that. But there are a few really cool companies that are doing some cool stuff there. There’s one chip startup that’s doing a Bluetooth solution, Atmosic.

BRIAN SANTO: Yeah, so it’s Bluetooth with energy harvesting, right?

JIM McGREGOR: Not only are they doing Bluetooth, very low power Bluetooth, but they’re doing energy harvesting with RF signals, with light, with thermal and with motion. So theoretically, you could have Bluetooth connectivity on a sensor with no battery.

BRIAN SANTO: I talked to this guy. He said you can even create something like a light switch. You can put it anywhere, and just the mechanical action of flipping the switch will power it enough to actually work, right?

JIM McGREGOR: Oh, yeah, yeah. And motion and anything like that. Even heat. So no, it’s a very, very interesting solution. The company’s been around for a couple of years, but this whole energy harvesting is kind of a new aspect of it. And I only know of one other company that’s trying that. So I think that’s a really cool area, especially for IoT devices.

BRIAN SANTO: So if it works, it’s a bit of a game changer, right?

JIM McGREGOR: Oh, absolutely. You know, when you’re thinking about every place you’re going to be embedding sensors and you want wireless connectivity, even in a home. Think about the possibility of putting sensors in just about anything and having connectivity. It could be a huge game changer. Not to mention wearables, medical, you name it. There’s a whole bunch of applications for it.

BRIAN SANTO: So what else did you see while you were walking around inside of CES Unveiled?

JIM McGREGOR: Well, I’ve been looking at some of the products, and I’ve seen a few cool ones. There’s one company, Max Pro, that’s got a portable exercise system that fits in your backpack. You can do hundreds of exercises with it and connect it to your smartphone to track everything.

BRIAN SANTO: So the electronics element, is it just the connectivity with your phone so that you can track? Or is there some other electronic element?

JIM McGREGOR: Actually, there is another electronic element, and that’s in the system itself. The system itself only weighs nine pounds but has to simulate up to 150 pounds of resistance on each side. So it actually has to simulate the tension, which means there are electronic control units inside the device.

BRIAN SANTO: Oh, that’s wicked cool!

JIM McGREGOR: There’s also a company called Vinyl Recorder. They’re doing vinyl recording, where you can transfer CD audio to vinyl.

BRIAN SANTO: That’s really wicked cool! So do you have a turntable?

JIM McGREGOR: I have a turntable, I have hundreds of records and I have a 17-year-old that’s a rock DJ. So yeah.

BRIAN SANTO: How about you, Kevin? You got a turntable?

KEVIN KREWELL: I do. In fact, I recently got a disk cleaner to clean my vinyl. I’ve got lots of vinyl.

BRIAN SANTO: Beautiful. Somebody just gifted me a 180 gram version of The Beatles’ “A Hard Day’s Night.” I am so happy. It’s beautiful. So we talked about CES Unveiled. Do you have anything else?

JIM McGREGOR: There’s one other one I saw called Zero Energy, and they’re kind of in the early stages, but they basically have a little plug-in device. You plug your appliance, your light, your TV, whatever, into this, and it plugs into the wall. Now, it doesn’t seem like much, but it’s basically your surge protector: the one you plug all your electronics into, that you’re supposed to turn off so it doesn’t suck energy, but nobody ever does. Well, this does it automatically. It senses the power level and turns off the energy to all your appliances when you’re not using them.

BRIAN SANTO: Very cool.

KEVIN KREWELL: How does it know when to turn it back on?

JIM McGREGOR: Whenever it senses that there is a power surge or a request.

BRIAN SANTO: All right! So this CES… last year it was pretty heavily automobile-oriented. And from what we were talking about earlier, before we actually turned on the recorder, it’s going to be that again this year, right?

JIM McGREGOR: Oh, absolutely. Over 50% of the press conferences over the next two days are automotive-related. And we’ve already been briefed on some of the announcements coming up. CES has become the primary show for automotive technology, especially with electric vehicles and autonomous vehicles coming to market.

BRIAN SANTO: So that’s kind of… it kind of is a sign of how the automobile industry is changing, not necessarily the consumer electronics business, right?

JIM McGREGOR: Absolutely. I mean, you gotta think about it. The electric vehicle is really not much more than a smartphone. It has sensors, it has processing, it has a battery pack. It’s a smartphone. And then when you think autonomous vehicles, it’s basically a supercomputer. It’s the smartphone with a bunch more sensors and a lot more processing.

BRIAN SANTO: And wheels. Kevin, you actually went and saw Byton today. Tell me what you saw there.

KEVIN KREWELL: Speaking of turning your car into a smartphone, Byton turns it up to 11. The car has a huge display across the entire front cockpit, as well as a tablet mounted right on the steering wheel, with soft functions that can change over time. The cockpit display can be used not just for information like speed, but it also will put up other personalized information, like your appointments for the day. And if you’re stopped, it will let you go into office mode, which allows you to do video conferencing from your dashboard with multiple sources. Built in is 4G that they can upgrade to 5G. The car customizes itself to you when it recognizes you getting in. It maps who you are and brings up your personal profile: your appointments, places you’ve been and where you want to go.

BRIAN SANTO: And it can actually shift the seating arrangement, right?

KEVIN KREWELL: Yeah. You can turn it around like a bucket seat. I mean, it’s actually set up for autonomous driving, but it’s not a true autonomous driving solution yet. This is the first-generation product, the M-Byte, which is… even the name is spelled b-y-t-e, as in digital bytes. So they’re really telegraphing that this is a completely digital platform. It’s obviously an electric car, and most of the manufacturing is in China, but they have partnerships with companies in Japan and partnerships with companies in Korea. The funding comes from all over the place, but I’d have to say most of it did seem to come from Japan. But they are looking at it as an international company. They have a facility in Santa Clara, California, and they have a design area in France. So they really consider themselves an international car company.

BRIAN SANTO: And Jim, you actually got to ride in an autonomous vehicle, right?

JIM McGREGOR: I did. But the announcement doesn’t come out until tomorrow.

BRIAN SANTO: So we’ll cut that, right?

JIM McGREGOR: No, no, no! Actually, this is interesting. Just going over the list of automotive press releases: ZF, Bosch, Snyder, Continental, Qualcomm (yes, Qualcomm), Valeo, Toyota, Faurecia (I think that’s right) and Hyundai Motors are all going to be making automotive announcements tomorrow. So one of those companies has an autonomous vehicle here that I’ve already ridden in. I can’t tell you who.

BRIAN SANTO: Other products I saw at CES Unveiled included an electronic braking system for in-line skates, and electronic headsets designed to help fitful sleepers get a better night’s rest. Sleep devices are now a well-established category at CES. I also saw a portable speaker that is roughly the size of two smartphones stacked on top of each other, but it expands, sort of like one of those silicone kitchen colanders, so that it’s roughly four or five smartphones thick. The larger cavity creates greater resonance for more ample sound.

But we know what engineers really like. I found the smallest combination digital multi-meter/oscilloscope I have ever seen. I met the CEO of Pokit Innovations, and I asked him to introduce himself and tell us about the product.

PAUL MOUTZOURIS: So my name’s Paul Moutzouris. I’m the founder of Pokit Innovations.

BRIAN SANTO: So what we’ve got from Pokit Innovations are a couple of portable multi-meters. The first one I’m looking at, the first one that’s hit the market, is roughly the size of a luxury wristwatch. The other one is kind of a large pen. Tell me about the first one, which you’re already selling, and then I’ll ask you about the new one and the differences between them.

PAUL MOUTZOURIS: Yeah, sure. So the first one’s called Pokit Meter, and it’s a multi-meter, oscilloscope and logger all in one. So it’s not just a multi-meter; it’s a full-featured multi-meter with AC/DC current and voltage. But as well, you can display the waveforms, and pinch and drag those waveforms. It’s really small, it has retractable leads, and it allows you to take measurements. And it connects to your phone, so the measurements and the waveforms are displayed on your phone. It can also be a logger for up to six months. So you can put it away, take your phone away, come back later and retrieve all the data, and upload it to the web.

BRIAN SANTO: And it’s a Bluetooth connection to the phone, right?

PAUL MOUTZOURIS: Absolutely. Wireless Bluetooth. The device is fully wireless, with a wireless connection to your phone, so you’re completely portable. You can measure things anywhere, which is really what it’s about. But the new version, called Pokit Pro, just finished a Kickstarter in October. It raised 750,000 or more, and it was the highest-funded DIY project in Kickstarter history, which we’re very surprised about, I suppose. But I guess we realized it’s found a little bit of a place in the market. Its main difference is that it’s a full 600-volt CAT III device, where the original is low voltage. The second version, the Pro, is also multi-channel and has some additional measurement capabilities. That’s why it’s a little bit bigger: it has to be able to stick into the mains power point. But other than that, it’s still fairly compact.

BRIAN SANTO: So the first one, the smaller one, the Pokit Meter: what kind of battery power does that use? And what’s the difference with the Pro?

PAUL MOUTZOURIS: So the Pokit Meter has got a battery cell in there, like you get from the supermarket. The Pokit Pro, because it’s got real-time acquisition and multi-channel, needs a little bit more power, so we made it rechargeable. It’s a more premium product, more of a professional product. But look, both of these products are equally accurate; it’s just that the measurement range is higher on the Pro. And what they’re really doing, if I could explain, is consumerizing something which is traditionally only for engineers. Big, bulky, expensive equipment: multi-meters, oscilloscopes, big bricks of things that you can’t take away from your bench. You’re stuck there. You’re limited in your creativity. Now you can go anywhere with these guys. They fit in your hand, you can put them in your pocket, and you can really take your creativity to new places.

BRIAN SANTO: Okay, let me get into some of the nuts and bolts. Can you describe the processing power that you have in the meter? And then what the difference would be with the new Pro that’s coming out.

PAUL MOUTZOURIS: So look, the processing is actually equivalent in both of the units. The difference is that the Pokit Pro, because of its high-voltage CAT III rating, has to have a lot more front-end on it to deal with the transients that come with the high-voltage capabilities. The Pro also has acquisition buttons, and it has a torch integrated in there. So it’s not so much about the processing power. The measurement core is similar. It’s just that the Pro has a lot more bolted on to give it that extra accuracy and range.

BRIAN SANTO: So how accurate is it, and what gives it the accuracy?

PAUL MOUTZOURIS: So to give you an example: some of our backers on our original Pokit Meter product are more fanatical than we are. One guy took it and pitted it against a Seven Series Fluke. I think that’s a $700 multi-meter. It’s a premium product, right? He tested every single range. I mean every single range. And he came back and said it’s actually better than the Fluke. Now obviously the Fluke is doing high voltage, which is what the Pro does, but the Meter in its range was better than the Fluke, even though the Fluke is probably about ten times the price.

BRIAN SANTO: Okay, so let’s talk about price then. The Pokit meter is how much? And the Pro will go for how much when it comes out?

PAUL MOUTZOURIS: Well, at the CES show, we are providing a promotion, which means that you can get the Pokit Meter for 71 US dollars shipped to your door. You can actually get that on our shop right now. The Pokit Pro, it’s the first time we’ve shown it in public. We’ve just sold 10,000 pre-orders. We’ve got more pre-orders going. It will ship in June. The development team is still finessing that. It’s selling for $95. So they’re both under $100. And the Pro, I can assure you, will be as accurate as the Fluke, which is at $700. And it’s got the waveforms, don’t forget that.

BRIAN SANTO: Fantastic. Thanks, Paul.

PAUL MOUTZOURIS: No problem.

BRIAN SANTO: Nitin Dahad is based in London, but he’s also here at CES. He sat in on the CTA’s annual review of upcoming technology trends. Here’s Nitin’s report.

NITIN DAHAD: At the opening of CES 2020, we heard from Steve Koenig and Lesley Rohrbaugh of the Consumer Technology Association about the key trends to watch for in 2020. Some things were not surprising for EE Times’ audience; others may be something new. In summary, he talked about more intelligence in devices, electrification of vehicles and digital health as the top tech trends to watch. Digging deeper into that, I think we’ve always talked about embedded intelligence because of the vast amounts of data and the artificial intelligence that needs to go into devices to make sense of that data. And so I think what they’re saying is, there’ll be more connected intelligence and more consumerization of AI. In other words, a lot more of this going into everyday objects. One example: at CES Unveiled I saw an AI toothbrush from Oral-B. So, you know, what Steve was saying was, the last decade was about the Internet of Things, but now we kick off a new decade defined by the Intelligence of Things. And that really says it all.

But the other thing that he was already talking about was transport and mobility. And he said electrification is going to be the big thing, both in 2020 as well as over the next decade, just because of new innovations in battery technology and charging infrastructure and charging models, business models rather. So he was saying that we’ve really come to a point of inflection in electrification and electric vehicles. So that’ll be the big story for CES 2020.

The other big one is digital health. I’ve seen, as you’ll see in the CES Unveiled reports, a lot of smart wearables for health and health monitors and blood pressure monitors. So that’s going to be the other big thing.

Overall, I think what we’re seeing is really all playing into more connectivity with 5G and also enabling of more intelligence in lots of devices.

BRIAN SANTO: So that’s our wrap for the first day of the 2020 Consumer Electronics Show. EE Times On Air will include daily podcasts from the Consumer Electronics Show, with episodes today, tomorrow and the next day. We’ve also got coverage on a special site set up specifically for the CES 2020 show. Check it out at ces.eetimes.com. That’s ces.eetimes.com. Also this week, we’re going to skip our weekly review podcast, which we normally do on Fridays. The weekly review will resume the Friday after next.

Thanks for listening today, and check back with us tomorrow for more from CES 2020. This podcast is produced by AspenCore Studio. It was engineered by Taylor Marvin and Greg McRae at Couple Studios. The segment producer was Kaitie Huss. The transcript of this podcast can be found on eetimes.com. Find our podcasts on iTunes, Spotify, Stitcher or Blubrry. I’m Brian Santo.
