
Weekly Briefing: Friday, January 17, 2020

BRIAN SANTO: I’m Brian Santo, EE Times Editor in Chief, and you’re listening to EE Times on Air. This is your Briefing for the week ending January 17th.

In this episode…

A company called Prophesee has developed a completely new way to capture video with what it calls an event-based sensor. At the recent CES show, we caught up with Prophesee’s CEO, Luca Verre. Today you’ll hear our interview with him.

Also, the Consumer Electronics Show. It’s vast. CES 2020 was last week. EE Times editors saw more products and technologies, and sat in on more sessions, than we had time to write about. We got together to discuss some of the most fascinating things we saw at the show, including the Prophesee event-based sensor, autonomous boats, data privacy chips, quantum computers, smart toilets, automated cocktail shakers, farm equipment, AI-powered toothbrushes… and more!

Prophesee is a startup based in France that is one of the companies pursuing an interesting new twist on video cameras. It’s unconventional enough to merit a brief explanation before we move on. The approach is called event-driven sensing, and it is fundamentally different from the way moving images have been captured since the beginning of motion pictures and television.

For more than a century, film and video cameras have captured a series of still images, one after another, at brief intervals of time. Displaying the video depends on flickering through each still frame, one after another; once you get past a certain minimum frame rate, human eyes perceive the progression of images as uninterrupted motion.

The basic idea behind event-driven sensing is to capture and record only what has changed in the scene in front of the image sensor from one moment to the next. EE Times has been interested in this company’s technology because event-driven cameras could offer at least part of the answer to a fundamental problem this industry is facing: how to handle big data inside advanced vehicles and other connected, embedded devices.
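To make the contrast with frame-based capture concrete, here is a minimal sketch of the idea, assuming a simple per-pixel brightness-change threshold computed from two consecutive frames. This is only a conceptual illustration in Python with NumPy, not Prophesee’s implementation; a real event-based sensor does this asynchronously in the pixel circuitry itself, and the threshold value here is hypothetical.

# Conceptual sketch of event-based sensing: emit an event only for pixels
# whose brightness changed beyond a threshold, instead of storing whole frames.
import numpy as np

THRESHOLD = 15  # hypothetical minimum change (0-255 scale) that counts as an event

def events_from_frames(prev_frame, curr_frame, timestamp):
    """Return (x, y, polarity, timestamp) tuples for pixels that changed."""
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    ys, xs = np.nonzero(np.abs(diff) >= THRESHOLD)
    return [(int(x), int(y), 1 if diff[y, x] > 0 else -1, timestamp)
            for x, y in zip(xs, ys)]

# A mostly static 720p scene with one small moving object:
prev = np.zeros((720, 1280), dtype=np.uint8)
curr = prev.copy()
curr[100:110, 200:210] = 200  # the only region that changed

events = events_from_frames(prev, curr, timestamp=0.001)
print(len(events), "events instead of", curr.size, "pixel values per frame")

The point of the sketch is the output: a handful of events for a mostly static scene, versus nearly a million pixel values for every conventional frame.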

So – Prophesee was at last week’s Consumer Electronics Show, and international editor Junko Yoshida caught up with the company’s CEO, Luca Verre.

JUNKO YOSHIDA: You’re showing some interesting demonstrations here, so I just want to get the lowdown from you. One of the first things that I saw when I came in… What was that you were showing? That was an HD version of your event-based sensor. So tell me a little bit about that.

LUCA VERRE: We have a new sensor generation, which is an HD sensor, so one million pixels, 720p. This is the result of joint development work we have done with Sony, which will be published at ISSCC in February in San Francisco.

JUNKO YOSHIDA: I see. So right now what you have as a commercial product from Prophesee, that is VGA-based.

LUCA VERRE: Yes. The commercial product we have is a VGA sensor. It’s in mass production. We are currently shipping it for industrial applications.

JUNKO YOSHIDA: This is sort of an interesting development here. ISSCC is essentially taking your paper. So we’re not talking about commercial plans, I understand. But obviously you guys had worked together. It’s a joint development. It has been going on for some time to produce this paper, right?

LUCA VERRE: Yes indeed. There has been some research work done together with Sony. Yes, Sony is indeed interested in event-based technology, but unfortunately I cannot tell you more than that.

JUNKO YOSHIDA: In terms of actual applications, they are keeping mum. They’re not telling us. All right. Or you can’t tell us. All right. So why do you think that HD is more important than VGA? I mean, in what sort of instances do people want HD?

LUCA VERRE: HD is important because, of course, increasing resolution enables us to open more doors to applications, to see, for example, to a farther distance for automotive applications, as well as for some industrial IoT applications. Also, one of the main challenges we have solved in moving from the VGA sensor to the HD sensor is the capability now to stack the sensor, to use a very advanced technology node that enables us to reduce the pixel pitch, so to make the sensor actually much smaller and more cost-effective.

JUNKO YOSHIDA: So can I say automotive is one of your key markets that you’re gunning for?

LUCA VERRE: Yes indeed. Automotive remains one of the key verticals we are targeting, because our technology, event-based technology, shows clear benefit in that space with respect to low latency detection, low data rate and high dynamic range.

JUNKO YOSHIDA: And what are you hearing from OEMs about what problems they really want to solve when they decide to work with you?

LUCA VERRE: When both OEMs and Tier Ones work with Prophesee and event-based technology, the main pain point they’re trying to solve is to reduce the amount of data generated, in both Level Two, Level Three and Level Four or Five types of applications. Because in both cases they’re looking for redundant solutions or multi-modal systems that generate a huge amount of data. So by adding an event-based sensor, they see the benefit of reducing this bandwidth by a factor of 20 or 30.

JUNKO YOSHIDA: Yeah. You said something interesting. You say that the car companies can probably continue to use frame-based cameras, but your event-based camera can be used as sort of like an early-warning signal?

LUCA VERRE: Yes. Because our sensor is always on, there’s not effectively a frame rate. And it’s capable, with low latency, of detecting relevant regions of interest. Then we can use this information to steer the attention of the other technologies, maybe lidar or radar.

JUNKO YOSHIDA: Right. So the object of interest can be spotted much faster. Okay. So the one thing that I took away from one of the demonstrations you showed me was that comparison of the use of frame-based cameras versus event-based cameras in terms of AEB, automatic emergency braking. And I think you mentioned that… What was the… American Automobile…

LUCA VERRE: The American Automobile Association. They published some numbers showing that, in the case of AEB, the current systems fail in 69% of the cases when it comes to detecting an adult; when it comes to detecting a child, it’s 89% of the time. Most of the time, it actually fails. So it’s still a great challenge, despite the fact that AEB will become mandatory a few years from now. So we did some tests in controlled environments with one of the largest OEMs in Europe, and we compared side by side a frame-based sensor with an event-based sensor, showing that, while the frame-based camera system was failing even in fusion with a radar system, our system was actually capable of detecting pedestrians in both daylight and night conditions.

JUNKO YOSHIDA: Nice. What’s the cost comparison between event-based cameras versus frame-based cameras?

LUCA VERRE: The sensor itself is very similar, because it’s a CMOS process, so the cost of the silicon, sold in volume, is the same as for a conventional image sensor. The benefit that you actually bring is more at the processing level, because the reduction in the amount of data comes with the benefit of lower processing power.

JUNKO YOSHIDA: Right. So there’s an impact on the rest of the system. Right. Very good.

BRIAN SANTO: Prophesee is one hot little startup. Back in October, the company raised another $28 million in funding, bringing its total to roughly $68 million. Among Prophesee’s strategic partners are Renault-Nissan and Huawei.

International correspondent Nitin Dahad, roving reporter David Benjamin, Junko, and I were all at the recent Consumer Electronics Show, roaming the halls, ferreting out the most interesting new technologies and stories. We all covered a lot of ground, and we saw a lot of things we didn’t get to write about for one reason or another. So Nitin, Junko, and I got together on a conference call to go over some of it.

So, Nitin, tell us what the experience of CES 2020 was like.

NITIN DAHAD: Wow. Okay, so it was my first CES after ten years, and boy was it different. It was big. And when I entered the hotel, I thought, where is the hotel lobby? This is just a casino.

BRIAN SANTO: Yeah. That’s Las Vegas, isn’t it? There’s always a casino in the lobby. So, Junko, you’re a veteran of CES. What was your CES 2020 like?

JUNKO YOSHIDA: I wouldn’t want to divulge how many years I’ve been to CES! Many, many years. Decades, actually. This CES was interesting. Every year, we see a whole bunch of new stuff, but this year… the crowd was just as big as ever, but I think I picked up a few trends that are going to be important for the next ten years.

NITIN DAHAD: I would also like to just say that I went straight in, obviously, on the Sunday to the Tech Trends and then the CES Unveiled. I have to say, there’s a whole display of what is possible and what is not possible, I guess, at CES. It was actually hard work and fun to look at all this. Because having been in technology for so many years, it’s really nice to see some of those end products.

BRIAN SANTO: I agree, but it is a grueling show, because it’s just so immense. There are three huge halls in the Las Vegas Convention Center. There are usually three or four hotels, all filled with stuff. It’s a lot to take in. That said, it’s a lot of fun to see what the industry has come up with in a year. Junko, tell us what you saw.

JUNKO YOSHIDA: Okay. I would like to focus on two things, actually. One, you may not think this is cool, but I think it’s the protection of privacy. The other thing is big data.

Let’s start with the protection of privacy, because I think everybody wants to talk about big data. And everybody thinks big data is a given. But nobody is really doing anything to protect privacy, except for the EU coming up with GDPR, right? So how do you make sure that your products are not inadvertently violating GDPR? Or, if you’re a consumer, how do you protect yourself so that all that you’re saying to Alexa, all that you’re saying to Siri, all your interactions with smartphones and wearables, your private data, are not going straight to the cloud and being shopped around by the people who manage data? Right? So it was interesting that I came across a company called DeCloak. And the technology is a piece of hardware, actually, to de-identify yourself. This is the kind of technology that Silicon Valley startups would never think of doing, because they think their business is collecting data. But this company, a small company in Taiwan, thinks that this is going to be big in the era of GDPR. Everybody wants to protect their privacy. So why don’t we put this little chip in a dongle and stick it into the smartphone? Or, over the long term, this chip can go inside any embedded system, so that the technology lets the data aggregators see the forest but not the trees.

BRIAN SANTO: And what does it do? Does it anonymize the data somehow before it gets sent?

JUNKO YOSHIDA: Yeah. They use a random number generator so that they can give the trend but they hide all the private information. Apparently someone from DARPA was interested in it and came by. And I said, well, I don’t know if it’s a good thing or a bad thing. They might want to reverse engineer this thing. Because if you’re in law enforcement, it’s also a kind of nightmare, right?
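The general idea Junko describes, adding randomness at the source so an aggregator can still see the trend but not any individual’s data, can be sketched in a few lines. This is a generic local-noise illustration, not DeCloak’s actual algorithm, and the noise scale and sensor values are made up:

# Generic sketch of de-identification by local noise (not DeCloak's algorithm):
# each device perturbs its own reading before upload, so the aggregate trend
# survives but no single uploaded value reveals the true personal reading.
import random

def deidentify(true_value, noise_scale=10.0):
    """Perturb one reading with zero-mean Gaussian noise before it leaves the device."""
    return true_value + random.gauss(0.0, noise_scale)

# 1,000 simulated users whose true readings hover around 72
true_readings = [random.gauss(72.0, 3.0) for _ in range(1000)]
uploaded = [deidentify(v) for v in true_readings]

print("true mean:    ", sum(true_readings) / len(true_readings))
print("uploaded mean:", sum(uploaded) / len(uploaded))               # the forest survives
print("one user, true vs uploaded:", true_readings[0], uploaded[0])  # the tree is hidden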

BRIAN SANTO: Okay, true. Yet in the US, we take it as a given, we assume we have certain privacy rights and protections. And those privacy protections and rights aren’t necessarily codified into law in other countries around the world. So this is certainly an issue around the world. But even in the US, it’s a bigger concern than most people might realize.

JUNKO YOSHIDA: Yeah. Big data. This also connects with big data, because as I walked around and talked to different people, I realized that everybody, without saying so in so many words, is struggling with big data, right? I cover automotive, and automotive uses a lot of different sensors, including computer vision, lidars and radars, and especially lidars. Oh my God! The point cloud that a lidar generates, the amount of data, is so huge. So the question is, if you have to deal with that much big data within an embedded system, how do you do it? You can put a big CPU or GPU inside the machine to process the data, or you could add AI engines to filter it. But the truth is, you really need to figure out a way to extract the information you need before it goes to, say, video compression or sensor fusion.

And there’s a little company in Berlin called Teraki. They’re showing this technology. That was kind of interesting, because Teraki’s piece of software can run on safety microcontrollers like Infineon’s AURIX or NXP’s BlueBox. I thought that was pretty cool.

BRIAN SANTO: That is cool. So it’s doing some preprocessing. It’s pulling information out of the data before sending it along?

JUNKO YOSHIDA: Exactly. So it’s a piece of software that works at the very edge, but it has the ability to filter out the noise and extract the information that’s needed for later AI training or sensor fusion or whatnot.

BRIAN SANTO: Fascinating. So, Nitin, you were exploring all over CES, and you found a boat?

NITIN DAHAD: I certainly did. That’s one of my themes: autonomous is not just for cars, as we’ve been talking about over the last few years. There’s a company called Brunswick, which pressed me quite hard to go to their press conference, I guess to talk about the first-ever boat unveiling at CES. When you look at it, actually, it’s an electric boat: a luxury, 40-foot electric boat, with obviously backup motors and everything. But it can run for eight hours with all the stuff that you use on there as well. And it’s got all the usual stuff that you’d see in cars: the lidar, the sensors, the communication systems. So what I think the CEO of Brunswick was trying to impress on people at their briefing was, it’s really almost the same as your cars, and that’s why we’re at CES. We want to show off our boat.

Obviously there was other stuff as well in terms of autonomous. Autonomous features: there was an e-bike which had blind-spot assist, which may not sound new, but they’ve already got a production model. It’s an Indian startup, and I think they’re crowdfunding at the moment, if I’m right. Basically, when a truck or a car or even a cow comes up beside you, I guess this thing buzzes on your handlebar to say there’s something at your side.

And then there was the autonomous cannabis grower. Actually, what they had in the booth was growing cannabis, but it works with any herbs or any plants. And it’s using a Raspberry Pi and an Arduino camera module, some environmental sensors, and they’ve got some training modules. And they said it takes, in this cabinet, three months to grow cannabis, and probably anything else. I don’t know how long it takes to grow cannabis, but it is quite interesting the way they’re using basic computing, the sensors and the AI. And that’s kind of a theme that I was seeing everywhere, actually. You use very simple stuff. And this company, by the way, was called Altifarm. Again, it’s a startup from India. I just happened on it because I was at Eureka Park, happened to go into…

BRIAN SANTO: Yeah, just happened.

NITIN DAHAD: And then the other thing I suppose was, is this really the best use of tech? When I was walking past the Amazon booth or space, whatever you call it, they had a Lamborghini, which they said was the first car with Alexa control built in. Do you really need it? I’m not sure.

JUNKO YOSHIDA: Well, there are a lot of chip companies like Qualcomm that actually demoed Alexa in a car last year. So I think it’s going to spread. It’s hands-free control, right? It’s going to make sense.

BRIAN SANTO: Yeah. You’re in your Lamborghini tooling around at a hundred kilometers per hour in Monaco. That’s when you need to ask Alexa to find you a restaurant that serves the best Portuguese bacalhau. Because you don’t want to take your eyes off the road when you’re screaming around corners in your sleek performance machine, right?

JUNKO YOSHIDA: Right.

NITIN DAHAD: The reason I sort of picked it up is because I had also interviewed a Formula One driver from McLaren, Lando Norris, and you’ll see the video online. But one of the things he said is, he wants that control to be able to sort of steer the car, move it around, whatever. So I’m not sure where the Alexa features come in. But I’m sure there’s got to be some real uses for it. It’s just that it seemed quite extravagant to have Alexa in a Lamborghini.

BRIAN SANTO: Well, speaking of extravagance and Alexa, didn’t one of the two of you see a bed equipped with Alexa? With a smart bed?

NITIN DAHAD: Yes.

JUNKO YOSHIDA: That was Nitin. Yes. Right?

NITIN DAHAD: So again, in the same space where they had the Lamborghini, Amazon was also showing smart beds.

BRIAN SANTO: Why?!?

NITIN DAHAD: You know what? I didn’t dig into it. Why would you want somebody snooping in whatever you’re doing in the bedroom? I’m not sure.

BRIAN SANTO: And supposedly there are sensors in the bed? They’ll record your sleep metrics like how much you toss and turn? That sort of thing?

NITIN DAHAD: Yes. Actually, I walked through one of the booths, in The Sands, I think it was, and they had a whole smart home section, which actually was quite big. And I walked past a smart bed, and out of curiosity I just said to the person demoing it, Okay, so if I lie on there now, will it measure my vital signs? And she said, No. It’ll do the processing and then it might give it to you in the morning. So I wasn’t sure what the use of that was. And that was my point of saying, Is this really the best use of tech? Or what’s the point of it? But I guess it’s got to evolve.

BRIAN SANTO: How many things that don’t require electricity now are we going to end up plugging in?

JUNKO YOSHIDA: I know.

BRIAN SANTO: So over the past few years, we’ve seen company after company introduce things at CES for improving sleep or improving the quality of sleep. And they all cite statistics that sleep deficits are very common. And they’ve introduced headbands and helmets and nose plugs and all sorts of other devices aimed at helping you sleep better. Are you guys having that much trouble sleeping?

NITIN DAHAD: I do. But I don’t think I’ll be plugging into something to say, Put me to sleep. Although, actually going on the plane, I managed to find a piece of music that did actually help me sleep.

BRIAN SANTO: So maybe we’ve got an assignment for you next month, Nitin.

So we’ve all seen espresso makers, the ones that work with the pods. And those made sense, because if you want to make an espresso, you need a special machine anyway. And it’s a big process. But this year, one of those companies introduced a cocktail mixer.

JUNKO YOSHIDA: Ooooo! Really?

BRIAN SANTO: And as much as I was happy to get a cocktail, because I really needed a Moscow Mule at 4 PM on the floor of CES that day… How hard is it to pour a cocktail that you need another machine to do it for you?

NITIN DAHAD: I actually walked past that and I avoided it.

BRIAN SANTO: The other thing I walked past and avoided completely were the smart toilets. I mean, I don’t want to plug in my toilet. And frankly, I don’t want it to be that intelligent either!

NITIN DAHAD: Well, in Japan, when I went to Tokyo a few years ago, I did see some what you would call “smart toilets” maybe.

JUNKO YOSHIDA: It’s the issue of cleanliness. And the Japanese are meticulous about how clean it must be. So it’s not just about toilets, but how you clean yourself. Everything to do with personal hygiene. The Japanese are really meticulous about that.

NITIN DAHAD: I didn’t see the one in CES, but I’m guessing if they’re smart toilets, they’ll probably analyze your excretions to see sort of how well you’re doing.

BRIAN SANTO: I saw something like that last year. It had a camera, and it was AI-based, and it looked inside baby diapers and analyzed what had come out so you can presumably make better decisions about what goes in.

JUNKO YOSHIDA: I think what it really comes down to is, just because you can, doesn’t mean it needs to have that technology.

BRIAN SANTO: Exactly.

JUNKO YOSHIDA: Yeah. That’s really too prevalent at CES. It’s just a little too much.

BRIAN SANTO: Yeah, I don’t know that I need to plug in a toothbrush just so that it can give me teeth brushing metrics.

NITIN DAHAD: I did actually try that. I went to the Procter & Gamble booth and actually tried it. And for me, it was kind of a revelation, because the week before I actually did go to the dentist, and he told me off for not brushing properly. So what this actually did was allow me to figure out where I wasn’t brushing right. There’s a map on the phone showing where I was and wasn’t brushing, because it’s using accelerometers and position sensors to determine where in the mouth it is relative to your start point. So that’s how it measures where it’s going. Would I spend $200-plus on that just to tell me if I was doing it right? Or should I go to the dentist and get him to tell me off? I don’t know.

BRIAN SANTO: Well, yeah, you’ve convinced me.

JUNKO YOSHIDA: Sold!

BRIAN SANTO: For the right price, maybe an electric toothbrush is worth it. So any other observations from your peregrinations around CES?

JUNKO YOSHIDA: You know, Brian, you should talk about what you found, actually, because it sounds like essentially there’s a whole bunch of startups doing some interesting stuff. But a lot of times, do we need all these technologies crammed into The Sands? No offense to anybody. They are creative, but at the same time, there’s too much navel gazing going on. Too much information, in my opinion.

But there were all sorts of surprisingly non-consumerish technologies on display at CES, right, Brian?

BRIAN SANTO: Oh, yeah. I was amazed by this. Really great to see, but there’s no way that the IBM quantum computer can in any way, shape or form be considered consumer electronics. That said, for whatever reason it was at CES, it was cool to see IBM’s Q quantum computer in the grand lobby there.

JUNKO YOSHIDA: That central hall.

BRIAN SANTO: It looks like something out of a sci-fi movie. I mean, it’s in this glass-enclosed case…

JUNKO YOSHIDA: Beautiful.

BRIAN SANTO: It’s beautiful! It’s cryogenically cooled; it’s vacuum sealed; it’s this gold, gleaming, tiered mechanism with pipes and connections, and they’re all symmetrically shaped and placed. Yeah, it’s just this wicked cool looking device. As for what IBM’s quantum computer, Q, does: there are different ways to do quantum computing, but IBM’s Q takes a particular approach where they look at electron spin. The electron is the quantum, right? And the way it was explained to me is, you think of the electron as a globe, and you assign values to points on the globe. So it’s spinning on its axis. And the top of the axis, you assign that a value of one, and the bottom of the axis, you assign the value of zero. And every other point around the globe represents some other value. So you’ve got this almost infinite number of values between one and zero. You physically add these electrons into the Q, and each electron is considered a qubit, a quantum bit. And you cool them down progressively from physical layer to physical layer, from top to bottom. So the top layer in the machine is cooled to maybe a few kelvin. And as you drop from tier to tier to tier, you’re cooling it more and more, down to millikelvins. And at the end, what happens is that the electrons, which start out active, get less active as they get cooled down. And they physically collapse as they move down through these tiers. And they collapse toward specific values. And IBM and the researchers they’ve been working with have developed sophisticated algorithms that predict the behavior of these electrons as they drop through the layers well enough to exploit the physical process, using it as a kind of model they can map their algorithms onto. Frankly, the mathematics are way beyond me. But the upshot is that you can do these amazing things with the IBM Q computer.

The example that they’ve been using for a couple of years is a caffeine molecule. Now, on the scale of molecules, caffeine isn’t all that complex. But modeling a caffeine molecule is way more than most regular supercomputers can do quickly. And that’s what a quantum computer can do. It can do these amazingly sophisticated, highly complex calculations pretty rapidly.

Now that was known. But what was new was that IBM is building a quantum computing farm up in Poughkeepsie. They’ve got 15 of these machines there so far. And they’re providing access to them, as they’ve been providing access individually to each individual quantum computer. They’re still providing access. It’s sort of like quantum as a service. And one of the interesting things was, I asked the researcher there if you can gang these together. And I’m not even sure why. And he wasn’t even sure why you might want to gang them up. But they’re exploring ways to make that possible. He was explaining that there’s new research that allows them to take the quantum activity and switch it over to photonics, so that you end up with optical computing ganging the 15 Qs, or more as you go forward. It’s really fascinating stuff.

And then… Now for something completely different from a quantum computer. I wandered over to the John Deere booth.

JUNKO YOSHIDA: That’s a huge machine they have!

BRIAN SANTO: I know! It’s enormous!

JUNKO YOSHIDA: It’s a spray machine?

BRIAN SANTO: That’s it. So I’m mostly an urban kind of a guy. And walking into that John Deere booth, looking at all this farming equipment, I had to learn a whole new vocabulary. Like, What’s that big thing over there? And it’s like, Oh, that’s a combine.

JUNKO YOSHIDA: Let’s start from there, right?

BRIAN SANTO: Yeah. They had to train me from the basics just to talk to them about it. So what was there was a sprayer. And a sprayer is an enormous machine, roughly the size of a sanitation vehicle that picks up your garbage in cities. And it has on either side these two long booms, I’d guesstimate maybe 30 feet long. And set along these booms, at intervals of maybe eight or nine or ten inches, were these spray nozzles. And those are for applying fertilizer or herbicide. So they’ve had sprayers with cameras on the booms for a while now, but what they’re developing now, and they think they’re about two years away with this thing, is that they’ll have a camera at each nozzle, and each camera will have its own processor.

Now, John Deere is becoming a tech company. They identify themselves as a technology company now. And they feel their strength is developing algorithms. So what’s coming is an AI-based sprayer. And the idea is, they’ll be able to drive these sprayers through the fields, and at the speed they’re going, each camera on the boom will be hovering over any given plant for roughly a third of a second. And in that time, this AI-based camera will determine whether it’s hovering over a soybean plant or a weed, or a corn stalk or a weed. And it’ll have that third of a second to decide which it is and whether or not to hit it with an herbicide.

Now the value of this is that they figure they’ll be able to target herbicides or target fertilizers. With herbicides, they think that, instead of just spewing herbicides over the entire field, they might be able to cut the chemical application down by 90%. And with fertilizer, similarly, by being able to target the plants, they’ll be able to reduce the amount of chemicals overall that farmers have to use in the fields, and save money that way.
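As a rough illustration of the per-nozzle decision loop described above: each camera gets about a third of a second to classify the plant beneath it and decide whether its nozzle fires. The sketch below is hypothetical, not John Deere’s implementation; the classifier is a stand-in stub and the time-budget handling is an assumption.

# Hypothetical sketch of a per-nozzle spray decision with a 1/3-second budget.
# The classifier is a stub; a real system would run an onboard vision model.
import random
import time

TIME_BUDGET_S = 0.33  # roughly the dwell time over one plant at field speed

def classify_plant(image):
    """Stand-in for the onboard AI model; returns 'crop' or 'weed'."""
    return random.choice(["crop", "weed"])

def nozzle_should_fire(image):
    """Decide within the time budget whether to spray herbicide on this plant."""
    start = time.monotonic()
    label = classify_plant(image)
    if time.monotonic() - start > TIME_BUDGET_S:
        return False  # out of time: fail safe and do not spray a possible crop plant
    return label == "weed"

# Simulate one pass of a boom section over ten plants
decisions = [nozzle_should_fire(image=None) for _ in range(10)]
print("Sprayed", sum(decisions), "of", len(decisions), "plants")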

NITIN DAHAD: There’s one more thing about that, actually, because it’s not just about the reduction of the chemicals. When you look at the bigger picture: a growing world population, how do you feed them? From my travels, I’ve seen quite a lot of people figuring out precision agriculture. One of the startups I saw in Taiwan last year was doing exactly that. And they’ve got contracts in Europe to make food production more efficient, basically.

BRIAN SANTO: Right. Right. I sat in on a session about using technology to make people’s lives better. And the speakers included someone from the World Bank, which is investing in this sort of thing, someone from the government of Bangladesh, and another person representing the government of Colombia. And the idea is to use emerging technologies like AI, like 5G, to improve people’s lives, to find ways to do that. And it’s really inspiring.

JUNKO YOSHIDA: In theory, I guess.

NITIN DAHAD: Actually, no. There is practice now.

BRIAN SANTO: So the example they gave in Colombia: People in Colombia are slapping up homes, in some cases, using whatever materials are at hand, and the problem is that many of these homes are substandard. It’s questionable whether they can stand up in an earthquake, or if there’s a landslide, or even if there’s a hard rain. So the government’s goal is to spend money to help bring some of these homes up to standard. But the question is, which ones? They want to target their limited funds on homes that really do need to be brought up to par.

So what they’ve done is, they’re sending out vehicles, sort of like the vehicles you see that do the mapping for Google Maps; it’s a similar idea. And what they do is, they take video, they take imagery of all of the homes that they pass by, and then use that to build a map of the communities that they’ve gone through. And then they apply AI to evaluate each individual home, what materials were used in building it, and get a sense of what the condition of each home is. And by doing that, they can take those maps they’ve made and figure out where within those communities they should be targeting their investment money. That’s just one example.

Bangladesh, for instance, is dealing with climate change. It’s an exceptionally serious thing for Bangladesh, because if climate change is not checked relatively soon, roughly a third of the country could be underwater in a couple of decades. And as it is, natural disasters that used to come every 100 years or 500 years are coming like every year or every other year now. Floods, especially. So what they need is an early warning system. So they’re trying to use modern communication systems, and they’re trying to use AI for predicting where natural disasters might strike and how severe they might be. And that’s all for the purpose of setting up early warning systems.

So there are some really practical applications of all these high tech systems being employed right now, today.

NITIN DAHAD: The UK could possibly be underwater as well if it goes on. Some coastal areas.

BRIAN SANTO: Yeah. New York City, Los Angeles, Miami.

NITIN DAHAD: You were talking about the homes and analyzing the materials to make sure that they’re safe. That actually brings us back to some of the technologies we talk about in EE Times, because I’m seeing people using things like sound and the resonant frequencies of certain materials to identify them. And radar: Vayyar Imaging is doing a lot of that, where they’re getting the signatures of various things so that they can do fall detection and lots of other things. I don’t have any interest in them. That just came to mind.

It is about using all the different technologies we talk about in, say, things like autonomous vehicles, but they’re actually being used in lots of other areas as well.

BRIAN SANTO: I wrote a story a couple of years ago about the city of Pusan in South Korea. They’re on the Sea of Japan. They’re at the confluence of two major rivers. They’re up against some hills. So they’ve got flash floods, the potential for tidal waves, the potential for landslides. So what they’ve done is, they’ve installed sensors all over the city so that they can get early warning of natural disasters. They can warn their citizens about whatever natural disaster might be occurring. It’s things like that. They are modern applications of high technology. It’s incredibly useful, helpful, can actually save lives, and it’s invigorating and exciting to be able to report on all of these technological advances.

I encourage you to check out our reporting from CES, which includes articles from the entire EE Times staff, some great photojournalism from David Benjamin, and several podcasts. Peruse the web site at eetimes.com, or take a look at the handy-dandy list of links we have on our web page dedicated to this podcast.

Welcome, everyone, to the year 2020. And now let’s leave it. It’s time to enter the Wayback Machine to revisit some of the great moments in electronics history.

January 14th, 1914, saw the first product roll off the first industrial assembly line. It was, of course, a Model T automobile produced by the Ford Motor Company. The assembly line was one of the great innovations of the industrial age. A hundred years later, it’s one of the fundamental enablers of producing everything from computer chips to Pringles potato chips, which I deliberately mention because I only recently found out one of those odd pieces of trivia that I live for: one of the people who helped create the machine used to make Pringles was author Gene Wolfe, perhaps best known for his four-volume “Book of the New Sun” series.

And okay, we absolutely have to include this one. On January 11th in 2001, Dave Winer, then the CEO of Userland Software, was the first to demonstrate a specific tag for RSS feeds that would pass the URL address of a media file to an RSS aggregator. He called the tag “Enclosure.” He created it at the request of former MTV VJ Adam Curry, who was experimenting with what – at the time – was called audioblogging. Audioblogging is now commonly referred to as podcasting. And here we are.

This month, we’re also celebrating the birthday of HAL the computer, which in the film “2001” reported that it became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. His instructor was Mr. Langley, and Langley taught HAL to sing a song. If you’d like to hear it, you’ll have to watch the movie, because we don’t have the rights.

Finally, on January 13th in 1910, the first public radio broadcast was transmitted. The engineer behind it was Lee de Forest, whose Radio Telephone Company set up the transmission live from the Metropolitan Opera House in New York City. It was a performance of an opera called Cavalleria Rusticana, and it featured a tenor by the name of Enrico Caruso. The sound quality was said to have been miserable, but the broadcast radius was several hundred miles, reaching into Connecticut and heard by ships far out at sea. Here’s a separate recording of Caruso, captured also in 1910, singing an aria from that opera.

(ENRICO CARUSO)

BRIAN SANTO: That’s your Weekly Briefing for the week ending January 17th.

This podcast is produced by AspenCore Studio and was engineered by Taylor Marvin and Greg McRae at Coupe Studios. The segment producer was Kaitie Huss.

The transcript of this podcast can be found on EETimes.com. You can find a new episode every Friday on our web site, or via your favorite app for podcasts. I’m Brian Santo. See you next week.
