
Texture Sensations on the First Haptic Smartphone Display

Bringing textures to life by changing the sensation of a surface as your finger slides over plastics, wood, and glass is the ambition of Hap2U, a France-based haptic technology startup.

At this year’s Consumer Electronics Show (CES), Hap2U is demonstrating what it claims is the world’s first haptic smartphone display, which allows users to feel and sense objects on touchscreens. Its Hap2phone technology was named Honoree of the 2020 CES Innovation Award.

Interactive and tangible
With the Hap2phone, Hap2U is targeting a global haptic component market that will be worth $4.8 billion by 2030, according to a recent report by IDTechEx. Haptic technologies have been used in products such as game console controllers for more than 30 years and can be found in the vast majority of smartphones, smartwatches, and electronic devices. Over the past five years, however, the research firm said it has observed a shift in the core haptic technology and “an even more significant shift in the direction of innovation efforts to develop the haptic technologies of the future.”

Founded in 2015, Hap2U has developed a technology based on the so-called ultrasonic lubrication principle. As explained in a company blog post, vibration at ultrasonic frequencies (above 20 kHz) generates a thin film of pressurized air between the finger and the screen, thus modifying the friction. Because the vibration can be controlled, so can the friction.

With a smartphone, users can sense vibrations when they receive a notification, receive a text message, or click on the glass surface; this is vibrotactile technology. Hap2U’s technology aims to enhance the overall emotional experience by enabling texture sensations. In a statement, Hap2U CEO Cédric Chappaz explained: “Think about users on their smartphone in noisy or harsh-lighting conditions — outdoors, for example — how touch then becomes a major feature to improve their experience. HD texture sensation is a crucial interface between the user and the outside world, introducing an added level of interaction compared to traditional screens.”

Hap2U uses piezoelectric actuators to generate ultrasonic vibrations on a glass screen and modify the friction of the user’s finger. The vibration is synchronized with the position of the finger, enabling the user to feel what appears on the screen. This thin-film piezoelectric solution (2 microns) has a minimal impact on weight (<1g) and on the display power consumption (1%).
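As an illustration of how such synchronization might work, here is a minimal sketch, assuming a simple linear relationship between drive amplitude and friction reduction. The constants and the texture map are invented for the example; Hap2U's actual drive scheme is not public.

```python
# Illustrative model of friction modulation via ultrasonic vibration.
# The constants and the linear friction model are assumptions for this
# sketch, not Hap2U's proprietary algorithm.

MU_BASE = 0.9        # nominal finger-on-glass friction coefficient (assumed)
K_REDUCTION = 0.7    # max fractional friction reduction at full amplitude (assumed)

def texture_amplitude(x_mm: float, texture: list) -> float:
    """Return the normalized drive amplitude (0..1) for a finger at x_mm,
    given a texture map of (start_mm, end_mm, amplitude) bands."""
    for start, end, amp in texture:
        if start <= x_mm < end:
            return amp
    return 0.0  # no texture under the finger: full friction

def effective_friction(amplitude: float) -> float:
    """Simple linear model: the squeeze film of air reduces friction
    in proportion to the ultrasonic amplitude."""
    return MU_BASE * (1.0 - K_REDUCTION * amplitude)

# A "ridged" texture: slippery bands alternating with plain glass.
ridges = [(0.0, 2.0, 1.0), (4.0, 6.0, 1.0), (8.0, 10.0, 1.0)]

for x in (1.0, 3.0, 5.0):
    a = texture_amplitude(x, ridges)
    print(f"x={x} mm  amplitude={a:.1f}  mu={effective_friction(a):.2f}")
```

In a real device this loop would run at the touch controller's sampling rate, so that the friction the finger feels tracks the on-screen texture in real time.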

By modulating the friction coefficient, Hap2U claims it can produce distinct variations in touch sensation — intense or soft nicks, springs, buttons, elasticity, and all kinds of high-to-low elevation points and textures. The nerve endings in the fingertips detect these differences, and the brain interprets them as distinct sensations.

Basically, Chappaz stated, “Hap2Phone offers the physical touch experience of what users see: If there is a fish on the screen, the user feels its scales; same for a pushbutton, a slider, [or] the wheel of a car in a video game.” For manufacturers integrating screens in their products, this solves the problems related to the digitization of objects by making them both interactive and tangible.

Glass, but not only
Initially focused on glass surfaces such as smartphone and tablet screens, Hap2U said it has been working on a multi-material haptic technology and is now deploying haptics on wood, plastic, and metal.

Hap2U’s technology is not solely intended for smartphones and could find applications in the IoT, industrial, automotive, and smart building markets.

After initial seed funding in 2016, Hap2U completed in late 2018 a €4 million Series A funding round with Daimler AG to accelerate the development of its haptic technology. Headquartered in Grenoble, Hap2U now employs 30 people and expects to double to 60 by the end of 2020.


AI Can Map the World for Disaster Preparedness

By Sally Ward-Foxton

LAS VEGAS — Intel has developed AI models to identify geographical features from satellite imagery for the creation of accurate, up-to-date maps. The company has been working closely with the Red Cross on its Missing Maps project, which aims to create maps for areas of the developing world to improve disaster preparedness. Many areas of the developing world do not have up-to-date maps, which means that aid organizations can struggle to work efficiently in the event of natural disasters or epidemics.

“As someone who’s been on the ground with the Red Cross, having access to accurate maps is extremely important in disaster planning and emergency response,” said Dale Kunce, co-founder of Missing Maps and CEO of American Red Cross Cascades Region. “But there are entire parts of the world that are unmapped, which makes planning and responding to disasters much more difficult. This is why we’re collaborating with Intel to use AI to map vulnerable areas and identify roads, bridges, buildings, and cities.”

“If you don’t know where all the roads are before a hurricane hits, after it hits, you have no idea where flooding has occurred or which roads are washed out and which aren’t,” said Alexei Bastidas, deep-learning data scientist at Intel AI Lab, in an Intel podcast on the subject. “If you don’t have an accurate enough map of what was there beforehand, it really prevents you from responding to the disaster as it’s ongoing. The other thing to consider is that a lot of these disasters … are weather events — cyclones, typhoons, hurricanes, even volcanic eruptions. These weather events can occlude the satellite sensor; they create clouds … It makes it extremely challenging for somebody like the Red Cross to respond to an event.”

At present, Missing Maps uses a team of volunteers to go through satellite images and identify roads, towns, bridges, and other infrastructure. The volunteers manually update an open-source map called Open Street Map, which is laborious and time-consuming.

Intel’s AI Lab, in collaboration with Mila and CrowdAI, developed an image-segmentation model and used it to identify unmapped bridges in Uganda from satellite pictures. Object-detection approaches were considered but set aside in favor of segmentation, which performed better. Bridges were selected as a trial feature because they are critical infrastructure and are particularly vulnerable to natural disasters such as floods. Seventy previously unmapped bridges were discovered by the system; the Ugandan National Society can use this data to better plan evacuation and aid-delivery routes.

Uganda Map
The system identified 70 bridges across Uganda that were previously unmapped by either Open Street Map or the Ugandan Bureau of Statistics. (Image: Intel)

Satellite imagery can be particularly challenging to work with. One difficulty, said Bastidas, is the lack of an obvious frame of reference for up and down. Also, images are not always taken from directly above, meaning the same feature may be seen from different angles. Differences in local terrain as well as styles of infrastructure and architecture make it hard to train models on labelled data from other parts of the world. Even in images from the same country, terrain may look very different in summer and winter, and features such as bridges show huge variation in size and style.

Intel’s training dataset therefore had to come exclusively from Uganda. In fact, a section of Northern Uganda was used, which includes multiple views of the same bridges to enable models to learn about seasonal and nadir-angle changes.

The models started by looking for waterway and highway features, and any point where a highway crossed a waterway was marked as a candidate point for a bridge. Candidate points within 30 m of a known bridge location were discarded. Bounding boxes were added around the remaining intersections, satellite images of the areas in the bounding boxes were pulled, and the models then interpreted the images to see whether they contained a bridge.
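The candidate-generation step described above can be sketched in a few lines of plain 2-D geometry on projected (metre) coordinates. A real pipeline would read OpenStreetMap geometries; the data, names, and box size below are invented for illustration, with only the 30-m filter taken from the article.

```python
# Sketch of candidate-bridge generation: intersect highway and waterway
# segments, then drop candidates near already-mapped bridges.
from math import hypot

def segment_intersection(p1, p2, p3, p4):
    """Return the intersection point of segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None  # parallel or degenerate
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def bridge_candidates(highways, waterways, known_bridges, min_dist_m=30.0):
    """Highway/waterway crossings farther than min_dist_m from any
    already-mapped bridge become candidate points for imagery review."""
    candidates = []
    for h in highways:
        for w in waterways:
            pt = segment_intersection(*h, *w)
            if pt is None:
                continue
            if all(hypot(pt[0] - bx, pt[1] - by) > min_dist_m
                   for bx, by in known_bridges):
                candidates.append(pt)
    return candidates

def bounding_box(pt, half_size_m=512.0):
    """Box around a candidate from which a satellite tile would be pulled."""
    x, y = pt
    return (x - half_size_m, y - half_size_m, x + half_size_m, y + half_size_m)

highways  = [((0, 0), (100, 100)), ((0, 200), (200, 200))]
waterways = [((0, 100), (100, 0)), ((100, 150), (100, 250))]
known     = [(100.0, 200.0)]  # the crossing at (100, 200) is already mapped

print(bridge_candidates(highways, waterways, known))  # [(50.0, 50.0)]
```

The segmentation model then only has to classify the tiles pulled from those bounding boxes, rather than scanning the whole country.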

The models ran on second-generation Intel Xeon Scalable processors (Cascade Lake) with DL Boost and nGraph. Bastidas said that these processors were chosen for their large memory capacity; satellite images are often 1,024 × 1,024 pixels, and it was desirable for the chip to process an entire image at once.

According to Bastidas, the next steps for the project may include the generation of models that can aid human mapping volunteers, perhaps predicting bridge locations but leaving the final decision to human eyes.

“We are also interested in trying to come up with ways to leverage existing open-source data to make models that are more robust, more generalizable, and can [work] with more tolerance for this geographically distinct area,” he said.

FCC’s Pai Favors Sharing Spectrum Pie

By David Benjamin

LAS VEGAS — With a bitter controversy over his successful efforts to undercut the Obama administration’s net-neutrality policy well in the past, the Federal Communications Commission (FCC) chairman made his first appearance at the Consumer Electronics Show (CES) as a kinder, gentler Ajit Pai.

Pai joked with CTA president Gary Shapiro about the Doomsday scenarios forecast by net-neutrality advocates — “I get messages that I destroyed the internet over the internet” — and focused on less prickly issues, including the implementation of 5G mobile networks and the still-unrealized FCC mission of “broadband for all.”

Indeed, with 5G on the horizon and many rural communities suffering from slow broadband, or none at all, Pai touched upon the various elements, including the availability of spectrum to giant wireless providers like Verizon and to smaller fixed-wireless broadband providers, the latter of which are starving for spectrum.

“Our goal is to remove spectrum as a constraint on innovation,” said Pai. He added that there now exists “a wide variety of use cases to share spectrum assets in ways that benefit consumers … to make sure that this resource is deployed to the benefit of the American people.”

Indeed, there are proposals in the works, favored by the Commerce Department’s National Telecommunications and Information Administration’s Office of Spectrum Management, in collaboration with its research laboratory, the Institute for Telecommunication Sciences, that would allow sharing, among smaller network providers, of the mid-range Citizens Broadband Radio Service (3.5 GHz) and C-Band (3.7–4.2 GHz).

While interviewing FCC chairman Ajit Pai, CTA chief Gary Shapiro made an inadvertent political prediction, saying about Pai’s boss, “It could be President Trump again.”

Pai was emphatic about the need to deliver more spectrum to rural communities that are still missing out on the earlier generations of connectivity — LTE and 4G. “We’ve got to think about more sharing models,” he said. Citing “massive MIMO” (multiple input/multiple output) technology, Pai added that the days of “zero-sum spectrum, if you’re using it, I can’t use it” must end.

However, the historical precedent is that the big networks are first to the trough. The question unasked by Shapiro was whether a libertarian FCC leadership under Pai that has been loath to interfere in the “free market” is well-suited to helping out some 200,000 “small cell” providers.

Asked by Shapiro about the deployment of 5G, Pai admitted that a host of obstacles stands in the way of installing it everywhere equally in short order. The first problem, he said, is finding a trained workforce large enough to string fiber on a vast forest of utility poles across the continent. “It’s hard work, outdoors, in all kinds of weather.”

He cited other practicalities such as the availability of materials like utility poles and copper. He noted the need to be more aggressive about rural broadband and to “get electric companies far more involved.”

Additionally, noted Shapiro, some communities, states, and cities object to the intrusion of 5G “aesthetically.” Pai came out firmly against “too many layers of government getting in the way”; in this case, he suggested, the feds should rule.

“The more disparate these regulations are, the fewer companies will be able to get into the market,” he warned.

Chairman Pai, who sported a pair of colorful “sushi socks” with his loafers, brought a certain measure of suspense to his appearance at CES. Until his chairmanship, a chat between Shapiro and the FCC chairman was an annual ritual, but after threats were issued against Pai during the net-neutrality battle, he begged off the trip to Las Vegas. This year, he agreed to attend the session after a two-year hiatus. Even so, the hundreds of convention-goers who arrived at the Las Vegas Convention Center meeting room were subject to stop-and-frisk bag searches and pat-downs before being allowed into the hall.

Shapiro began the session by saying, “He said he was coming a couple of times and he didn’t show up.” But then, smiling, Pai broke the tension and strode onto the stage.

T&M Solutions to Turn Automotive Visions into Reality

By Maurizio Di Paolo Emilio

LAS VEGAS — Keysight is supporting the automotive industry with its latest innovations, shown at the Consumer Electronics Show 2020. The technological transformation brought by IoT, 5G, and vehicle-to-everything (V2X) communications creates challenges that require sophisticated test and measurement solutions to maintain automotive safety in the era of autonomous driving.

Next-generation vehicles must support applications in multiple areas, such as infotainment, telematics, driver assistance, and autonomous driving, with maximum reliability, safety, and privacy.

V2X can be used in many different ways to improve road safety while leveraging the existing smart traffic infrastructure. 5G is a significant challenge for all players in the wireless market. It will take time for it to be fully deployed worldwide and will not be delivered in a single major release — with a significant network deployment program planned for 2020 and beyond.

Engineers are working on autonomous driving technologies to make our roads much safer. Their “weapons of choice” are accurate test and measurement solutions — essential tools to make sure their designs work perfectly. The technology that will enable vehicles to make the leap to standalone driving and V2X platforms (implementing solutions to leverage big data) will improve the driver experience and the transport system as a whole. The standard will also support next-generation infotainment systems, with over-the-air updates and multimedia downloads.

Keysight provides automotive designers and manufacturers with the latest innovations in design and test solutions to help create high-quality, high-performance products while mitigating safety risks with comprehensive solutions for e-mobility charging and interoperability testing, inverter efficiency, radar sensor technology, and safety.

“At CES, we are seeing automotive innovation orders of magnitude smaller than just a few years ago — smaller, more capable electronics with higher levels of integration,” said Jeff Harris, vice president of global marketing at Keysight. “At Keysight, we are excited to see how fast our customers innovate once they have the right design and test capabilities.”

Technologies are proliferating across the automotive market. In addition to sensor technology, connection technologies have also seen rapid development. It all started with elite vehicles, but more and more trucks and utility cars now use radar sensors, for example, mainly for increased safety and convenience. Keysight’s automotive radar R&D solution offers full coverage at 77/79 GHz, signal analysis and generation bandwidth of more than 5 GHz, and what the company calls the industry’s best displayed average noise level (DANL) performance.

The first aim is a reliable connection between vehicle and vehicle and between vehicle and infrastructure, and 5G is positioned to meet the market’s growing demands. The 5G Automotive Association (5GAA) consortium is working to define the standards that will govern this revolution.

Keysight is partnering with wireless regulators, leading companies, and universities around the world to enable the next generation of wireless communication systems. Its solutions allow engineers to test safety systems across various traffic scenarios, exercising the vehicle according to the type of driving, the occupant, and the operating environment. Keysight’s 5G network emulation solutions enable the device ecosystem to simplify workflows, share information, and accelerate time to market.

The data rate of in-vehicle systems continues to increase, and the signal integrity of interconnections dramatically affects system performance. Analyses of interconnection performance in both time and frequency domains are therefore critical to ensure a reliable system.

Airline Safety Gets Short Shrift at CES

By George Leopold

Conspicuously absent from a CES presentation on the future of aviation technology was the burning question of airline safety in the aftermath of two deadly Boeing 737 Max crashes.

Conspicuously absent also during a keynote address by Delta Air Lines CEO Ed Bastian was any reference to Boeing. Bastian didn’t have to mention the embattled aircraft manufacturer since his fleet includes no 737 Max aircraft grounded in the aftermath of the second 737 Max crash in March 2019 that killed 157 passengers and crew.

Instead, Bastian touted the range, gadgets, and other comforts of Delta’s new fleet of Airbus A330-900neo aircraft, the very plane that prompted Boeing to rush the production and rollout of its flawed 737 Max.

Delta is also the North American launch partner for the Airbus A220, a narrow-body passenger jet designed to feel like a wide-body jet—which presumably means you won’t bang your head on the overhead bins when boarding a redesigned version of a regional jet.

Delta, a forward-looking carrier with a good on-time record, picked the right horse in the race to deliver longer-range aircraft that burn less aviation fuel, among an airline’s biggest costs. Delta is listed among Airbus’ top customers for the A330neo.

Delta CEO Ed Bastian

While there were passing references to delivering passengers “quickly and safely” to their destination—the Delta CEO certainly did not intend to downplay aviation safety—the focus on flying as another customer experience is a bit strained. Not surprising, we suppose, during a consumer electronics show.

Sure, free in-flight Wi-Fi, biometric screening, “parallel reality” display screens, pet care “pods” and the rest of Delta’s “applied innovation” are nice upgrades. So too is the fact that the airline has mostly resisted the economic imperative of cramming as many passengers as possible into its planes.

But the anxious flying public is as interested in safety and reliability in the aftermath of the 737 Max crashes as it is in on-time arrivals and immersive flight experiences.

Bastian could have devoted at least a few sentences to the subject of airline safety in his roughly 90-minute keynote.

A few readers contend we are bashing Boeing, piling on when a great engineering company is fighting to save its once-sterling reputation. But we are unwilling to compromise on safety.

So, too, should Boeing.

AMD Targets Top End Content and Gaming With New Mobile and Desktop Processors

By Nitin Dahad

LAS VEGAS — AMD’s CEO, Lisa Su, was ebullient at CES 2020 as she announced what she said was the world’s highest-performance desktop processor and ultrathin laptop processors. The latest mobile processor family, the AMD Ryzen 4000 series, delivers up to 59% higher performance than its previous generation, and its desktop processor, the AMD Ryzen Threadripper 3990X, is the first with 64 cores.

The new AMD Ryzen 4000 Series is the first x86 eight-core ultrathin laptop mobile processor family, built on the Zen 2 core architecture with 7-nm process technology and optimized high-performance Radeon graphics in a system-on-chip (SoC) design. As the third generation of AMD Ryzen mobile processors, it provides significant performance improvements, design enhancements, and power efficiency for ultrathin and gaming laptops. AMD also announced the AMD Athlon 3000 Series mobile processor family powered by its Zen architecture, enabling modern computing experiences with real performance for a wider range of laptop users.

Consumers will be able to purchase the first AMD Ryzen 4000 Series- and Athlon 3000 Series-powered laptops from Acer, Asus, Dell, HP, Lenovo, and others starting in Q1 2020, with more systems expected to launch throughout 2020 with global OEM partners.

For high-end desktops, AMD also announced the 64-core, 128-thread AMD Ryzen Threadripper 3990X, which will be available globally from Feb. 7, 2020. Purpose-built to enable extreme performance for 3D, visual effects, and video professionals, the 3990X delivers up to 51% faster rendering performance than the AMD Ryzen Threadripper 3970X processor.

AMD Ryzen

Su emphasized that gamers and creators helped AMD push the envelope for more performance in both laptops and desktops, as they always want more out of their system. In that respect, she said that 2020 was going to be an even bigger year in terms of being able to deliver the best to gamers and creators. “We are introducing the best laptop processor ever built. This is disruptive performance, since we wanted to be above the historical curve in terms of performance improvement.”

She added that the new Ryzen 4000 series was twice as power-efficient as its previous generation, a result of gains from the 7-nm process as well as design and architecture work.

Featuring up to eight cores and 16 threads, the AMD Ryzen 4000 U-Series mobile processors provide “incredible” responsiveness and portability, delivering disruptive performance for ultrathin laptops with a configurable 15-W thermal design power (TDP). Additionally, AMD said that with more than 90 million laptop gamers and creators, the AMD Ryzen 4000 H-Series mobile processors will set the new standard for gaming and content creation with innovative, thin, and light laptops with a configurable 45-W TDP.

The new AMD Ryzen 7 4800U offers up to 4% greater single-thread performance and up to 90% faster multi-threaded performance than the competition, plus up to 18% faster graphics performance (benchmarked against an Intel Ice Lake processor). The H version, the AMD Ryzen 7 4800H, provides up to 5% greater single-threaded and up to 46% greater multi-threaded performance than the competition, plus up to 25% faster 4K video encoding using Adobe Premiere (again compared against an Intel Ice Lake processor).

Offloading the processor for even better performance
In addition, AMD detailed its SmartShift technology, which allows users to harness Ryzen 4000 mobile processors, Radeon graphics, and its latest AMD Radeon software Adrenalin 2020 edition, to advance computing experiences by efficiently optimizing performance as needed and taking gaming experiences to “new levels.” It does so by dynamically shifting power between the Ryzen processor and Radeon graphics, which it claims can seamlessly deliver up to 10% greater gaming performance and up to 12% more content-creation performance.
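AMD has not published how SmartShift allocates power, but the core idea of dynamically shifting a shared budget between two blocks can be illustrated with a simple proportional policy. The budget, floor, and split rule below are assumptions for the sketch, not AMD's algorithm.

```python
# Generic sketch of dynamic power shifting between a CPU and a GPU under
# a shared thermal budget. All constants and the policy are assumed.

TOTAL_BUDGET_W = 45.0   # shared package budget (assumed)
FLOOR_W = 10.0          # minimum power either block always keeps (assumed)

def split_budget(cpu_load: float, gpu_load: float):
    """Split the budget in proportion to instantaneous load (0..1),
    while guaranteeing each block its floor allocation."""
    shiftable = TOTAL_BUDGET_W - 2 * FLOOR_W
    total_load = cpu_load + gpu_load
    share = 0.5 if total_load == 0 else cpu_load / total_load
    return (FLOOR_W + shiftable * share,
            FLOOR_W + shiftable * (1 - share))

# A GPU-bound game: most of the shiftable budget flows to the GPU.
cpu_w, gpu_w = split_budget(cpu_load=0.2, gpu_load=0.8)
print(f"CPU {cpu_w:.1f} W, GPU {gpu_w:.1f} W")  # CPU 15.0 W, GPU 30.0 W
```

The point of such a scheme is that the sum never exceeds the thermal envelope, so whichever block is busiest gets headroom without redesigning the cooling.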

AMD Threadripper

64-core desktop processor
AMD also launched the AMD Ryzen Threadripper 3990X, its first 64-core desktop processor. Creators will be able to buy the processor from participating global retailers and system integrators, with on-shelf availability expected Feb. 7, 2020.

It features an “unprecedented” amount of single-socket compute performance in a desktop platform, which AMD said will make the processor the definitive solution for digital content creation professionals working with 3D animation, raytraced VFX, and 8K video codecs. It can deliver up to 51% greater performance than the Ryzen Threadripper 3970X in 3D ray tracing with the MAXON Cinema 4D renderer and a historic Cinebench R20 score of 25,399 points for a single processor.

Toyota Touts AI-Driven Dream City

By David Benjamin

LAS VEGAS — In announcing plans by Toyota Motor Corporation to build a new city in Japan, fueled entirely by renewable resources and operated by an intricate web of artificial intelligence (AI), company chairman Akio Toyoda told a first-day media audience at the Consumer Electronics Show (CES), “You must be thinking, ‘Has this guy lost his mind?’

“Are we looking at a Japanese version of Willy Wonka?”

Standing small against a towering screen that showed artist’s renderings of a futuristic city of 2,000 people, nestled in the shadow of Mount Fuji, Toyoda might have indeed resembled Willy Wonka, cloistered from the world in his shuttered chocolate factory. But what the Toyota chief did not say was that, in the movie, Willy Wonka was always a step ahead of everyone else.

By leapfrogging Toyota’s focus on AI into smart-city concepts, Toyoda was tacitly confirming the widespread consensus that AI in vehicles — the dream of a fully autonomous family sedan in the immediate future — is stuck in neutral.

Indeed, almost completely absent from Toyoda’s presentation was any mention of automobiles. Toyota concept cars and the Toyota Research Institute — the company’s pride and joy at CES in previous years — were off the agenda.

And the Woven City is going to be th-i-s big. (Image: EE Times)

Mounting safety concerns and technology issues have led to an industry-wide retreat from full autonomy. Instead, carmakers and their technology allies are touting refinements in advanced driver-assistance systems (ADAS) and the development of safety standards that might make autonomous vehicles (AVs) acceptable to a dubious public afraid of seeing their kids run over by Robbie the Robot.

Meanwhile, suggested chairman Toyoda on Monday, why not build George Jetson’s condo?

Toyoda introduced Bjarke Ingels, founder of the Bjarke Ingels Group (BIG), a Copenhagen-based architectural enterprise chosen to partner with Toyota in the creation of a future urban model called a “Woven City” by Toyoda and Ingels. Ingels explained that three forms of mobility would interweave in this city.

Ingels said that the Woven City would have some traditional streets, where pedestrians, bicyclists, cars, and other vehicles share space. But the dominant avenues would be an urban promenade, wide and exclusive to pedestrians, and a “linear park” designed for walking, strolling, picnicking, and — Ingels suggested — a greater level of social interaction than is common in most cities today.

Taking over from Ingels, Toyoda sketched a vision of high-rise “blocks” surrounded by greenery, each roofed with photovoltaic tiles to convert sunlight to energy. Underground, said Toyoda, there will be a hydrogen power plant to provide additional energy. Toyota, without getting much traction on the idea, has been a leader in promoting hydrogen as a fuel source.

Each unit in the Woven City’s residences, of course, would feature lots of robotics, “with people, buildings, and vehicles all connected.” The key indoor technology would be sensor-based AI that intuits the household’s needs before any human notices: stocking the fridge, adjusting the heat, collecting the trash, monitoring the baby, housebreaking the puppy, and polishing the doorknobs.

And every unit, he added, “will have a spectacular view of Mount Fuji.”

The streets, said Toyoda, will be filled with automated Toyota vehicles delivering goods; shuttling babies, nannies, and senior citizens; sweeping the streets; and facilitating more human contact — which, said Toyoda, must be the objective of AI in its practical applications.

However, anyone who hears about Akio Toyoda’s “personal field of dreams” and starts hoarding real estate in Kanagawa and Yamanashi prefectures near Mount Fuji will be jumping the gun. All these Woven City plans, cautioned Toyoda, will be worked out first in virtual reality.

He said Toyota is inviting like-minded companies and individuals “and anyone inspired to improve the way we live in the future” to help test the possibilities of the Woven City “in both the virtual and physical worlds.”

The Toyota chairman bowed out without providing a timetable for his dream.

Arduino Portenta for IoT Development

By Maurizio Di Paolo Emilio

Priced at $99.99 for its top-end version, the new Arduino Portenta H7 was announced at CES. The board is the first in a series of solutions for the industrial IoT. At its heart is the STMicroelectronics STM32H747 microcontroller, with a dual-core Cortex-M7 and Cortex-M4 on the chip, operating at 480 MHz and 240 MHz, respectively, over a temperature range of –40°C to 85°C.

Laurent Hanus, ecosystem marketing manager at STMicroelectronics, said that Arduino Portenta H7 reflects the exceptional performance of the STM32H747, also offering the usability of the new platform for cloud applications.

Arduino arrived in 2005, and the Italian technology par excellence has become one of the pillars of the maker movement. Much has changed in recent years: the collapse of hardware prices and the arrival of boards that run MicroPython and JavaScript have profoundly changed the open-hardware ecosystem. The form factor inherited from the Arduino Uno is still around and will surely remain in the minds of developers, but the newer Arduino boards use the more modern MKR form factor.

The Arduino MKR family was born to offer engineers and makers an extremely fast time to market for industrial applications. What sets the MKR boards apart from the others in the Arduino family, beyond the shared 67.64 × 25-mm form factor, is their integrated connectivity and their potential for any project involving the internet of things.

The fundamental step toward change came at Maker Faire Rome with the introduction of the Arduino Pro development environment, a definite step ahead of the classic Arduino IDE. The Arduino team has also made available a series of symbols for Altium Designer to reduce the time between prototyping and production.

Today, with the new Portenta H7 module, Arduino is preparing for a new maker market. The module runs Arduino code natively on top of the open-source Arm Mbed OS, providing enterprise-grade features while maintaining the familiar Arduino development environment. In addition, it can run Python and JavaScript code, making it accessible to a wider range of developers.

Portenta H7 can process video from a camera and display it through the USB-C connector’s DisplayPort output. Through the Cortex-M4, it can also handle low-power system tasks such as sensor acquisition and power management. In its complete configuration, Portenta H7 features 32 MB of SDRAM in addition to the processor’s 1 MB of RAM; 128 MB of flash in addition to the processor’s 2 MB; and Ethernet, high-speed USB, Wi-Fi, and Bluetooth 5.0 (Figure 1).

The wireless module can manage the protocols simultaneously. The Wi-Fi interface can be used as an access point, and Bluetooth supports Bluetooth Classic and BLE. The MKR form factor ensures scalability for a wide range of applications by merely updating the Portenta board to the one suitable for your needs.

Figure 1: Arduino Portenta (Source: Arduino)

“Portenta H7 is the perfect match for crossover applications where considerable computing power is required, but power constraints are very tight,” said Fabio Violante, CEO of Arduino. “Applications include machine learning/AI, motor control, IoT gateways, edge computing, human-machine interfaces, and more.”

The module is directly compatible with most Arduino libraries and can run TensorFlow Lite, JavaScript, MicroPython, Mbed OS, and, of course, Arduino code. The solution can thus perform real-time tasks without the need for a separate real-time operating system. The Cortex-M7 has more computational power than many Linux-capable processors yet consumes less than some other microcontrollers, while the M4 core can be used to reduce power consumption further and perform additional tasks without the complexity of multitasking.

“The scalability of the board allows, for high-volume applications, custom-tailoring the cost/feature balance, providing a solution to every need,” said Violante. “Last but not least, all these features are going to be available through the renowned Arduino simplicity.”

The new Portenta family has been designed to offer scalable processing with complex technologies while maintaining a small footprint. The high pin count allows designers to reduce the size of the final application while preserving robustness and signal integrity.

Piezo Haptic Feedback to Enhance Drivers’ Safety

By Anne-Françoise Pelé

From turning the steering wheel to pressing down the accelerator pedal, the driving experience is very tactile. Haptic technologies simulate the sense of touch by applying forces, vibrations, or motions to the driver, and they are increasingly used by the automotive industry to deliver a safer, more informed, and more intuitive user experience.

Boréas’ BOS1211

Boréas Technologies Inc. (Bromont, Canada) is rolling out what it claims is the first low-power, high-voltage piezoelectric driver IC to enable high-definition haptic feedback in automotive human-machine interfaces. The BOS1211 IC has been developed using Boréas’ patented CapDrive technology, a scalable high-voltage piezoelectric driver architecture that takes full advantage of the piezoelectric material. The approach contrasts with the more traditional haptic actuators: the eccentric rotating mass (ERM) motor and the linear resonant actuator (LRA).

“When you try to generate high voltage in consumer or in car applications, if you don’t have good electronics, you will basically burn all your energy to increase that voltage, and that will make the overall system inefficient even though the actuator itself is efficient,” Simon Chaput, Boréas’ founder and CEO, told EE Times. “We solved that problem, and created a driver that takes the low voltage and generates the required high voltage.” The BOS1211 has been designed to support TDK’s family of 120V PowerHap piezoelectric actuators. 

Preventing driver’s distraction
More than 52.8 million automotive touch panels will be on the market by 2020, according to IHS Markit Center Stack Display Production Forecast, and that number is increasing by 4.6 percent annually. Safety is one of the key drivers for adoption.
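
Taken together, the IHS Markit figures quoted above imply steady compound growth. A quick back-of-the-envelope projection (assuming the 4.6 percent annual rate simply continues, which is an extrapolation, not an IHS forecast):

```python
# Compound-growth projection from the figures cited in the text:
# 52.8 million automotive touch panels in 2020, +4.6 percent per year.
base_2020_millions = 52.8
annual_growth = 0.046

# units (in millions) per year, 2020 through 2025
units = {
    year: base_2020_millions * (1 + annual_growth) ** (year - 2020)
    for year in range(2020, 2026)
}
print(round(units[2025], 1))  # roughly 66 million panels by 2025
```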

In a car, said Chaput, touchscreens are more efficient than voice control for quick tasks. The main issue with touchscreens, however, “is the time to look at it is too long because we wait for confirmation.” Haptics lets the user treat the screen like a traditional button interface, which means “you can slide your finger across the screen, feel different textures until you reach the button, then press on the button area and feel the mechanical click that tells you that the button has been pressed, the command has been registered and whatever action you have ordered will take place in the next few seconds.” The whole idea is to keep the driver’s eyes on the road and hands on the wheel as much as possible.

“Haptic feedback in vehicles is becoming a lot more popular as the user experience continues to evolve inside of the vehicle,” commented Kyle Davis, technical research and data analyst for IHS Markit. “Haptic feedback is not just limited to a user experience though, as it can help keep the driver’s eyes on the road and reduce driver distraction.”

Within the automotive cabin, haptic technologies can be used to trigger different warnings to the driver without adding to the visual and auditory loads. The touchscreen is one application; the steering wheel is another, said Chaput. “You can think of the control buttons on the steering wheels, but also if you drive off your lane, the wheel will vibrate and tell you that you need to recover control of your car.” Speeding warnings can also be triggered in the pedal to alert drivers that they are exceeding the speed limit. The same goes for the seat and belt, where actuators could be used to improve drivers’ awareness of their surroundings.

Haptic technologies differ from audio and visual technologies. They are time-sensitive and require continuous bidirectional data sharing. “It is important that the haptic feedback arrives at the right time,” said Chaput. “If you are delayed by more than 10 or 20 milliseconds, it is going to feel wrong from a human perspective.” With its latest BOS1211 IC, Boréas claims good time resolution for touch sensing. “We use the piezo actuators as both the actuator and the sensor. Our chip has both the sensing and driving capability and can take the decision by itself.” As soon as pressure is sensed, the haptic feedback is automatically sent. There is no delay, and the feedback arrives at the right time, said Chaput.
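
Chaput's 10-20 ms rule of thumb can be framed as a simple latency budget: every stage between the press and the actuator drive must fit under the perceptual limit. The stage names and the individual latencies below are illustrative assumptions, not Boréas figures; only the 20 ms ceiling comes from the quote above.

```python
# Haptic latency-budget check. Feedback delayed beyond roughly
# 10-20 ms "is going to feel wrong"; use 20 ms as the hard ceiling.
PERCEPTUAL_LIMIT_MS = 20.0

def within_budget(stages_ms):
    """True if the summed pipeline latency stays under the limit."""
    return sum(stages_ms.values()) < PERCEPTUAL_LIMIT_MS

# Hypothetical stage latencies (milliseconds) for an on-chip
# sense-and-drive loop like the one the BOS1211 is described as having.
pipeline = {"press_sensing": 2.0, "decision": 1.0, "waveform_drive": 5.0}
print(within_budget(pipeline))  # 8 ms total fits the budget
```

A pipeline that routes the touch event through host software before driving the actuator would add stages to this dictionary, which is the delay the on-chip sensing described above is meant to avoid.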

Removing mechanical buttons. Really?
At last year’s CES, Boréas demonstrated a buttonless smartphone, replacing the mechanical volume and power buttons with two piezo actuators next to the frame. While customers tend to prefer buttonless smartphones and tablets, expectations may be different within the car cabin. “I think a common misconception is that haptic feedback will replace buttons and knobs inside the vehicle,” commented IHS Markit’s Kyle Davis. According to the IHS Markit User Experience Consumer Survey, “most new vehicle buyers are looking for a mix between a touchscreen and buttons, not one or the other.”

Enabling new human-machine interfaces in cars
In May 2019, Boréas and TDK announced their collaboration to accelerate the adoption of piezo haptic solutions in applications such as automotive displays and controls, wearables, smartphones, and tablets. At the time, Boréas’ first product, the BOS1901 power-efficient piezo driver IC, was seen as a good fit for TDK’s PowerHap actuators with an operating voltage of up to 60 V. The partners also planned to develop the first low-power piezo driver IC for the larger members of the PowerHap family, with maximum drive voltages of 120 V. That’s the BOS1211.

“We realized that together we had the best driver and the best actuator for haptics, but what our customers want is more a solution,” said Chaput. “We can move faster than if we were working on our own. In the long term, we can do roadmap alignment, meaning that we can make sure that the drivers we are designing and the piezo actuators TDK is developing are aligned together.” 

Simon Chaput, Boréas’ CEO and founder

Asked about the main differences between the BOS1901 and the BOS1211 ICs, Chaput said Boréas has improved the sensing interface to be more precise and autonomous. “On the BOS1901, the sensing is done by our chip but it requires some software, while on the BOS1211 most of the sensing is handled by the chip itself,” which makes it more autonomous. Also, the BOS1901 was created for smartwatches and phones, whereas the BOS1211 is made for large displays in cars. It meets the AEC-Q standard and is qualified for automotive applications.

Both the BOS1901 and BOS1211 ICs have been developed using Boréas’ patented CapDrive technology. Explaining what’s unique about it, Chaput first cited high-force haptic feedback: the ability to move large screens and deform stiff materials. A second benefit is that piezo actuators can be made in customizable form factors. “Typically, for LRA actuators, when you increase the size, it increases the dimension, and you end up with a big rectangular block.” Piezo actuators, he continued, “can be manufactured so that they fit well within the module, providing a better integration in the car.”

Boréas also claims the BOS1211 is the industry’s smallest haptic driver (4 x 4 mm QFN) for this level of haptic feedback strength and consumes “one tenth of the energy required by our competitors.” This low power solution, Chaput noted, enables car manufacturers to design haptics the way they want it to be without the risk of using too much power. 

Educating, sampling, scaling

On the electronics side, Chaput said Boréas is at a point where “we have a good solution that makes it viable on the market.” The next step consists of “working with our customers, because most customers have expertise into actuating ERM and LRA, and the piezo actuator is quite different.” Of course, Boréas keeps a close eye on the competition to see what’s coming down the line and to focus on the markets that, for a startup, hold the most promising outcomes. Texture-based haptics “is something we will look into in the future,” said Chaput. “We would need to find the right partner.”

Again, today’s priority “has to do with teaching our customers so that they can do the same kind of actuating they are doing with LRA today with piezo tomorrow.” 

The BOS1211 is now sampling to key customers; “we now have multiple projects with multiple large OEMs.” To accelerate design to production with the TDK 120-V PowerHap actuators, Boréas said it will offer a plug-and-play development kit for piezo haptic feedback in February 2020. Chaput said the company expects to launch production of the BOS1211 chip in 2021 and ramp volume production in 2022-2024.

Asked whose foundry is producing the BOS1901 and will produce the BOS1211, Chaput just said Boréas has “a very good partner.” 

Looking ahead
Chaput is confident haptics will touch most aspects of our lives, especially in the consumer and automotive spaces. “If you take one day, you use your hands quite often to provide good information on your surroundings.” But what happens when you shift to smartphones? “Most of that information has been digitized, and you can see it in full HD, 4K or even 8K.” You see, but not feel. “In the past twenty years, we have lost the touch feeling, because we are mostly interacting with glass that always feels the same. Haptics will bring back that touch information.”

Related content: Taking Touch Technology to the Next Level

New TI Processors Target ‘Practical’ ADAS

By Junko Yoshida

LAS VEGAS — Texas Instruments is introducing at the Consumer Electronics Show this week ADAS and gateway processors — TDA4VM and DRA829V — built on TI’s latest Jacinto platform and designed to enable mass-market ADAS vehicles.

This move underscores the decision by several leading car OEMs to scale back from an original commitment to pioneer fully autonomous vehicles.

In a recent interview with EE Times, Curt Moore, general manager and product line manager for Jacinto processors, acknowledged that TI, too, faced the dilemma of “where we want to invest our time” for its next-generation automotive processors. TI’s emphatic answer was to design auto-grade processors that can address “edge, safety, and security” but zero in on “semiconductor affordability and accessibility.”

“We wanted to develop automotive processors that are scalable and applicable to a wider set of vehicles, including low-cost and affordable cars for younger drivers and those with low income,” explained Moore.

ADAS and gateway processors

TDA4VM processors are for ADAS, while DRA829V processors are developed for gateway systems “meeting with all the plumbing requirements,” noted Moore. They include specialized on-chip accelerators, according to TI, to expedite data-intensive tasks.

Both TDA4VM and DRA829V processors also incorporate a functional safety microcontroller so that OEMs and Tier One suppliers can “support both ASIL-D safety-critical tasks and convenience features with one chip,” said TI.

Perhaps most importantly, both the ADAS and gateway processors share one software platform. Moore said, “Developers can use the same software base from high-end to low-end vehicles.”

Asked about TI’s two new processors, Phil Magney, founder and principal at VSI Labs, told EE Times, “I see them as great companions, as both are necessary to support the latest trends in software-defined architectures. Together, these processors can take care of the heavy processing requirements of automated driving.”

Magney explained, “The environmental modeling gets very processor-intensive when you consider all the inputs necessary to support the task in real time. Furthermore, you need the data capacities, timing, and synchronization of all the sensor data. On top of this, you need safety and security, which are built into these chips.”

The right level of autonomy?

With the new processors, TI hopes to enable the right level of autonomy in new vehicles.

Calling Level 4 and Level 5 cars “still in the development stage,” Moore pointed out “corner cases” that fully autonomous vehicles have yet to solve and “well-defined use cases” [and operational design domains] that must be spelled out for higher-level autonomous vehicles. Given these challenges to full autonomy, Moore said, “This will be a slow journey” from the current Level 2 and Level 2+ vehicles.

TI, however, isn’t swearing off higher-level ADAS functions. Indeed, TI’s TDA4VM is designed to achieve much better visibility at the speeds necessary for on-chip analytics.

Specifically, the TDA4VM supports high-resolution 8-megapixel (MP) cameras that see farther, possibly even in fog and rain. TDA4VM processors can also simultaneously operate four to six 3-MP cameras.
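
Some rough arithmetic shows why these camera configurations are demanding. The frame rate and bit depth below are assumptions for illustration (30 fps, 12-bit raw), not TI specifications; the megapixel counts come from the text.

```python
# Raw sensor-data rates for the camera setups mentioned in the text:
# one 8 MP camera versus six simultaneous 3 MP cameras.
def raw_gbits_per_s(megapixels, fps=30, bits_per_pixel=12):
    """Uncompressed sensor data rate in Gbit/s (assumed fps and depth)."""
    return megapixels * 1e6 * fps * bits_per_pixel / 1e9

front = raw_gbits_per_s(8)         # single high-resolution front camera
surround = 6 * raw_gbits_per_s(3)  # six-camera surround configuration
print(round(front, 2), round(surround, 2))  # Gbit/s for each setup
```

Even under these modest assumptions, the surround set produces more raw data than the single front camera, which is the kind of multi-stream load the on-chip accelerators are meant to absorb.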

Sameer Wasson, vice president and general manager of TI’s processor business unit, told EE Times that the new ADAS processors are also capable of fusing other sensors — including radar, LiDAR, and ultrasonic. “Our goal is to enable carmakers and Tier Ones to develop scalable but practical cars.”

TI’s new ADAS processor TDA4VM is not only highly integrated but also capable of fusing a variety of sensory data. (Source: TI)

Magney believes that the TDA4VM is scalable in the sense that it can “handle full 360° situational awareness for high-end ADAS or automated driving applications.” 

Beyond the ADAS processor’s ability to efficiently manage multilevel processing in real time, the key is that it can do the job within the system’s power budget. “Our new processors execute high-performance ADAS operations using just 5 to 20 W of power, eliminating the need for active cooling,” TI claimed.

Deep learning

TI also claimed that the latest Jacinto platform brings enhanced deep-learning capabilities. Noting that the platform offers full programmability, Moore said that if OEMs or Tier Ones plan to build their own vision/camera/sensor-fusion stacks, the SoC lets them run their own perception algorithms.

A few analysts, however, are frustrated with the scant details that TI has provided for its ADAS processors. “Now TI says the TDA4VM can handle deep learning, but they don’t disclose any specs or details, let alone its performance,” said Mike Demler, a senior analyst at The Linley Group. Asked how TDA4VM might fare against Intel/Mobileye’s EyeQ chips, he said, “Now TI mentions AEB [automatic emergency braking] and self-parking, which require at least [Mobileye’s] EyeQ3 capabilities. But again, how much performance? We don’t know.”

VSI Labs’s Magney also noted that it won’t be easy to compare TDA4VM with Mobileye’s EyeQ chips. He noted, “Mobileye’s tight integration of processor and algorithms makes them a strong incumbent in the field.” TI’s edge might be that “as the industry moves from ADAS to automated driving, OEMs will desire more freedom to develop their own algorithms.”

Software-defined car

TI, too, is addressing carmakers’ desire to enable over-the-air (OTA) updates, with the goal of making software-defined cars possible.

“OTA isn’t generally possible without giving architecture upgrades inside a car,” observed Moore. Given the criticality of secure connectivity necessary for software updates, “I don’t see car OEMs going for OTA without a gateway processor or with just a legacy dumb MCU,” he added.

To that end, Moore described TI’s DRA829V processor as offering carmakers “a huge step function in the beginning of their journey to secure OTA.”

TI noted that new gateway processors “manage higher volumes of data and support evolving requirements for autonomy and enhanced connectivity.”

TI claims that it is the first to integrate the PCIe and GbE/TSN into its gateway processor. (Source: TI)

TI also touted the DRA829V processor as “the first in the industry to incorporate a PCIe switch on-chip in addition to integrating an eight-port gigabit TSN-enabled Ethernet switch for faster high-performance computing functions and communications throughout the car.”

So how big a deal is it for TI to integrate the PCIe and GbE/TSN into its gateway processor DRA829V?

Demler said, “Looks like it has an eight-port switch, which is more than what’s offered by NXP’s recently announced S32G’s 2x switch.” But, he added, the DRA829V processors don’t exactly match up with NXP’s S32G, which was designed as a full-fledged network processor.

But on a higher level, both NXP and TI are addressing the same trends in automotive architecture, Magney summed up. “You have massive amounts of data to handle and you need the plumbing to support that.”

Availability

TI’s Moore noted that both TDA4VM and DRA829V samples have already been in the hands of a large number of customers since May.

According to TI, “Developers can get started immediately with Jacinto 7 processors development kits and buy the new TDA4VMXEVM and DRA829VXEVM evaluation modules on TI.com for $1,900 each.”

Pre-production TDA4VM and DRA829V processors are available now, only from TI, at $97 in 1,000-unit quantities. Volume production is expected in the second half of 2020.
