Artificial Intelligence (AI) Powers a New Era of Intelligent Embedded Computing

Publish Date:
January 15, 2022

AI-based computing is enabling new levels of insight and safety advancements throughout the embedded computing industry. We’re seeing a sharp increase in the need for high-computation systems that operate in challenging environments, and it’s AI-based platforms that can handle the processing requirements behind object detection and tracking, video surveillance, target recognition and condition-based monitoring.

AI-based computing platforms provide optimized visualization capabilities, combining video and other vision sensors into one unified viewer application that can then be used for simultaneous localization and mapping (SLAM) in robotics.

This sets the stage for more intuitive applications, such as human pose estimation to train robots to follow trajectories, which can eventually be used in autonomous navigation systems, as well as facial feature extraction for automated visual interpretation, human face recognition and tracking. These capabilities are designed to enhance security and surveillance, motion capture and augmented reality (AR).

Operational Intelligence Across Complex Environments

Complex GPGPU inference computing at the edge is enabling this visual intelligence as well, powering high-resolution sensor systems, movement-tracking security systems, automatic target recognition, and threat location detection and prediction. Areas like machine condition-based monitoring and predictive maintenance, semi-autonomous driving and driver advisory systems also rely on the parallel processing architecture of GPGPUs.
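The pixel-wise operations behind tasks like movement tracking are inherently data-parallel, which is why they map so well onto GPGPU hardware. As a minimal illustration (not any vendor's actual pipeline), a simple frame-differencing motion detector applies the same arithmetic independently to every pixel, exactly the kind of work thousands of GPU cores can execute side by side:

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                threshold: int = 25) -> np.ndarray:
    """Return a boolean mask of pixels that changed between two
    grayscale frames. Each pixel is processed independently, so the
    same operation parallelizes naturally across GPU cores."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Two synthetic 8-bit grayscale frames with one bright moving region.
prev_f = np.zeros((480, 640), dtype=np.uint8)
curr_f = prev_f.copy()
curr_f[100:120, 200:240] = 200  # an object appears in a 20x40 region

mask = motion_mask(prev_f, curr_f)
print(mask.sum())  # 800 changed pixels (20 rows x 40 columns)
```

On an embedded GPGPU target, the same per-pixel logic would typically run as a CUDA kernel rather than through NumPy; the structure of the computation is what carries over.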

Much of the high-compute processing taking place within these critical embedded systems relies on NVIDIA compact supercomputers and their associated CUDA cores and deep learning SDKs used to develop data-driven applications. Traffic control, human-computer interaction and visual surveillance, as well as rapid deployment of AI-based perception processing, are all areas where data inputs can be turned into actionable intelligence.

Processing that Surpasses Convention

The NVIDIA Jetson AGX Xavier sets a new bar for compute density, energy efficiency and AI inferencing capabilities on edge devices. It is a quantum leap in intelligent machine processing, marrying the flexibility of an 8-core ARM processor with the sheer number-crunching performance of 512 NVIDIA CUDA cores and 64 Tensor cores.

With its industry-leading performance, power efficiency, integrated deep learning capabilities and rich I/O, Xavier enables emerging technologies with compute-intensive requirements. Elma’s new Jetsys-5320, for example, employs the Xavier module to meet the growing data processing needs of extremely rugged, mobile embedded computing applications. It easily handles data-intensive computation tasks and supports deep learning (DL) and machine learning (ML) operations in AI applications.

What’s Driving the Data Push

Speeds are increasing, leading board and backplane suppliers to produce new designs capable of 25 Gb/s per lane that support high-speed PCIe Gen 3 and Gen 4 designs. Sensors will also start to make use of 100 GbE to transfer data within and between chassis.

When a system is capable of running high-performance deep learning-based inference engines, it can reliably perform advanced data and video processing tasks such as object detection and image segmentation across multiple video streams captured through HD-SDI, Ethernet and USB 3.0 cameras, interfaced through high-speed circular connectors.
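The shape of such a multi-stream pipeline can be sketched in a few lines. The detector below is a hypothetical stand-in (a deployed system would call into an actual inference engine); the point is the per-stream capture, inference and filtering loop:

```python
from dataclasses import dataclass
from typing import Callable, Iterable
import numpy as np

@dataclass
class Detection:
    label: str
    confidence: float

def run_streams(streams: dict[str, Iterable[np.ndarray]],
                detect: Callable[[np.ndarray], list[Detection]],
                min_conf: float = 0.5) -> dict[str, list[Detection]]:
    """Pull frames from each camera stream, run the detector on every
    frame, and keep only confident detections per stream."""
    results: dict[str, list[Detection]] = {}
    for name, frames in streams.items():
        kept: list[Detection] = []
        for frame in frames:
            kept += [d for d in detect(frame) if d.confidence >= min_conf]
        results[name] = kept
    return results

# Stand-in detector: scores a frame by its mean brightness.
def toy_detector(frame: np.ndarray) -> list[Detection]:
    score = float(frame.mean()) / 255.0
    return [Detection("object", score)] if score > 0.0 else []

bright = np.full((8, 8), 255, dtype=np.uint8)
dark = np.zeros((8, 8), dtype=np.uint8)
out = run_streams({"cam0": [bright, dark]}, toy_detector)
print(len(out["cam0"]))  # 1: only the bright frame yields a confident hit
```

In practice, `detect` would wrap a GPU-accelerated model and each stream would be serviced concurrently, but the capture-infer-filter structure stays the same.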

Newer software environments will lead to interchangeable accelerators and GPGPUs among suppliers. In open-standards-based environments like The Open Group’s Sensor Open Systems Architecture™ (SOSA) initiative, the high-bandwidth local connections required between SBCs and GPGPUs, where two plug-in cards (PICs) may form one SOSA module, may need to be scaled to meet growing data needs.

Rugged AI for Tomorrow’s Military Advantage

Today’s rugged embedded systems designers need mission-critical small form factor (SFF) autonomy with server-class AI processing that can deploy to remote locations and overcome challenging connectivity. These systems require real-time responsiveness, minimal latency and low power consumption. Advanced AI systems that facilitate data processing from the edge to the cloud redefine the possibilities for using rugged, compact technologies in autonomous, harsh and mobile environments.
