Virtualization Improves Efficiency of Legacy Military Embedded Systems
Originally Published in MILITARY EMBEDDED SYSTEMS
Virtualizing legacy embedded systems improves their performance, efficiency, and security, and helps meet size, weight, and power (SWaP) requirements for military aircraft and ground vehicles.
Benefits of Virtualization for Embedded Systems
Virtualization offers many benefits for embedded systems – especially legacy military ones. “Virtualization is an amazing technology,” says Chris Ciufo, chief technology officer for General Micro Systems. “While it’s not new, it’s only within the past five to 10 years that processors and the systems that run them have had enough performance and resources so that when you virtualize you still have enough processing capability left over to do other things.”
The overall appeal of virtualizing legacy systems is that “you can have essentially the same legacy system, which looks like it’s still running on the older system, processor, and environment, but it’s now running on a shiny new processor and system that can also be doing many other things,” Ciufo says. “This means that legacy systems can be kept alive a lot longer without many changes – saving the government, the contractor, the prime, etc. the cost of recertifying a system. It’s a tremendous benefit to the defense industry because it often costs more to recertify a system than to redesign it. Virtualization is a boon for modern defense systems when it can be used.”
Emerging Trends
Another emerging trend in virtualization is adding peripherals “that hadn’t previously been accessible to the microprocessor,” Ciufo says. “Early on, virtualization primarily relied on multicore processors to run multiple synthetic environments – typically one per core. Then Ethernet ports were virtualized so that eight Ethernet ports can be shared with four virtualized environments, which makes it look like 32 ports are available to the system.”
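The arithmetic behind the eight-port example can be sketched in a short toy model. This is purely illustrative – the class names are invented here, and real systems implement port virtualization with mechanisms such as SR-IOV virtual functions rather than anything this simple – but it shows how each guest environment getting its own view of every physical port multiplies the ports visible system-wide.

```python
# Toy model of Ethernet port virtualization: each guest environment is
# given its own virtual handle to every physical port, so N physical
# ports shared by M guests present N * M virtual ports system-wide.
# All names here are illustrative, not a real hypervisor API.

class PhysicalPort:
    def __init__(self, name):
        self.name = name

class VirtualPort:
    """A guest-visible handle that forwards to one physical port."""
    def __init__(self, guest, phys):
        self.guest = guest
        self.phys = phys
    def __repr__(self):
        return f"{self.guest}:{self.phys.name}"

def virtualize(ports, guests):
    # One virtual port per (guest, physical port) pair.
    return [VirtualPort(g, p) for g in guests for p in ports]

phys = [PhysicalPort(f"eth{i}") for i in range(8)]   # 8 physical ports
guests = [f"vm{i}" for i in range(4)]                # 4 virtual environments
vports = virtualize(phys, guests)
print(len(vports))  # 32 virtual ports, matching the 8-port / 4-guest example
```

The multiplication only describes visibility; the physical bandwidth is still shared among the guests.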
Ciufo says he is also noticing an effort to virtualize other processing resources, such as general-purpose computing on graphics processing units (GPGPUs) and other digital signal processing assets known as coprocessors – a category that includes algorithm processors, artificial intelligence processors, and vector processors. “These high-performance resources to the system do a lot of computational work,” he notes. “So the trend now is to virtualize algorithm processors like digital signal processors, GPGPU processors, and other compute resources that previously were dedicated to a processor but now have enough horsepower to also be virtualized and shared between synthetic environments.”
This is significant, according to Ciufo, because these coprocessing algorithm resources have traditionally been tied to only one part of the system. Virtualizing them will “require new software to be written, and virtual environment providers will need to describe within their software how they plan to deal with talking to data moving to and from those virtualized resources,” he says. “Since these are high-performance resources that work very quickly in terms of data throughput and movement, it also requires the virtualization companies that provide the software to rethink how they deal with their own passing of data in and out of the virtualized environment – including the interrupts that are required to deal with those resources. So it’s not a trivial task, but we’re definitely seeing a trend of using high-powered coprocessor compute resources and virtualizing them too.”
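The mediation problem Ciufo describes – moving data and completion notifications between guests and a shared coprocessor – can be sketched with a toy arbiter. Everything here is an assumption for illustration (the class, the round-robin policy, and the stand-in “DSP work” are invented); a real hypervisor would use DMA buffers and hardware interrupt routing rather than Python lists.

```python
# Toy sketch of sharing one coprocessor (e.g., a DSP) among guest
# environments: the "hypervisor" queues jobs from each guest, runs them
# on the single physical device, and routes each result (standing in for
# a completion interrupt) back to the guest that submitted it.
from collections import deque

class SharedCoprocessor:
    def __init__(self, guests):
        self.queues = {g: deque() for g in guests}   # per-guest submit queues
        self.completions = {g: [] for g in guests}   # per-guest completion events

    def submit(self, guest, data):
        self.queues[guest].append(data)

    def run(self):
        # Round-robin arbitration: drain one job per guest per pass so no
        # single guest can monopolize the device.
        while any(self.queues.values()):
            for guest, q in self.queues.items():
                if q:
                    job = q.popleft()
                    result = sum(job)   # stand-in for real coprocessor work
                    self.completions[guest].append(result)

copro = SharedCoprocessor(["vm0", "vm1"])
copro.submit("vm0", [1, 2, 3])
copro.submit("vm1", [10, 20])
copro.run()
print(copro.completions)  # {'vm0': [6], 'vm1': [30]}
```

The round-robin policy is one possible design choice; the point of the sketch is that the arbiter, not the guest, owns the data path and the delivery of results, which is exactly the rework Ciufo says virtualization vendors must take on.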