
Enable a digital twin with the correct memory strategy

Wil Florentino | March 2023

IDC predicts that from 2021 to 2027, the share of new physical assets and processes modelled as digital twins will increase from 5% to 60%.¹ Although the concept of digitalizing key elements of an asset’s behavior is not entirely new, several aspects of the technology, from precise sensing to real-time compute to better extraction of insights from large amounts of data, are now aligning to make machines and systems of operations more optimized and to accelerate scale and time to market. In addition, enabling AI/ML (artificial intelligence/machine learning) models will help improve process efficiencies, reduce product errors and deliver excellent overall equipment effectiveness (OEE).

Once we understand the challenges and the complexity of these requirements, we will begin to realize how important memory and storage are for enabling a digital twin.

Extracting the right data is the first challenge

Designing a digital twin is not just the isolated sensing of physical characteristics; it is also the ability to model the interactions between external and internal subsystems. For example, sensing the harmonic profile of a generator’s vibration should also yield insight into how that signature correlates with the physics of the motor, bearings and belts, and with how those parts interact. If one truly wants to build a ‘digital twin’ of a machine, simply installing sensors all around it without any sense of value interdependence will not produce an accurate ‘twin’.
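As an illustration, the kind of correlation described above usually starts with a spectral decomposition of the raw sensor stream. The minimal Python/NumPy sketch below extracts the amplitude at the first few shaft-speed harmonics of a synthetic vibration signal; the sample rate, shaft speed and fault tone are illustrative assumptions, not values from any particular machine.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the article):
# a vibration sensor sampled at 10 kHz on a motor spinning at 1,800 RPM.
SAMPLE_RATE_HZ = 10_000
SHAFT_HZ = 1_800 / 60          # 30 Hz shaft rotation frequency

def harmonic_profile(samples: np.ndarray) -> dict[float, float]:
    """Return spectral amplitude at the first few shaft harmonics.

    A peak at 1x shaft speed often points to imbalance, 2x to
    misalignment; non-integer multiples can indicate bearing or
    belt defects — the ‘value interdependence’ a twin must capture.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE_HZ)
    profile = {}
    for k in (1, 2, 3, 4):
        target = k * SHAFT_HZ
        idx = int(np.argmin(np.abs(freqs - target)))
        profile[target] = float(spectrum[idx])
    return profile

# Synthetic signal: imbalance at 1x plus a weak 147 Hz bearing tone and noise.
t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE_HZ)
signal = np.sin(2 * np.pi * SHAFT_HZ * t) + 0.2 * np.sin(2 * np.pi * 147.0 * t)
signal += 0.05 * np.random.default_rng(0).standard_normal(t.size)

print(harmonic_profile(signal))
```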

Brownfield adoption complicates this further, since adding new sensors to a machine that is already operating is not simple. In fact, a first stab at a proof of concept is often a DIY or embedded board with the minimal interface needed to support sensor-to-cloud data conversion. It is one thing to add the connectivity piece, but quite another to do the actual modelling, where you need to store dynamic data and compare it against your trained model. Moreover, this approach is certainly not the most scalable solution, considering the tens or hundreds of types of systems you may want to model.
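To make that concrete, a typical sensor-to-cloud proof of concept is little more than the sketch below: read a value, wrap it with a timestamp and publish it upstream over MQTT. The broker address, topic and read_sensor stub are hypothetical placeholders; the point is that none of the actual twin modelling happens on the board.

```python
import json
import time

import paho.mqtt.client as mqtt  # common choice for sensor-to-cloud PoCs

BROKER = "broker.example.com"              # hypothetical cloud endpoint
TOPIC = "plant/line1/generator/vibration"  # hypothetical topic

def read_sensor() -> float:
    """Placeholder for an ADC or fieldbus read on the retrofit board."""
    return 0.42  # stand-in RMS vibration value

# paho-mqtt 1.x style constructor; v2 additionally takes a CallbackAPIVersion.
client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

# Push raw readings upstream; storing dynamic data and comparing it to a
# trained model still has to happen elsewhere — the scalability gap noted above.
while True:
    payload = json.dumps({"ts": time.time(), "rms_g": read_sensor()})
    client.publish(TOPIC, payload, qos=1)
    time.sleep(1.0)
```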

Compute will continuously evolve

New processor architectures with built-in CNN (convolutional neural network) accelerators are a good first step toward faster inference compute. These devices not only ingest analogue signals but also process and filter them in-device, stripping out noise and passing along the values that are relevant to the model. They are well tailored for intelligent endpoints with parallel compute ranging from the GFLOPS (billions of floating-point operations per second) range up to roughly 20 TOPS (tera operations per second).

Lower-cost, low-power GPUs are also critical: as hardware-based ML compute engines they are inherently more agile and offer the compute power for higher OPS (operations per second). The industry is deploying edge-purposed GPUs of less than 100 TOPS alongside infrastructure-class GPUs of more than 200 TOPS.

Low power DRAM memory is ideal for AI accelerated solutions

As you can imagine, the memory interface depends on the architecture: multi-core general-purpose CPUs with accelerators may require a memory width of x16 or x32 bits, while higher-end GPUs can require up to a x256-bit-wide IO.

The direct concern is that if you are moving gigabytes of data to or from external memory for computation, you will need more bus-width performance from the memory; the memory interface performance required scales directly with the INT8 TOPS target.
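As a rough back-of-envelope (the numbers here are assumptions, not a vendor table): if an accelerator sustains a given INT8 TOPS rate and each operation generates some average amount of external-DRAM traffic after on-chip reuse, the required memory bandwidth follows directly.

```python
# Back-of-envelope sketch (assumed numbers, not a vendor table):
# external memory bandwidth needed to sustain a given INT8 TOPS rate.

def required_bandwidth_gb_s(tops: float, bytes_per_op: float) -> float:
    """GB/s of DRAM traffic for a sustained INT8 ops rate.

    bytes_per_op is the average external-DRAM traffic per operation
    after on-chip reuse; 0.001 (about 1,000 ops per byte of DRAM
    traffic) is a plausible order of magnitude for well-cached
    CNN inference.
    """
    return tops * 1e12 * bytes_per_op / 1e9

for tops in (4, 20, 100, 200):
    gb_s = required_bandwidth_gb_s(tops, bytes_per_op=0.001)
    print(f"{tops:>4} INT8 TOPS -> ~{gb_s:.0f} GB/s of memory bandwidth")
```

Under these assumptions, an endpoint-class 20 TOPS device already wants tens of GB/s from its external memory, and edge GPUs in the 100–200 TOPS class want several times that.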

Memory is keeping up with AI-accelerated solutions by evolving new standards. For example, LPDDR4/x (low-power DDR4 DRAM) and LPDDR5/x (low-power DDR5 DRAM) solutions offer significant performance improvements over prior technologies.

 

[Infographic: data rate and power consumption comparisons for AI-accelerated memory solutions]

LPDDR4 can run at up to 4.2 Gbps per pin and supports bus widths up to x64. LPDDR5 offers a 50% performance increase over LPDDR4, and LPDDR5X doubles LPDDR4 performance to as much as 8.5 Gbps per pin. In addition, LPDDR5 offers 20% better power efficiency than LPDDR4X. These are significant developments that improve overall performance and keep pace with the latest processor technologies.
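Those per-pin rates translate into system bandwidth through the bus width. A quick worked calculation, using the data rates above and common bus-width configurations (the widths chosen here are illustrative):

```python
# Peak theoretical bandwidth = per-pin data rate x bus width / 8 bits per byte.
# Per-pin rates are the ones cited above; bus widths are common configurations.

def peak_bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

for name, rate in (("LPDDR4", 4.2), ("LPDDR5X", 8.5)):
    for width in (16, 32, 64):
        print(f"{name} x{width:<2}: {peak_bandwidth_gb_s(rate, width):5.1f} GB/s")
```

At a x64 width that works out to roughly 34 GB/s for LPDDR4 versus 68 GB/s for LPDDR5X, which is the kind of headroom the bandwidth estimate above suggests a 20-plus TOPS accelerator will consume.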

Embedded storage follows machine learning complexity

It is not enough to think of compute resources as limited only by the raw TOPS of the processing unit or the bandwidth of the memory architecture. As machine learning models become more sophisticated, the number of parameters in a model is also expanding exponentially.²

As machine learning models and datasets expand to achieve better model accuracy, higher-performing embedded storage is needed as well. Typical managed NAND solutions such as eMMC 5.1, at 3.2 Gb/s, are ideal not only for code bring-up but also for remote data storage. Newer technologies such as UFS run roughly 7x faster, at up to 23.2 Gb/s, to accommodate more complex models.
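To see what those interface rates mean in practice, consider the time to stage a model image from embedded storage into DRAM. The model sizes below are illustrative assumptions, and real devices add controller and flash-translation overhead on top of the raw link rate:

```python
# Ideal time to stage a model from embedded storage into DRAM,
# using the interface rates cited above (Gb/s = gigabits per second).

EMMC_5_1_GBPS = 3.2    # eMMC 5.1 interface rate from the article
UFS_GBPS = 23.2        # UFS interface rate from the article

def load_seconds(model_mb: float, link_gbps: float) -> float:
    """Ideal transfer time; real devices add controller/FTL overhead."""
    return (model_mb * 8) / (link_gbps * 1000)

for size_mb in (50, 500, 2000):  # illustrative model/dataset sizes
    emmc = load_seconds(size_mb, EMMC_5_1_GBPS)
    ufs = load_seconds(size_mb, UFS_GBPS)
    print(f"{size_mb:>5} MB: eMMC {emmc:5.2f} s vs UFS {ufs:5.2f} s")
```

A 2 GB model image that takes about five seconds to load over eMMC 5.1 moves in well under a second over UFS, which matters when models are reloaded or updated frequently at the edge.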

These embedded storage technologies are also part of the machine learning resource chain.

Enable a digital twin with the right memory

The industry knows that edge endpoints and devices will generate terabytes of data, not just because of the fidelity of that data, but because ingesting it improves the digital models, which is exactly what a digital twin needs.

In addition, code will need to scale, not just for managing data streams but also for the infrastructure of edge compute platforms and for adding XaaS (everything as a service) business models.

Digital twin technology has great potential. But if you build a ‘twin’ by modelling just the ‘nose’ or one ‘eye’ of a face, it will be hard to recognize the twin without the full image of the face. So, the next time you talk about a digital twin, know that there are many considerations, including what to monitor and how much compute memory and data storage it will need. Micron, as a leader in industrial memory solutions, offers a broad range of embedded memory, including our 1-alpha technology-based LPDDR4/x and LPDDR5/x solutions for fast AI compute, and our 176-layer NAND technology embedded into our eMMC- and UFS-enabled storage solutions. These memory and storage technologies will be key to meeting your computational requirements.

1. IDC FutureScape, 2021

2. “Parameter Counts in Machine Learning” (Towards Data Science), 2021

Wil Florentino

Sr. Segment Marketing Manager

Wil Florentino is a Sr. Segment Marketing Manager for the Industrial Business Unit at Micron Technology. His role includes providing market intelligence and subject matter expertise in industrial segments such as IIoT and industrial edge computing in support of new product roadmap memory solutions. Mr. Florentino has over 20 years of experience in embedded semiconductor technologies, spanning SoCs, FPGAs, microcontrollers and memory, primarily focused on industrial applications.