Traditionally, advanced embedded systems have focused on CPU and memory speeds, which have outpaced improvements in storage speed. Conventional thinking therefore treats the storage device as the bottleneck in system I/O performance, driving demand for storage devices with faster I/O. Linux, a widely used embedded OS, also manages block devices such as e.MMC, UFS and SSD. At a high level, this paper examines how Linux handles I/O requests from user space and explores the parameters that affect access performance. The current Linux storage stack was designed around legacy HDDs, and some of its mechanisms benefit only those legacy devices. Our tests show that on new-generation, flash-based storage devices these mechanisms hurt I/O performance, and that software overhead has grown as a proportion of total system overhead.
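One of the HDD-era mechanisms mentioned above is the I/O scheduler, which Linux exposes per block device through sysfs. As a minimal sketch (assuming the standard `/sys/block/<dev>/queue/scheduler` layout, where the active scheduler appears in brackets), the currently selected scheduler for each device can be read like this:

```python
# Sketch: list each block device's active I/O scheduler via sysfs.
# Assumes the standard Linux layout /sys/block/<dev>/queue/scheduler,
# whose contents look like "mq-deadline kyber [none]" with the active
# scheduler in brackets.
import glob


def active_schedulers():
    """Map block device name -> its currently selected I/O scheduler."""
    result = {}
    for path in glob.glob("/sys/block/*/queue/scheduler"):
        dev = path.split("/")[3]          # e.g. "sda", "mmcblk0"
        with open(path) as f:
            line = f.read().strip()
        # The active entry is the bracketed one; single-entry queues
        # may show it without brackets.
        active = line.split("[")[1].split("]")[0] if "[" in line else line
        result[dev] = active
    return result


if __name__ == "__main__":
    for dev, sched in sorted(active_schedulers().items()):
        print(f"{dev}: {sched}")
```

Writing a scheduler name into the same sysfs file (as root) switches the policy, which is one way to compare HDD-oriented scheduling against `none` on flash devices.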
Higher-performance storage devices come at higher cost, yet as our white paper Linux® Storage System Analysis for e.MMC with Command Queuing shows, the available bandwidth is often consumed by the storage software rather than delivered to the end user. The study also points to a need to optimize the existing storage stack, including VFS optimizations, direct I/O, and I/O scheduler improvements, because the existing Linux I/O stack is an obstacle to realizing the full performance of flash-based, high-speed storage devices.
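Direct I/O, one of the optimizations named above, bypasses the kernel page cache so that data moves straight between the user buffer and the device. A minimal sketch, assuming Linux and an illustrative file name: `O_DIRECT` requires the buffer, file offset, and transfer size to be aligned to the device's logical block size (an anonymous `mmap` provides a page-aligned buffer), and some filesystems such as tmpfs reject the flag, so the sketch falls back to a buffered write in that case.

```python
# Sketch: a direct (uncached) write on Linux using O_DIRECT.
# PATH is illustrative; BLOCK assumes a 4096-byte logical block size.
import errno
import mmap
import os

BLOCK = 4096
PATH = "odirect_demo.bin"  # illustrative file name


def write_direct(path, data_buf):
    """Write data_buf with O_DIRECT, falling back to buffered I/O
    if the filesystem does not support direct I/O (EINVAL)."""
    flags = os.O_CREAT | os.O_TRUNC | os.O_WRONLY
    try:
        fd = os.open(path, flags | os.O_DIRECT, 0o644)
        try:
            return os.write(fd, data_buf)
        finally:
            os.close(fd)
    except OSError as e:
        if e.errno != errno.EINVAL:
            raise
    # Filesystem rejected O_DIRECT: plain buffered write instead.
    fd = os.open(path, flags, 0o644)
    try:
        return os.write(fd, data_buf)
    finally:
        os.close(fd)


# Anonymous mmap is page-aligned, satisfying O_DIRECT's alignment rule.
buf = mmap.mmap(-1, BLOCK)
buf.write(b"x" * BLOCK)

written = write_direct(PATH, buf)
os.unlink(PATH)
print(f"wrote {written} bytes")
```

Because the write skips the page cache, it avoids the copy and cache-management overhead that the study identifies as consuming bandwidth on fast flash devices, at the cost of losing read caching and write coalescing.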