Back in January of this year, we acquired a company named Virtensys, along with a technology that enables I/O virtualization (IOV) through PCIe sharing. This has been an exciting acquisition for us because of the opportunities that the technology presents to take full advantage of SSD performance in a shared enterprise environment. So for the next couple of posts, I’m going to talk about why this technology is so beneficial to a data center, including an overview of just how PCIe sharing works with this technology.
In a typical data center, each server has dedicated I/O cards such as Ethernet cards and Fibre Channel HBAs that are connected to switches through physical cables. I/O bandwidth is sized according to the peak bandwidth required—even when that peak is needed for only a brief period each day, or even each week.
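To see why peak-based sizing wastes capacity, here is a back-of-the-envelope sketch. All of the numbers (server count, per-server peak and average demand, how many servers peak at once) are hypothetical assumptions chosen for illustration, not measurements:

```python
# Toy model (hypothetical numbers): compare dedicated vs. shared I/O
# provisioning for a rack of servers whose peak demand is brief.

NUM_SERVERS = 16
PEAK_GBPS = 10.0   # per-server peak I/O demand (assumed)
AVG_GBPS = 1.5     # per-server average I/O demand (assumed)

# Dedicated model: every server gets cards sized for its own peak.
dedicated_capacity = NUM_SERVERS * PEAK_GBPS

# Shared model: because peaks rarely coincide, a pooled appliance can be
# sized for a few simultaneous peaks plus the average load of the rest
# (a simplifying assumption of this sketch).
SIMULTANEOUS_PEAKS = 4
shared_capacity = (SIMULTANEOUS_PEAKS * PEAK_GBPS
                   + (NUM_SERVERS - SIMULTANEOUS_PEAKS) * AVG_GBPS)

def utilization(capacity):
    """Average utilization of the provisioned I/O capacity."""
    return NUM_SERVERS * AVG_GBPS / capacity

print(f"dedicated: {dedicated_capacity:.0f} Gb/s provisioned, "
      f"{utilization(dedicated_capacity):.0%} average utilization")
print(f"shared:    {shared_capacity:.0f} Gb/s provisioned, "
      f"{utilization(shared_capacity):.0%} average utilization")
```

With these assumed numbers, pooling cuts provisioned capacity from 160 Gb/s to 58 Gb/s while still covering four concurrent peaks—the same statistical-multiplexing argument that motivates sharing I/O cards across servers.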
IOV moves the dedicated I/O cards from individual servers into an appliance that allows the cards to be shared by multiple servers. Servers are connected to the appliance through a single high-speed, low-latency PCIe link, reducing the amount of cabling by at least 50%.
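The cabling claim is easy to sanity-check with a small sketch. The per-server card mix below is a hypothetical example, not a Virtensys specification:

```python
# Back-of-the-envelope cable count: with IOV, a single PCIe link per
# server replaces all of its dedicated I/O cables.

NUM_SERVERS = 16
CABLES_PER_SERVER = {"ethernet": 2, "fibre_channel": 2}  # assumed mix

before = NUM_SERVERS * sum(CABLES_PER_SERVER.values())  # dedicated cabling
after = NUM_SERVERS * 1                                 # one PCIe link each
reduction = 1 - after / before

print(f"{before} cables -> {after} links ({reduction:.0%} fewer)")
```

Any server with two or more dedicated I/O cables comes out at a 50% reduction or better, which is where the "at least 50%" figure comes from.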
In the end, IOV reduces power consumption and cooling costs, simplifies resource management, reduces cabling infrastructure, and improves utilization of resources in the data center. In many cases, IOV also eliminates the need to physically reconfigure I/O resources, which can now be managed using configuration software. The best part is that Micron’s implementation of IOV is fully transparent to the server, so it doesn’t require any changes to server drivers or management tools.
Check out our new I/O Virtualization Innovations page to learn more about the benefits of IOV.
Next up—I’ll talk more about how you can use PCIe sharing to take full advantage of SSD performance.