Published on Aug 15, 2016
InfiniBand is a powerful new architecture designed to support I/O connectivity for the Internet infrastructure. InfiniBand is supported by all the major OEM server vendors as a means to expand beyond existing bus architectures and create the next-generation I/O interconnect standard in servers.
For the first time, a high-volume, industry-standard I/O interconnect extends the role of traditional "in the box" buses. InfiniBand is unique in providing both an "in the box" backplane solution and an external interconnect, delivering "Bandwidth Out of the box"; thus it provides connectivity in a way previously reserved only for traditional networking interconnects. This unification of I/O and system-area networking requires a new architecture that supports the needs of these two previously separate domains.
Underlying this major I/O transition is InfiniBand's ability to support the Internet's requirement for RAS: reliability, availability, and serviceability. This white paper discusses the features and capabilities which demonstrate InfiniBand's superior abilities to support RAS relative to the legacy PCI bus and other proprietary switch fabric and I/O solutions. Further, it provides an overview of how the InfiniBand architecture supports a comprehensive silicon, software, and system solution.
The comprehensive nature of the architecture is illustrated by an overview of the major sections of the InfiniBand 1.0 specification. The scope of the 1.0 specification ranges from industry-standard electrical interfaces and mechanical connectors to well-defined software and management interfaces.
Amdahl's Law is one of the fundamental principles of computer science and basically states that efficient systems must provide a balance between CPU performance, memory bandwidth, and I/O performance. At odds with this is Moore's Law, which has accurately predicted that semiconductors double their performance roughly every 18 months.
Since I/O interconnects are governed by mechanical and electrical limitations more severe than the scaling capabilities of semiconductors, these two laws lead to an eventual imbalance that limits system performance. This would suggest that I/O interconnects need to change radically every few years in order to maintain system performance. In fact, there is another practical law that prevents I/O interconnects from changing frequently: if it isn't broken, don't fix it.
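The scaling mismatch can be made concrete with a quick calculation. The sketch below is illustrative only, assuming the 18-month doubling period stated above and a ten-year bus lifetime (a figure discussed later in this paper):

```python
# Illustrative sketch: compound growth implied by an 18-month doubling period.
def growth_factor(years, doubling_period_years=1.5):
    """Performance multiple accumulated after `years`,
    doubling once every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Over a typical ten-year bus lifetime, semiconductor performance grows
# roughly a hundredfold, while a bus that is frozen by its installed base
# may see only one or two speed upgrades in the same period.
print(f"10-year growth: ~{growth_factor(10):.0f}x")
```

Compare this hundredfold gain with the roughly eightfold bandwidth increase PCI achieved over the same kind of interval, and the eventual imbalance is clear.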
Bus architectures have a tremendous amount of inertia because they dictate the bus interface architecture of semiconductor devices. For this reason, successful bus architectures typically enjoy a dominant position for ten years or more.
The PCI bus was introduced to the standard PC architecture in the early 1990s and has maintained its dominance with only one major upgrade during that period: from 32-bit/33 MHz to 64-bit/66 MHz. The PCI-X initiative takes this one step further, to 133 MHz, and seemingly should provide the PCI architecture with a few more years of life. But there is a divergence between what personal computers and servers require.
Personal computers (PCs) are not pushing the bandwidth capabilities of PCI 64/66. PCI slots offer a great way for home or business users to purchase networking, video decode, advanced sound, or other cards and upgrade the capabilities of their PC. On the other hand, servers today often include clustering, networking (Gigabit Ethernet), and storage (Fibre Channel) cards in a single system, and together these push the 1 GB/s bandwidth limit of PCI-X.
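The bandwidth figures above follow directly from the bus parameters. A minimal sketch of the arithmetic (peak theoretical bandwidth of a parallel bus is data width in bits times clock rate, divided by 8 to convert to bytes):

```python
# Peak theoretical bandwidth of a parallel bus in MB/s:
# width (bits) * clock (MHz) / 8 bits-per-byte.
def bus_bandwidth_mb_s(width_bits, clock_mhz):
    """Peak bandwidth in MB/s for a shared parallel bus."""
    return width_bits * clock_mhz / 8

for name, width, clock in [
    ("PCI 32/33",    32,  33),   # original PCI
    ("PCI 64/66",    64,  66),   # the one major upgrade
    ("PCI-X 64/133", 64, 133),   # the PCI-X initiative
]:
    print(f"{name:13s} -> {bus_bandwidth_mb_s(width, clock):5.0f} MB/s")
```

PCI-X works out to 64 × 133 / 8 ≈ 1064 MB/s, the roughly 1 GB/s ceiling that a server combining Gigabit Ethernet, Fibre Channel, and clustering cards can saturate.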