Comparative I/O Analysis - InfiniBand compared with other I/O technologies

There has been much discussion about how InfiniBand™ complements, replaces, or compares with a variety of I/O standards. This white paper provides a basic understanding of each I/O technology, how each compares with InfiniBand, and its position in the market relative to InfiniBand. A matrix of the features incorporated in these technologies is included in this document.

InfiniBand is a point-to-point, high-speed, switched-fabric interconnect architecture that features built-in Quality of Service (QoS), fault tolerance, and scalability. The InfiniBand Architecture (IBA) Specification defines the interconnect (fabric) technology for linking processor nodes and I/O nodes to form a System Area Network that is independent of the host operating system and processor platform. InfiniBand defines 1X (2.5 Gb/s) links, 4X (10 Gb/s) links, and 12X (30 Gb/s) links, and it uses IPv6 as its native network layer. Mellanox™ Technologies has been shipping 10 Gb/s (4X link) InfiniBand-capable silicon since early 2001.
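These link rates scale linearly from the 2.5 Gb/s base lane rate. As a rough illustration (not part of the original paper), the short Python sketch below computes the raw signaling rate for each defined link width, along with the effective data rate under the original specification's 8b/10b line encoding; the constant and function names are illustrative assumptions, not anything defined by the IBA Specification.

    # Illustrative sketch: InfiniBand link bandwidth by link width.
    # Assumes the original 2.5 Gb/s per-lane signaling rate and the
    # spec's 8b/10b encoding (10 bits on the wire per 8 data bits).
    LANE_RATE_GBPS = 2.5          # per-lane signaling rate
    ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line encoding

    def link_bandwidth(lanes: int) -> tuple[float, float]:
        """Return (raw signaling rate, effective data rate) in Gb/s."""
        raw = lanes * LANE_RATE_GBPS
        return raw, raw * ENCODING_EFFICIENCY

    for width in (1, 4, 12):
        raw, data = link_bandwidth(width)
        print(f"{width:2d}X link: {raw:5.1f} Gb/s raw, {data:5.1f} Gb/s data")

Running this reproduces the 2.5, 10, and 30 Gb/s figures above, and shows the corresponding 2, 8, and 24 Gb/s data rates once encoding overhead is accounted for.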

This technology was created by the InfiniBand Trade Association (IBTA), which includes all of the major server vendors as well as Microsoft®, to provide a logical successor to the shared PCI bus in servers. While creating a new I/O architecture for servers, the IBTA determined it could also create a highly reliable fabric for data centers based on the same technology. Therefore, InfiniBand extends out of server motherboards over copper or fiber links as a new interconnect (called a "fabric") for data centers. InfiniBand is built from the ground up to deliver Reliability, Availability, and Serviceability (RAS) in Internet and enterprise data centers.

