InfiniBand is a point-to-point, high-speed, switched-fabric interconnect architecture with built-in Quality of Service (QoS), fault tolerance, and scalability. The InfiniBand Architecture (IBA) Specification defines the fabric technology for interconnecting processor nodes and I/O nodes into a System Area Network that is independent of the host operating system and processor platform. InfiniBand defines 1X (2.5 Gb/s), 4X (10 Gb/s), and 12X (30 Gb/s) links, and its network layer is natively based on IPv6 addressing. Mellanox™ Technologies has been shipping 10 Gb/s (4X) InfiniBand-capable silicon since early 2001.
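The link widths above are simple lane multiples of the 2.5 Gb/s base (1X) signaling rate, which the following sketch illustrates (the function and constant names are illustrative, not from the IBA spec):

```python
# Per-lane signaling rate of the base (1X) InfiniBand link, in Gb/s.
LANE_RATE_GBPS = 2.5

def link_rate_gbps(width: int) -> float:
    """Aggregate signaling rate for an InfiniBand link of the given width."""
    return width * LANE_RATE_GBPS

# The three link widths defined by the IBA specification:
for w in (1, 4, 12):
    print(f"{w}X link: {link_rate_gbps(w)} Gb/s")
# 1X link: 2.5 Gb/s
# 4X link: 10.0 Gb/s
# 12X link: 30.0 Gb/s
```

Note these are raw signaling rates; usable data bandwidth is lower once link-level encoding overhead is accounted for.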
The technology was created by the InfiniBand Trade Association (IBTA), whose members include all of the major server vendors and Microsoft®, to provide a logical successor to the shared PCI bus in servers. While defining this new server I/O architecture, the IBTA determined that the same technology could also serve as a highly reliable fabric for data centers. InfiniBand therefore extends beyond the server motherboard over copper or fiber links as a new data-center interconnect, called a "fabric." It is built from the ground up for Reliability, Availability, and Serviceability (RAS) in Internet and enterprise data centers.