96-Port 10Gb/sec InfiniBand Switch Demonstrated by Mellanox

3/13/2003 - Mellanox™ Technologies Ltd., the leader in InfiniBand℠ silicon, announced it has enabled a high-port-count 10Gb/sec (or 4X) InfiniBand modular switch with robust management features and up to 96 ports. The design meets the High Performance Computing Cluster (HPCC) market's need for high-port-count switches, enabling full-bandwidth InfiniBand server clusters ranging from a modest 32 or 64 nodes to many thousands of nodes. The switch platform, based on Mellanox InfiniScale™ silicon, is being demonstrated at CeBIT in Hanover, Germany, from March 12-19 in the MEGware Computer exhibit, Hall 2, booth #A47.

"End customers developing HPCC have made it clear that the market requires high port count switches to meet their needs," said Eyal Waldman, Chairman and CEO Mellanox Technologies. "Our InfiniScale device family and the InfiniBand architecture is designed to scale to large cluster sizes and our demonstration of this switch validates that InfiniBand meets the needs of HPCC and other markets."

The modular design features a 7U chassis that accommodates up to 8 leaf boards, each with 12 10Gb/sec ports, arranged in a CBB (Constant Bisectional Bandwidth) or fat tree topology. This architecture is a common requirement in HPCC systems and enables full 10Gb/sec bandwidth between any two ports. The switch chassis includes hot-swappable and redundant power supplies, as well as InfiniBand port management and chassis management features such as fan monitoring and thermal sensing. Multiple switches may be configured into larger CBB topologies supporting 4,096 or more nodes in a single HPC clustered system. The inherent low latency of the InfiniBand architecture enables switching latencies up to 30 times lower than those of other 10Gb/sec switching technologies.
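As an illustration of this scaling (a standard fat tree calculation, not a configuration stated by Mellanox): in a two-stage fat tree built from 96-port switches, each edge switch dedicates 48 ports to nodes and 48 ports to the spine, so the fabric can connect up to 96 edge switches × 48 nodes each = 4,608 end nodes at full bisectional bandwidth, consistent with the 4,096-node figure cited here.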

"MPI Software Technology is recording over 820 MBytes/sec of point-to-point MPI-level communication bandwidth over our MPI/Pro product with world-class latencies," said Dr. Anthony Skjellum, CTO of MSTI. "We are extremely impressed with our InfiniBand benchmark results and with the new high port count switch that Mellanox is enabling, we are convinced it makes InfiniBand the future interconnect choice for HPCC."

InfiniBand products are delivering breakthrough performance by taking advantage of the Mellanox InfiniHost™ and InfiniScale devices, which provide up to eight times the bandwidth of Gigabit Ethernet and up to three times the bandwidth of existing proprietary HPCC interconnect technologies. For the first time, InfiniBand technology combines the bandwidth, latency, and performance capabilities demanded by the HPCC market with an open industry-standard architecture supporting multiple application protocols for enterprise, storage, embedded, and data center computing. InfiniBand offers superior clustering performance while delivering the economies of scale that only an industry standard can provide.
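For context, the eight-times figure presumably compares a 4X InfiniBand link's effective data rate of roughly 8 Gb/sec (10 Gb/sec signaling less 8b/10b encoding overhead) with the 1 Gb/sec of Gigabit Ethernet.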

Some of the features of this new switch design include:

High Availability
This switch platform offers a highly available architecture designed for optimal data reliability in an InfiniBand fabric. It features hot-swappable switch, management, and power supply modules, and includes redundant power and management capabilities.

High Performance Architecture
Based on the Mellanox InfiniScale device, the switch supports up to 96 10Gb/sec (4X) InfiniBand ports in a single chassis and features InfiniScale's low latency and the inherent Quality of Service (QoS) that InfiniBand links offer. Each port is capable of up to 20 Gb/sec of bidirectional bandwidth, yielding an unprecedented 1.92 Terabits/sec of aggregate switching bandwidth. The platform is based on an extensible architecture that will be supported by future generations of the InfiniScale family of InfiniBand switching devices.
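The aggregate figure follows directly from the port count: 96 ports × 20 Gb/sec per port (10 Gb/sec in each direction of a 4X link) = 1,920 Gb/sec, or 1.92 Terabits/sec.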

Modular, Bladed Architecture
This switch platform is another industry first, offering a port density of up to 96 ports in an 8-slot enclosure for optimal space utilization and flexibility. Its core backbone switch capabilities allow a two-stage fat tree design to scale beyond four thousand nodes, providing investment protection and a clear growth plan.

This new switch platform will be made available to end customers only through Mellanox's OEM partners. Availability to OEM partners begins this month, and partners are expected to announce end-user availability as soon as Q2 2003.

About InfiniBand Architecture
The InfiniBand Architecture is the only 10 Gb/sec, ultra-low-latency clustering, communication, storage, and embedded interconnect in the market today. InfiniBand, based on an industry standard, provides the most robust data center interconnect solution available, with reliability, availability, serviceability, and manageability features designed in from the ground up. These attributes greatly reduce total cost of ownership for the data center. Low-cost InfiniBand silicon that supports 10 Gb/sec RDMA transfers is shipping today, providing eight times the bandwidth of Gigabit Ethernet and three times the bandwidth of proprietary clustering interconnects. With an approved specification for 30 Gb/sec, InfiniBand is at least a generation ahead of competing fabric technologies today and for the foreseeable future.

About Mellanox
Mellanox is the leading supplier of InfiniBand semiconductors, providing a complete solution including switches, host channel adapters, and target channel adapters to the server, communications, data storage, and embedded markets. Mellanox Technologies has delivered over 50,000 ports of InfiniBand over two generations of 10Gb/sec InfiniBand devices, including the InfiniBridge, InfiniScale, and InfiniHost devices. The company has strong backing from corporate investors including Dell, IBM, Intel Capital, Sun Microsystems, and Vitesse, as well as strong venture backing from Bessemer Venture Partners, Raza Venture Management, Sequoia Capital, US Venture Partners, and others. Mellanox has been recognized with awards in 2001 and 2002 from Computerworld, Network Computing, Red Herring, and Upside magazines as a key emerging technology company. The company has major offices in Santa Clara, CA, and in Yokneam and Tel Aviv, Israel. For more information on Mellanox, visit www.mellanox.com.

Mellanox, InfiniBridge, InfiniHost, and InfiniScale are registered trademarks of Mellanox Technologies, Inc. InfiniBand is a trademark and service mark of the InfiniBand Trade Association.
