7/23/2002 - Mellanox Technologies, Ltd., the leading provider of InfiniBand(SM) silicon, announced the availability of its Nitro II 4X (10 Gb/sec) InfiniBand server blade reference platform. The Nitro II platform utilizes Mellanox's second-generation 10 Gb/sec InfiniHost(TM) host channel adapter (HCA) and InfiniScale(TM) switching silicon. The Nitro II platform consists of 2.2 GHz Intel® Pentium 4 processor-based diskless server blades, dual 16-port 4X switches, and a 10 Gb/sec backplane supporting 480 Gb/sec of switching capacity in a compact 14-blade chassis. The combination of high-performance processors, generous memory capacity and second-generation InfiniBand silicon offers OEMs and developers an ideal 10 Gb/sec development platform for optimizing the performance of clustered databases and other data center applications.
"InfiniBand diskless server blades create a whole new class of data center solutions that provide two key server improvements. First, Nitro II blades provide more than three times the CPU performance of current blade technologies by using a 2.2 GHz Intel processor versus the 700 or 800 MHz speeds offered today," said Yuval Leader, vice president of system solutions, Mellanox Technologies. "Second, the InfiniBand architecture enables CPU, I/O, and storage sharing across all data center systems, rather than duplicating and isolating these essential resources in each and every server or server blade. This gives data center managers the ability to scale, provision or redeploy CPU, I/O or storage resources individually, on an as-needed basis."
Nitro II, like Mellanox's first-generation InfiniBand server blade reference design released in January 2002, provides unprecedented levels of integration, ease of use, performance, I/O sharing and management capability, delivering lower Total Cost of Ownership (TCO) for enterprise and Internet data centers. New features in Nitro II include Mellanox's advanced InfiniHost HCA, an Intel Pentium 4 processor, expanded memory capacity and a 10 Gb/sec switching backplane. The platform provides a huge leap in performance that is ideal for improving application and failover performance for clustered databases by leveraging InfiniBand's high-bandwidth, low-latency RDMA capabilities.
"Mellanox is again providing leadership by utilizing industry-standard server components that demonstrate the winning combination of InfiniBand and high-performance server blades," said John Humphreys, senior research analyst, Global Enterprise Server Solutions, IDC. "IDC sees a tremendous market for server blades and projects that by 2006 over 1.5 million servers will ship in a blade format. IDC believes that the InfiniBand architecture has a distinct opportunity to play a key role in the development of server blades."
"IBM DB2 database software running on Mellanox silicon in the Nitro II platform will help customers achieve a low total cost of database ownership by delivering high performance and scalability on the InfiniBand architecture," said Lauren Flaherty, vice president of marketing, IBM Data Management Solutions.
"InfiniBand connectivity will emerge in Intel Architecture platforms early next year, with blade servers as an important initial implementation. Reference designs like Mellanox's Nitro II platform will help accelerate InfiniBand architecture-based blade delivery," said Jim Pappas, director of initiative marketing, Intel Enterprise Platform Group. "We look forward to working closely with Mellanox in delivering InfiniBand capability to our server platforms."
Nitro II InfiniBand Architecture
The Nitro II server blades are based on Mellanox's second-generation InfiniHost HCA, a 2.2 GHz Intel Pentium 4 processor and the ServerWorks Grand Champion chipset. The server blades support up to 4 GB of memory and are both diskless and headless (no video monitor required). Mellanox's InfiniHost low-latency hardware transport overcomes the latency and bandwidth penalties of LAN-based remote storage, thereby eliminating the need for local storage on the server blade. Remote booting allows InfiniBand server blades to access all OS, application and other software images from either NAS or SAN storage. In addition, the absence of a local disk improves reliability, lowers cost and frees power budget for improved CPU and memory performance.
Dual 16-port 10 Gb/sec switch blades offer a combined throughput of 640 Gb/sec. Each switch aggregates twelve 4X ports from the backplane to four 4X uplink ports on the front of the chassis. The four 10 Gb/sec uplink ports can be used to connect multiple chassis together to create large clusters of server, I/O or storage blades.
The passive backplane uses a dual-star configuration to link 12 server or I/O slots through redundant InfiniBand fabric switches. The 24 4X InfiniBand backplane connections carry 20 Gb/sec each, for a total aggregate bandwidth of 480 Gb/sec. The compact 4U chassis enables up to 96 server blades in a single rack. The InfiniBand fabric also provides dedicated management lanes for chassis and baseboard management, and supports keyboard, mouse, power and management traffic, greatly reducing the number of cables required for server clusters.
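The bandwidth and density figures above follow from simple per-port arithmetic. As a back-of-the-envelope check, the sketch below reproduces them under the assumption that each 4X InfiniBand link carries 10 Gb/sec in each direction, i.e. 20 Gb/sec of bidirectional throughput per port, and that eight 4U chassis fit in a standard rack:

```python
# Back-of-the-envelope check of the Nitro II bandwidth figures.
# Assumption: one 4X InfiniBand link = 10 Gb/sec per direction,
# i.e. 20 Gb/sec bidirectional per port.
GBPS_PER_4X_PORT = 20

# Dual 16-port switch blades: 2 switches x 16 ports x 20 Gb/sec
switch_throughput = 2 * 16 * GBPS_PER_4X_PORT
print(switch_throughput, "Gb/sec combined switch throughput")   # 640

# Backplane: 12 slots x 2 redundant switches = 24 4X connections
backplane_bandwidth = 12 * 2 * GBPS_PER_4X_PORT
print(backplane_bandwidth, "Gb/sec aggregate backplane bandwidth")  # 480

# Rack density: 8 x 4U chassis per rack, 12 server/I/O slots each
blades_per_rack = 8 * 12
print(blades_per_rack, "server blades per rack")                # 96
```

The numbers match the press release: 640 Gb/sec of combined switch throughput, 480 Gb/sec across the dual-star backplane, and 96 blades per rack.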
In addition to providing a high-performance data center computing platform, the reference design serves as a software development platform for OEM products and for the PICMG 3.2 initiative, which defines InfiniBand links as the standard interconnect for the next generation of server, storage and telecom applications.
Complete Product Development Kit
Mellanox is offering customers a complete Product Development Kit (PDK) for the Nitro II platform. The PDK includes a 16-port Nitrex II switch, a Nitro II server blade, and a complete chassis system with integrated backplane, power supply and fans. All PDKs include schematics, layout, a bill of materials, and a software development kit (SDK). The SDK contains driver development code, an InfiniBand Architecture Verbs implementation, application examples, and debug/development tools, enabling customers to develop InfiniBand systems based on the reference software.
Pricing and Availability
The Nitro II InfiniBand architecture reference chassis platform is available to OEM partners, as well as select universities and laboratories that wish to explore 10 Gb/sec clustering for performance computing. The 16-port 4X Nitrex II switch is priced at $15,000, the Nitro II InfiniBand server blade at $6,500, and the InfiniBand passive backplane and chassis at $8,500. OEM customer shipments will begin in August.
About Mellanox Technologies
Mellanox is the leading supplier of InfiniBand semiconductors, providing switches, host channel adapters, and target channel adapters to the server, communications and data storage markets. In January 2001, Mellanox Technologies delivered InfiniBridge, the first 1X/4X InfiniBand device to market, and is now shipping second-generation InfiniScale and InfiniHost silicon. The company has raised more than $89 million to date and has strong corporate and venture backing from Bessemer Venture Partners, Dell Computer, Intel Capital, Raza Venture Management, Sequoia Capital, Sun Microsystems, US Venture Partners, Vitesse and others. In 2001 and 2002, Mellanox was recognized with awards from Computerworld, Network Computing, Red Herring, and Upside magazines as a key emerging technology company. Mellanox currently has more than 200 employees at multiple sites worldwide. The company's sales and marketing are headquartered in Santa Clara, CA, with engineering and operations based in Israel. For more information on Mellanox, visit www.mellanox.com.
* Mellanox, InfiniBridge, InfiniHost and InfiniScale are registered trademarks of Mellanox Technologies, Inc.
* InfiniBand (TM/SM) is a trademark and service mark of the InfiniBand Trade Association.
* Intel and Pentium are trademarks of Intel Corporation.
For Mellanox Technologies
Vice President, Product Marketing
Mellanox Technologies, Inc.
408-970-3400 x 302