CASE STUDY

Corporate Headquarters: QLogic Corporation, 26650 Aliso Viejo Parkway, Aliso Viejo, CA 92656, 949.389.6000
Europe Headquarters: QLogic (UK) LTD., Surrey Technology Centre, 40 Occam Road, Guildford, Surrey GU2 7YG, UK, +44 (0)1483 295825
www.qlogic.com
Cambridge University
Exam Time
With the cluster installed and fine-tuned, it was time to test the
system using the LINPACK benchmark. LINPACK is the standard used
by the Top500 List to measure a cluster's computing power: a program
solves a dense system of linear equations, and the resulting rate in
MFlops, GFlops, or TFlops (millions, billions, or trillions of floating
point operations per second) provides a good measure of the system's
sustained performance.
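The measurement described above can be sketched in a few lines. This is an illustrative toy, not the actual HPL code used for Top500 submissions: it times a dense solve with NumPy and converts the standard LINPACK operation count, 2/3·n³ + 2·n², into GFlops. The problem size and function name are chosen here for illustration.

```python
import time
import numpy as np

def linpack_gflops(n=2000, seed=0):
    """Time the solution of a dense n x n linear system and report GFlops."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    x = np.linalg.solve(a, b)          # LU factorization + triangular solves
    elapsed = time.perf_counter() - start
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard LINPACK flop count
    return flops / elapsed / 1e9       # billions of floating point ops/sec

print(f"{linpack_gflops():.1f} GFlops")
```

A real Top500 run uses the distributed HPL implementation and tunes the problem size (Nmax) to fill the cluster's memory; the same flops-per-second arithmetic applies.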
QLogic Makes the Grade
The LINPACK benchmark tests were run, and with four hours to spare,
the Cambridge system running on a QLogic InfiniBand network
yielded over 18 TFlops. This performance was strong enough
to earn Cambridge the #20 spot on the November Top500 List of the
world's most powerful supercomputers, ranking the Cambridge cluster
#7 in Europe and #2 in the UK. Just as important to making an A+
supercomputer was a commodity price tag: Cambridge turned to Dell
and QLogic for a cost-effective, high-performance solution that makes
the grade.
About High Performance Computing (HPC)
High Performance Computing (HPC) using clusters of inexpensive,
high-performing processors is being applied to solve some of the
most difficult computational problems in the world. HPC techniques
are used to tackle computationally challenging scientific work in
universities and government agencies around the world, as well as
in businesses where rigorous analytics are needed to model product
characteristics and simulate various scenarios. QLogic is a leader
in the HPC space with its portfolio of InfiniBand products, including
host channel adapters (HCAs) and multi-protocol fabric directors and
switches.
QLogic’s Industry Leading Performance
The InfiniPath InfiniBand host channel adapter delivers the lowest
latency, the highest message rate, and the highest effective bandwidth
of any cluster interconnect available, enabling Cambridge to gain
maximum advantage and return from its investment in clustered
systems. In addition, QLogic's SilverStorm InfiniBand switches and
directors keep the overall fabric low-latency, high-bandwidth, and
simple yet highly scalable, with reduced operating costs and up to
one-third lower power requirements.
• System name: "Darwin"
• System family: Dell PowerEdge Cluster
• System model: PowerEdge 1955
• Computer: PowerEdge 1950, 3 GHz
• Interconnect: QLogic InfiniPath
• Switched fabric: QLogic SilverStorm
• Processors: 2,340
• Rmax (GFlops): 18,270
• Rpeak (GFlops): 28,080
• Nmax: 71,300
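The Rmax and Rpeak figures above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes 4 floating point operations per clock cycle per processor, typical of the Xeon generation used in the PowerEdge 1950; that per-cycle figure is an assumption, not stated in the source.

```python
# Theoretical peak: processors x clock rate x flops per cycle.
processors = 2340
clock_hz = 3.0e9           # 3 GHz, from the system specifications
flops_per_cycle = 4        # assumption: typical for Woodcrest-era Xeons

rpeak_gflops = processors * clock_hz * flops_per_cycle / 1e9
rmax_gflops = 18270        # measured LINPACK result from the Top500 entry
efficiency = rmax_gflops / rpeak_gflops

print(f"Rpeak = {rpeak_gflops:.0f} GFlops")        # prints "Rpeak = 28080 GFlops"
print(f"LINPACK efficiency = {efficiency:.1%}")    # prints "LINPACK efficiency = 65.1%"
```

The computed peak matches the listed Rpeak of 28,080 GFlops, and the roughly 65% LINPACK efficiency is in the normal range for a commodity cluster of this era.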
© 2007 QLogic Corporation. All rights reserved. QLogic, the QLogic Logo, the Powered by QLogic Logo, InfiniPath, and SilverStorm Technologies are registered trademarks or trademarks of QLogic Corporation. All other brands and product names are trademarks
or registered trademarks of their respective owners. Information supplied by QLogic is believed to be accurate and reliable. QLogic Corporation assumes no responsibility for any errors in this brochure. QLogic Corporation reserves the right, without notice,
to make changes in product design or specifications.
SN0130912-00 Rev A 3/07
Cambridge “Darwin” HPC Cluster, with nine 65-node computational units (CU)