
by the HPC Challenge Benchmark suite, is nearly identical to its ping-pong latency, even as you increase the number of nodes.

The InfiniPath adapter, using a standard Linux distribution, also achieves the lowest TCP/IP latency and outstanding bandwidth.³

Eliminating the excess latency found in traditional interconnects reduces communications wait time and allows processors to spend more time computing, which results in applications that run faster and scale higher.

Lowest CPU Utilization. The InfiniPath connectionless environment eliminates overhead that wastes valuable CPU cycles. It provides reliable data transmission without the vast resources required by connection-oriented adapters, thus increasing the efficiency of your clustered systems.

Built on Industry Standards. The InfiniPath adapter supports a rich combination of open standards to achieve industry-leading performance. The InfiniPath OpenIB software stack has been proven to be the highest-performance implementation of the OpenIB Verbs layer, which yields both superior latency and bandwidth compared to other InfiniBand alternatives.

• InfiniBand 1.1 4X compliant
• Standard InfiniBand fabric management
• MPI 1.2 with MPICH 1.2.6 (see the ping-pong latency sketch after this list)
• OpenIB supporting IPoIB, SDP, UDP, and SRP
• PCI Express x8 expansion slot compatible
• Supports SUSE, Red Hat, and Fedora Core Linux
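
The latency figures quoted in this document are gathered with small message-passing tests (note 1 below cites the standard OSU ping-pong latency test). The following minimal sketch is offered only as an illustration of how such a test works; it is not the OSU benchmark or QLogic's test harness. It assumes two MPI ranks and an MPI-1.2 library such as MPICH 1.2.6, and the message size and repetition count are arbitrary choices.

/*
 * Minimal MPI ping-pong latency sketch (illustrative only; not the OSU
 * benchmark cited in note 1). Assumes two ranks and an MPI-1.2 library
 * such as MPICH 1.2.6, e.g.:  mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

#define REPS      1000   /* round trips to average over (arbitrary) */
#define MSG_BYTES 8      /* small message, as in a latency test */

int main(int argc, char **argv)
{
    int rank, i;
    char buf[MSG_BYTES] = {0};
    double t0, t1;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {          /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {   /* rank 1 echoes each message back */
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    /* One-way latency is conventionally reported as half the average
     * round-trip time. */
    if (rank == 0)
        printf("one-way latency: %.2f usec\n",
               (t1 - t0) * 1e6 / (2.0 * REPS));

    MPI_Finalize();
    return 0;
}

Running one rank on each of two nodes connected through the adapter yields a rough one-way latency figure comparable in spirit, though not in rigor, to the published numbers.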

Datasheet

© 2006 QLogic Corporation. All rights reserved. QLogic, the QLogic logo, PathScale, InfiniPath, and Accelerating Cluster Performance are registered trademarks or trademarks of QLogic. HyperTransport and HTX are licensed trademarks of the HyperTransport Technology Association. AMD, the AMD Arrow logo, AMD Opteron, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other trademarks are the property of their respective owners.

SN0058045-00 Rev D 11/06

Corporate Headquarters
QLogic Corporation, 26650 Aliso Viejo Parkway, Aliso Viejo, CA 92656, 949.389.6000

Europe Headquarters
QLogic (UK) LTD., Surrey Technology Centre, 40 Occam Road, Guildford, Surrey GU2 7YG UK, +44 (0)1483 295825

InfiniPath QHT7140

 

HyperTransport Interface
• HT v1.0.3 compliant
• HTX slot compliant
• 6.4 GB/s bandwidth
• ASIC supports a tunnel configuration with upstream and downstream ports at 16 bits @ 1.6 GT/s

Connectivity
• Single InfiniBand 4X port (10+10 Gbps), copper
• External fiber optic media adapter module support
• Compatible with managed InfiniBand switches from Cisco®, SilverStorm™, Mellanox®, and Voltaire®
• Interoperable with host channel adapters (HCAs) from Cisco, SilverStorm, Mellanox, and Voltaire running the OpenFabrics software stack

     

QLogic Host Driver/Upper Level Protocol (ULP) Support
• MPICH version 1.2.6
• TCP, NFS, UDP, Sockets through Ethernet driver emulation
• Optimized MPI protocol stack supplied
• 32- and 64-bit application ready
• SDP, SRP, IPoIB supported through OpenFabrics stack

    

InfiniBand Interfaces and Specifications
• 4X speed (10+10 Gbps)
• Uses standard IBTA 1.1 compliant fabric and cables; link layer compatible
• Configurable MTU size (4096 maximum)
• Integrated SerDes

      

Management Support
• Includes InfiniBand 1.1 compliant SMA (Subnet Management Agent)
• Interoperable with management solutions from Cisco, SilverStorm, and Voltaire

      

Regulatory Compliance
• FCC Part 15, Subpart B, Class A
• ICES-003, Class A
• EN 55022, Class A
• VCCI V-3/2004.4, Class A

    

Operating Environments
• Supports 64-bit Linux with 2.6.11 kernels
  - Red Hat Enterprise Linux 4.x
  - SUSE Linux 9.3 & 10.0
  - Fedora Core 3 & 4
• Uses standard Linux TCP/IP stack

    

QLogic InfiniPath Adapter Specifications
• Typical power consumption: 5 Watts
• Available in PCI half-height short and PCI full-height short form factors
• Operating temperature: 10 to 45°C at 0-3 km (operating); -30 to 60°C (non-operating)
• Humidity: 20% to 80% (non-condensing, operating); 5% to 90% (non-operating)

    

QLogic InfiniPath ASIC Specifications
• HFCBGA package, 841 pin, 37.5 mm x 37.5 mm ball pitch
• 330 signal I/Os
• 4.1 W typical (HT cave), 4.4 W typical (HT tunnel)
• Requires 1.2 V and 2.5 V supplies, plus interface reference voltages

¹ Ping-pong latency and uni-directional bandwidth were measured by Dr. D. K. Panda on 2.8 GHz processors at Ohio State University using the standard OSU ping-pong latency test.

² The n½ measurement (the message size at which half of peak bandwidth is reached) was done with a single processor node communicating to a single processor node through a single level of switch. When measured with 4 processor cores per node, the n½ number was further improved to 88 bytes, and 90% of the peak bandwidth was achieved with data packets of approximately 640 bytes.

³ TCP/IP bandwidth and latency were measured using Netperf and a standard Linux TCP/IP software stack (a minimal illustrative round-trip sketch follows these notes).

Note: Actual performance measurements may be improved over data published in this document. All current performance data is available in the InfiniPath section of the QLogic website at www.qlogic.com/pathscale.
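
Note 3 cites Netperf for the published TCP/IP figures. As a rough illustration of what a request/response latency test of that kind does, the sketch below performs a 1-byte TCP ping-pong. It is an assumption-laden stand-in rather than Netperf itself: the port number and repetition count are arbitrary, and it runs over loopback only so the example is self-contained (in a cluster test the client and server would sit on two nodes joined by the adapter).

/*
 * Minimal TCP request/response latency sketch (illustrative only; note 3
 * cites Netperf for the published TCP/IP numbers). Runs a 1-byte ping-pong
 * over a TCP connection on the loopback interface.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define REPS 10000   /* round trips to average over (arbitrary) */
#define PORT 5555    /* arbitrary test port */

int main(void)
{
    struct sockaddr_in addr;
    int lfd, cfd, one = 1, i;
    char c = 'x';

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    /* Listening socket is created before fork so the client can connect. */
    lfd = socket(AF_INET, SOCK_STREAM, 0);
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 1);

    if (fork() == 0) {                       /* child: client side */
        struct timespec t0, t1;
        double usec;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
        connect(fd, (struct sockaddr *)&addr, sizeof(addr));

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < REPS; i++) {         /* 1-byte round trips */
            write(fd, &c, 1);
            read(fd, &c, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        usec = (t1.tv_sec - t0.tv_sec) * 1e6 +
               (t1.tv_nsec - t0.tv_nsec) / 1e3;
        /* One-way latency is commonly reported as half the round-trip time. */
        printf("avg round trip: %.2f usec (one-way ~%.2f usec)\n",
               usec / REPS, usec / (2.0 * REPS));
        close(fd);
        exit(0);
    }

    /* parent: server side echoes each byte back */
    cfd = accept(lfd, NULL, NULL);
    setsockopt(cfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    for (i = 0; i < REPS; i++) {
        read(cfd, &c, 1);
        write(cfd, &c, 1);
    }
    close(cfd);
    wait(NULL);
    return 0;
}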
