Example 3-11 C++ Code Using the Vector Classes
#include <fvec.h>
void add(float *a, float *b, float *c)
{
    F32vec4 *av = (F32vec4 *) a;
    F32vec4 *bv = (F32vec4 *) b;
    F32vec4 *cv = (F32vec4 *) c;
    *cv = *av + *bv;
}
Here, fvec.h is the class definition file and F32vec4 is the class representing an array of four floats. The “+” and “=” operators are overloaded so that the actual Streaming SIMD Extensions implementation in the previous example is abstracted out, or hidden, from the developer. Note how much more this resembles the original code, allowing for simpler and faster programming.
Again, the example assumes that the arrays passed to the routine are already aligned to a 16-byte boundary.
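To make the abstraction concrete, the following is a minimal sketch, not the actual fvec.h source, of how such a wrapper class could be built directly on the SSE intrinsics; the class name Vec4f and its members are illustrative assumptions only.

#include <xmmintrin.h>

// Hypothetical four-float wrapper (not the real F32vec4 definition): it
// stores one __m128 value and hides the packed-add intrinsic behind an
// overloaded "+" operator.
class Vec4f {
public:
    __m128 v;                          // four packed single-precision floats
    Vec4f() {}
    Vec4f(__m128 m) : v(m) {}
};

// operator+ maps directly onto _mm_add_ps, so the caller simply writes
// *cv = *av + *bv and never sees the intrinsic explicitly.
inline Vec4f operator+(const Vec4f &x, const Vec4f &y)
{
    return Vec4f(_mm_add_ps(x.v, y.v));
}

Whichever implementation is used, the caller remains responsible for the 16-byte alignment noted above, for example by obtaining the arrays from an aligned allocator such as _mm_malloc(count * sizeof(float), 16).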
Automatic Vectorization
The Intel C++ Compiler provides an optimization mechanism by which loops, such as the one in Example 3-8, can be automatically vectorized, or converted into Streaming SIMD Extensions code. The compiler uses techniques similar to those used by a programmer to identify whether a loop is suitable for conversion to SIMD. This involves determining whether the following might prevent vectorization:
• the layout of the loop and the data structures used
• dependencies amongst the data accesses in each iteration and across iterations
Once the compiler has made such a determination, it can generate vectorized code for the loop, allowing the application to use the SIMD instructions.
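As a concrete illustration, the following minimal sketch (not Example 3-8 itself) shows the kind of scalar loop the compiler can convert to SIMD code; the __restrict qualifiers are an assumption used here to assert that the pointers do not alias, which helps the compiler's dependence analysis across iterations.

// A scalar loop that a vectorizing compiler can turn into packed SSE
// operations: the arrays are accessed contiguously and each iteration is
// independent of the others.
void add_scalar(float * __restrict a,
                float * __restrict b,
                float * __restrict c,
                int n)
{
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];            // no cross-iteration dependencies
}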