Automatic vectorization

Automatic vectorization, in parallel computing, is a special case of automatic parallelization, where a computer program is converted from a scalar implementation, which processes a single pair of operands at a time, to a vector implementation, which processes one operation on multiple pairs of operands at once. For example, modern conventional computers, including specialized supercomputers, typically have vector operations that simultaneously perform operations such as the following four additions (via SIMD or SPMD hardware):
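c[0] = a[0] + b[0];
c[1] = a[1] + b[1];
c[2] = a[2] + b[2];
c[3] = a[3] + b[3];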

However, in most programming languages one typically writes loops that sequentially perform additions of many numbers. Here is an example of such a loop, written in C:

for (i = 0; i < n; i++)
    c[i] = a[i] + b[i];

A vectorizing compiler transforms such loops into sequences of vector operations. These vector operations perform additions on blocks of elements from the arrays a, b and c. Automatic vectorization is a major research topic in computer science.

Background

Early computers usually had one logic unit, which executed one instruction on one pair of operands at a time. Computer languages and programs therefore were designed to execute in sequence. Modern computers, though, can do many things at once. So, many optimizing compilers perform automatic vectorization, where parts of sequential programs are transformed into parallel operations.

Loop vectorization transforms procedural loops by assigning a processing unit to each pair of operands. Programs spend most of their time within such loops. Therefore, vectorization can significantly accelerate them, especially over large data sets. Loop vectorization targets SIMD instruction sets such as Intel's MMX, SSE, and AVX, Power ISA's AltiVec, and ARM's NEON, SVE, and SVE2.

Many constraints prevent or hinder vectorization. Sometimes vectorization can slow down execution, for example because of pipeline synchronization or data-movement timing. Loop dependence analysis identifies loops that can be vectorized, relying on the data dependence of the instructions inside loops.

Guarantees

Automatic vectorization, like any loop optimization or other compile-time optimization, must exactly preserve program behavior.

Data dependencies

All dependencies must be respected during execution to prevent incorrect results.

In general, loop-invariant dependencies and lexically forward dependencies can be easily vectorized, and lexically backward dependencies can be transformed into lexically forward dependencies. However, these transformations must be done safely, to ensure that the dependences between all statements remain true to the original.
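For example (an illustrative sketch, not taken from the original text), the loop below has a loop-carried dependence whose source (the write to b[i+1]) appears textually after its sink (the read of b[i]), i.e. a lexically backward dependence; swapping the two statements turns it into a lexically forward dependence without changing the results:

for (i = 1; i < n; i++) {
    a[i] = b[i] + 1;      /* S1: reads b[i], written by S2 in the previous iteration */
    b[i+1] = c[i] * 2;    /* S2: source of a lexically backward dependence to S1     */
}

/* After reordering, the same dependence is lexically forward, and each
   statement can be vectorized as a whole while preserving the results: */
for (i = 1; i < n; i++) {
    b[i+1] = c[i] * 2;
    a[i] = b[i] + 1;
}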

Cyclic dependencies must be processed independently of the vectorized instructions.

Data precision

Integer precision (bit-size) must be kept during vector instruction execution. The correct vector instruction must be chosen based on the size and behavior of the internal integers. Also, with mixed integer types, extra care must be taken to promote/demote them correctly without losing precision. Special care must be taken with sign extension (because multiple integers are packed inside the same register) and during shift operations, or operations with carry bits that would otherwise be taken into account.
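As an illustrative sketch (the function name dot16 is arbitrary), consider a loop that multiplies 16-bit values and accumulates into a 32-bit sum; a vectorizer must sign-extend each 16-bit lane to 32 bits before the multiply, otherwise the packed products would overflow the narrower lanes:

#include <stdint.h>

int32_t dot16(const int16_t *a, const int16_t *b, int n)
{
    int32_t sum = 0;
    for (int i = 0; i < n; i++)
        sum += (int32_t)a[i] * (int32_t)b[i];   /* widen (sign-extend) before multiplying */
    return sum;
}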

Floating-point precision must be kept as well, unless IEEE-754 compliance is turned off, in which case operations will be faster but the results may vary slightly. Big variations, even when IEEE-754 compliance is ignored, usually signify programmer error.
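A common case (shown here as an illustrative sketch) is a floating-point reduction: vectorizing it reassociates the additions into partial sums, which can produce a slightly different result than the strict left-to-right scalar order unless the compiler is allowed to relax IEEE-754 semantics:

float sum = 0.0f;
for (i = 0; i < n; i++)
    sum += a[i];   /* scalar order: ((a[0] + a[1]) + a[2]) + ...               */
                   /* vectorized: four interleaved partial sums combined at    */
                   /* the end, so rounding can differ slightly                 */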

Theory

To vectorize a program, the compiler's optimizer must first understand the dependencies between statements and realign them if necessary. Once the dependencies are mapped, the optimizer must properly arrange the implementing instructions, changing suitable candidates into vector instructions, which operate on multiple data items.

Building the dependency graph

The first step is to build the dependency graph, identifying which statements depend on which other statements. This involves examining each statement, identifying every data item that the statement accesses, mapping array access modifiers to functions, and checking every access's dependency on all others in all statements. Alias analysis can be used to determine whether different variables access (or intersect) the same region in memory.

The dependency graph contains all local dependencies with distance not greater than the vector size. So, if the vector register is 128 bits, and the array type is 32 bits, the vector size is 128/32 = 4. All other non-cyclic dependencies should not invalidate vectorization, since there won't be any concurrent access in the same vector instruction.

Suppose the vector size is the same as 4 ints:

for (i = 0; i < 128; i++) {
    a[i] = a[i-16];   // 16 > 4, safe to ignore
    a[i] = a[i-1];    // 1 < 4, stays on dependency graph
}

Clustering

Using the graph, the optimizer can then cluster the strongly connected components (SCC) and separate vectorizable statements from the rest.

For example, consider a program fragment containing three statement groups inside a loop: (SCC1+SCC2), SCC3 and SCC4, in that order, in which only the second group (SCC3) can be vectorized. The final program will then contain three loops, one for each group, with only the middle one vectorized. The optimizer cannot join the first with the last without violating statement execution order, which would invalidate the necessary guarantees.
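A sketch of the result (hypothetical code, assuming a vector length of four) follows; the original loop is distributed into three loops so that only the SCC3 statements run vectorized:

for (i = 0; i < n; i++) {
    /* SCC1 + SCC2: statements with cyclic dependences, kept scalar */
}
for (i = 0; i < n; i += 4) {
    /* SCC3: vectorizable statements, executed on blocks of four elements */
}
for (i = 0; i < n; i++) {
    /* SCC4: remaining statements, kept scalar, placed after SCC3 to preserve order */
}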

Detecting idioms

Some non-obvious dependencies can be further optimized based on specific idioms.

For instance, the following self-data-dependence can be vectorized because the values on the right-hand side (RHS) are fetched and then stored to the left-hand side, so there is no way the data will change within the assignment.

a[i] = a[i] + a[i+1];

Self-dependence by scalars can be vectorized by variable elimination.
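For instance (an illustrative sketch), the temporary scalar s below is rewritten in every iteration, creating output and anti-dependences between iterations; eliminating it by forward substitution leaves fully independent iterations:

for (i = 0; i < n; i++) {
    s = a[i] + 1;      /* s is redefined every iteration: output and anti-      */
    b[i] = s * 2;      /* dependences on s appear between iterations            */
}

/* After eliminating s by forward substitution, the iterations are independent: */
for (i = 0; i < n; i++)
    b[i] = (a[i] + 1) * 2;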

General framework

The general framework for loop vectorization is split into four stages:

  1. Prelude: loop-independent variables are prepared for use inside the loop, normally by moving them into vector registers with the patterns that the vector instructions will use. This is also where the run-time dependence check is inserted; if the check decides vectorization is not possible, execution branches to the cleanup stage.
  2. Loop(s): all the vectorized (or non-vectorized) loops, separated by SCC clusters in order of appearance in the original code.
  3. Postlude: loop-independent variables, inductions, and reductions are returned.
  4. Cleanup: plain (non-vectorized) loops handle the final iterations that are not a multiple of the vector size, or the case where run-time checks prohibit vector processing.

Run-time vs. compile-time

Some vectorizations cannot be fully checked at compile time. For example, library functions can defeat optimization if the data they process is supplied by the caller. Even in these cases, run-time optimization can still vectorize loops on-the-fly.

This run-time check is made in the prelude stage and directs the flow to vectorized instructions if possible; otherwise, execution reverts to standard scalar processing.

The following code can easily be vectorized at compile time, as it has no dependence on external parameters. Also, the language guarantees that neither array will occupy the same region of memory as any other variable, as they are local variables that live only on the execution stack.

int a[128];
int b[128];
// initialize b

for (i = 0; i < 128; i++)
    a[i] = b[i] + 5;

On the other hand, the code below has no information on memory positions, because the references are pointers and the memory they point to may overlap.

void compute(int *a, int *b)
{
    int i;
    for (i = 0; i < 128; i++, a++, b++)
        *a = *b + 5;
}

A quick run-time check on the addresses of both a and b, plus the loop iteration space (128), is enough to tell whether the arrays overlap, thus revealing any dependencies.
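A minimal sketch of such a check, as it might be inserted by the compiler in the prelude (the structure is illustrative, not actual compiler output), looks like this:

#include <stdint.h>

void compute(int *a, int *b)
{
    int i;
    uintptr_t pa = (uintptr_t)a, pb = (uintptr_t)b;

    /* The two 128-element windows are disjoint iff one ends before the other begins. */
    if (pa + 128 * sizeof(int) <= pb || pb + 128 * sizeof(int) <= pa) {
        /* No overlap: the compiler can emit the vectorized loop here,
           e.g. processing four elements per vector instruction. */
        for (i = 0; i < 128; i++, a++, b++)
            *a = *b + 5;
    } else {
        /* Possible overlap: fall back to the original scalar loop. */
        for (i = 0; i < 128; i++, a++, b++)
            *a = *b + 5;
    }
}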

There exist some tools to dynamically analyze existing applications to assess the inherent latent potential for SIMD parallelism, exploitable through further compiler advances and/or via manual code changes. [1]

Techniques

An example would be a program to multiply two vectors of numeric data. A scalar approach would be something like:

for (i = 0; i < 1024; i++)
    c[i] = a[i] * b[i];

This could be vectorized to look something like:

for (i = 0; i < 1024; i += 4)
    c[i:i+3] = a[i:i+3] * b[i:i+3];

Here, c[i:i+3] represents the four array elements from c[i] to c[i+3] and the vector processor can perform four operations for a single vector instruction. Since the four vector operations complete in roughly the same time as one scalar instruction, the vector approach can run up to four times faster than the original code.

There are two distinct compiler approaches: one based on the conventional vectorization technique and the other based on loop unrolling.

Loop-level automatic vectorization

This technique, used for conventional vector machines, tries to find and exploit SIMD parallelism at the loop level. It consists of two major steps as follows.

  1. Find an innermost loop that can be vectorized
  2. Transform the loop and generate vector codes

In the first step, the compiler looks for obstacles that can prevent vectorization. A major obstacle is a true data dependence shorter than the vector length, as illustrated below. Other obstacles include function calls and short iteration counts.
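For example (an illustrative sketch), the following loop carries a true dependence of distance 1, shorter than a vector length of four, so its iterations cannot be executed as one vector operation:

for (i = 1; i < 1024; i++)
    a[i] = a[i-1] + b[i];   /* each iteration reads the value written by the previous one */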

Once the loop is determined to be vectorizable, the loop is strip-mined by the vector length and each scalar instruction within the loop body is replaced with the corresponding vector instruction. Below, the component transformations for this step are shown using the above example.

// After strip-mining by the vector length (four in this example):
for (i = 0; i < 1024; i += 4)
    for (j = 0; j < 4; j++)
        c[i+j] = a[i+j] * b[i+j];

// After loop distribution using temporary arrays:
for (i = 0; i < 1024; i += 4) {
    for (j = 0; j < 4; j++) tA[j] = A[i+j];
    for (j = 0; j < 4; j++) tB[j] = B[i+j];
    for (j = 0; j < 4; j++) tC[j] = tA[j] * tB[j];
    for (j = 0; j < 4; j++) C[i+j] = tC[j];
}

// After replacing the inner loops with vector instructions:
for (i = 0; i < 1024; i += 4) {
    vA = vec_ld(&A[i]);
    vB = vec_ld(&B[i]);
    vC = vec_mul(vA, vB);
    vec_st(vC, &C[i]);
}

Basic block level automatic vectorization

This relatively new technique specifically targets modern SIMD architectures with short vector lengths. [2] Although loops can be unrolled to increase the amount of SIMD parallelism in basic blocks, this technique exploits SIMD parallelism within basic blocks rather than loops. The two major steps are as follows.

  1. The innermost loop is unrolled by a factor of the vector length to form a large loop body.
  2. Isomorphic scalar instructions (that perform the same operation) are packed into a vector instruction if dependencies do not prevent doing so.

To show step-by-step transformations for this approach, the same example is used again.

// After loop unrolling by the vector length (assumed to be four here):
for (i = 0; i < 1024; i += 4) {
    sA0 = ld(&A[i+0]);
    sB0 = ld(&B[i+0]);
    sC0 = sA0 * sB0;
    st(sC0, &C[i+0]);
    ...
    sA3 = ld(&A[i+3]);
    sB3 = ld(&B[i+3]);
    sC3 = sA3 * sB3;
    st(sC3, &C[i+3]);
}

// After packing isomorphic scalar instructions:
for (i = 0; i < 1024; i += 4) {
    (sA0, sA1, sA2, sA3) = ld(&A[i+0:i+3]);
    (sB0, sB1, sB2, sB3) = ld(&B[i+0:i+3]);
    (sC0, sC1, sC2, sC3) = (sA0, sA1, sA2, sA3) * (sB0, sB1, sB2, sB3);
    st((sC0, sC1, sC2, sC3), &C[i+0:i+3]);
}

// After code generation:
for (i = 0; i < 1024; i += 4) {
    vA = vec_ld(&A[i]);
    vB = vec_ld(&B[i]);
    vC = vec_mul(vA, vB);
    vec_st(vC, &C[i]);
}

Here, sA1, sB1, ... represent scalar variables and vA, vB, and vC represent vector variables.

Most automatically vectorizing commercial compilers use the conventional loop-level approach except the IBM XL Compiler, [3] which uses both.

In the presence of control flow

The presence of if-statements in the loop body requires the execution of instructions in all control paths to merge the multiple values of a variable. One general approach is to go through a sequence of code transformations: predication → vectorization (using one of the above methods) → remove vector predicates → remove scalar predicates. [4] The following code is used as an example to show these transformations:

// Original loop with control flow:
for (i = 0; i < 1024; i++)
    if (A[i] > 0)
        C[i] = B[i];
    else
        D[i] = D[i-1];

// After predication:
for (i = 0; i < 1024; i++) {
    P = A[i] > 0;
    NP = !P;
    C[i] = B[i];     (P)
    D[i] = D[i-1];   (NP)
}

where (P) denotes a predicate guarding the statement.

// After vectorization:
for (i = 0; i < 1024; i += 4) {
    vP  = A[i:i+3] > (0, 0, 0, 0);
    vNP = vec_not(vP);
    C[i:i+3] = B[i:i+3];              (vP)
    (NP1, NP2, NP3, NP4) = vNP;
    D[i+3] = D[i+2];                  (NP4)
    D[i+2] = D[i+1];                  (NP3)
    D[i+1] = D[i];                    (NP2)
    D[i]   = D[i-1];                  (NP1)
}

// After removing vector predicates:
for (i = 0; i < 1024; i += 4) {
    vP  = A[i:i+3] > (0, 0, 0, 0);
    vNP = vec_not(vP);
    C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vP);
    (NP1, NP2, NP3, NP4) = vNP;
    D[i+3] = D[i+2];                  (NP4)
    D[i+2] = D[i+1];                  (NP3)
    D[i+1] = D[i];                    (NP2)
    D[i]   = D[i-1];                  (NP1)
}

// After removing scalar predicates:
for (i = 0; i < 1024; i += 4) {
    vP  = A[i:i+3] > (0, 0, 0, 0);
    vNP = vec_not(vP);
    C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vP);
    (NP1, NP2, NP3, NP4) = vNP;
    if (NP4) D[i+3] = D[i+2];
    if (NP3) D[i+2] = D[i+1];
    if (NP2) D[i+1] = D[i];
    if (NP1) D[i]   = D[i-1];
}

Reducing vectorization overhead in the presence of control flow

Having to execute the instructions in all control paths in vector code has been one of the major factors that slow down the vector code with respect to the scalar baseline. The more complex the control flow becomes and the more instructions are bypassed in the scalar code, the larger the vectorization overhead becomes. To reduce this vectorization overhead, vector branches can be inserted to bypass vector instructions similar to the way scalar branches bypass scalar instructions. [5] Below, AltiVec predicates are used to show how this can be achieved.

// Scalar baseline:
for (i = 0; i < 1024; i++) {
    if (A[i] > 0) {
        C[i] = B[i];
        if (B[i] < 0)
            D[i] = E[i];
    }
}

// Vectorized with predicates, executing all control paths:
for (i = 0; i < 1024; i += 4) {
    vPA = A[i:i+3] > (0, 0, 0, 0);
    C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vPA);
    vT  = B[i:i+3] < (0, 0, 0, 0);
    vPB = vec_sel((0, 0, 0, 0), vT, vPA);
    D[i:i+3] = vec_sel(D[i:i+3], E[i:i+3], vPB);
}

// Vectorized with vector branches that bypass instructions when all fields are false:
for (i = 0; i < 1024; i += 4) {
    if (vec_any_gt(A[i:i+3], (0, 0, 0, 0))) {
        vPA = A[i:i+3] > (0, 0, 0, 0);
        C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vPA);
        vT  = B[i:i+3] < (0, 0, 0, 0);
        vPB = vec_sel((0, 0, 0, 0), vT, vPA);
        if (vec_any_ne(vPB, (0, 0, 0, 0)))
            D[i:i+3] = vec_sel(D[i:i+3], E[i:i+3], vPB);
    }
}

There are two things to note in the final code with vector branches. First, the predicate-defining instruction for vPA is also included within the body of the outer vector branch by using vec_any_gt. Second, the profitability of the inner vector branch for vPB depends on the conditional probability of vPB having false values in all fields given that vPA has false values in all fields.

Consider an example where the outer branch in the scalar baseline is always taken, bypassing most instructions in the loop body. The intermediate case above, without vector branches, executes all vector instructions. The final code, with vector branches, executes both the comparison and the branch in vector mode, potentially gaining performance over the scalar baseline.

Manual vectorization

In most C and C++ compilers, it is possible to use intrinsic functions to vectorize manually, at the expense of programmer effort and maintainability.
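For example, a minimal sketch using x86 SSE2 intrinsics (assuming n is a multiple of four and the arrays do not overlap; the function name add_arrays is arbitrary) adds two integer arrays four elements at a time:

#include <emmintrin.h>   /* SSE2 intrinsics */

void add_arrays(int *c, const int *a, const int *b, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128i va = _mm_loadu_si128((const __m128i *)&a[i]);  /* load 4 ints from a */
        __m128i vb = _mm_loadu_si128((const __m128i *)&b[i]);  /* load 4 ints from b */
        __m128i vc = _mm_add_epi32(va, vb);                    /* 4 additions in one instruction */
        _mm_storeu_si128((__m128i *)&c[i], vc);                /* store 4 results to c */
    }
}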

References

  1. Holewinski, Justin; Ramamurthi, Ragavendar; Ravishankar, Mahesh; Fauzia, Naznin; Pouchet, Louis-Noël; Rountev, Atanas; Sadayappan, P. (6 August 2012). "Dynamic trace-based analysis of vectorization potential of applications". ACM SIGPLAN Notices. 47 (6): 371–382. doi:10.1145/2345156.2254108.
  2. Larsen, S.; Amarasinghe, S. (2000). "Exploiting superword level parallelism with multimedia instruction sets". Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM SIGPLAN Notices. 35 (5): 145–156. doi:10.1145/358438.349320. hdl:1721.1/86445.
  3. "Code Optimization with IBM XL Compilers" (PDF). June 2004. Archived from the original (PDF) on 2010-06-10.
  4. Shin, J.; Hall, M. W.; Chame, J. (2005). "Superword-Level Parallelism in the Presence of Control Flow". Proceedings of the International Symposium on Code Generation and Optimization. pp. 165–175. doi:10.1109/CGO.2005.33. ISBN 0-7695-2298-X.
  5. Shin, J. (2007). "Introducing Control Flow into Vectorized Code". Proceedings of the 16th International Conference on Parallel Architecture and Compilation Techniques. pp. 280–291. doi:10.1109/PACT.2007.41.