
To construct a DAG, the compiler takes each intermediate language tuple and maps it onto one or more nodes. For instance, those tuples that represent binary operations, such as addition ( X=A+B ), form a portion of the DAG with two inputs ( A and B ) bound together by an operation ( + ). The result of the operation may feed into yet other operations within the basic block (and the DAG) as shown in [link] .
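This mapping from a tuple to a small piece of the DAG can be sketched in a few lines. The following is an illustrative sketch only, not the text's own implementation; the `Node` class and its fields are assumptions.

```python
# Hypothetical sketch: mapping the tuple X = A + B onto DAG nodes.
# A leaf node holds an input value; an interior node holds an operator
# and points at its input nodes.

class Node:
    def __init__(self, op, left=None, right=None):
        self.op = op        # operator ('+') or a leaf name ('A')
        self.left = left    # input nodes (None for leaves)
        self.right = right

a = Node("A")
b = Node("B")
x = Node("+", a, b)   # X is bound to this node; later tuples that
                      # read X feed from the same node
print(x.op, x.left.op, x.right.op)   # + A B
```

Later instructions in the block that use X simply take `x` as one of their inputs, which is how the result of one operation feeds into others.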

A trivial data flow graph

This figure contains an equation, X = A + B, and a line connecting point A to point X, and point X to point B.

For a basic block of code, we build our DAG in the order of the instructions. The DAG for the previous four instructions is shown in [link] . This particular example has many dependencies, so there is not much opportunity for parallelism. [link] shows a more straightforward example of how constructing a DAG can identify parallelism.

From this DAG, we can determine that instructions 1 and 2 can be executed in parallel. Because we see the computations that operate on the values A and B while processing instruction 4, we can eliminate a common subexpression during the construction of the DAG. If we can determine that Z is the only variable that is used outside this small block of code, we can assume the Y computation is dead code.
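The common-subexpression recognition described above falls out of the construction itself: if a node for a given operator and inputs already exists, the compiler reuses it rather than creating a new one. Below is a minimal sketch of that idea over this block's five instructions; the tuple format, the `build_dag` helper, and the key normalization for commutative operators are my assumptions, not the text's implementation.

```python
# Hedged sketch: value-numbering-style DAG build over the block
#   X = A + B; Y = B + 3; D = X * 7; C = A + B; Z = D + C

def build_dag(block):
    nodes = {}    # (op, input, input) -> node key (the CSE table)
    binding = {}  # variable name -> node currently holding its value
    def node_of(operand):
        # a name bound earlier in the block resolves to its node;
        # anything else (input variable, constant) is a leaf
        return binding.get(operand, ("leaf", operand))
    for dest, op, a, b in block:
        # normalize input order so A + B and B + A share one key
        key = (op,) + tuple(sorted([node_of(a), node_of(b)], key=repr))
        if key not in nodes:          # new computation: make a node
            nodes[key] = key
        binding[dest] = nodes[key]    # C = A + B reuses X's node
    return binding

block = [("X", "+", "A", "B"), ("Y", "+", "B", "3"),
         ("D", "*", "X", "7"), ("C", "+", "A", "B"),
         ("Z", "+", "D", "C")]
b = build_dag(block)
print(b["X"] == b["C"])   # True: A + B is a common subexpression
```

Because `C = A + B` resolves to the node already built for `X`, the duplicate addition is eliminated during construction, exactly as the text describes.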

A more complex data flow graph

This figure contains equations, X = A + B, D = X * 17, A = B + C, and X = C + E, and to the right of these equations is a flowchart expressing these equations together.

By constructing the DAG, we take a sequence of instructions and determine which must be executed in a particular order and which can be executed in parallel. This type of data flow analysis is very important in the code-generation phase on superscalar processors. We have introduced the concept of dependencies and how to use data flow to find opportunities for parallelism in code sequences within a basic block. We can also use data flow analysis to identify dependencies, opportunities for parallelism, and dead code between basic blocks.

Uses and definitions

As the DAG is constructed, the compiler can make lists of variable uses and definitions , as well as other information, and apply these to global optimizations across many basic blocks taken together. Looking at the DAG in [link] , we can see that the variables defined are Z , Y , X , C , and D , and the variables used are A and B . Considering many basic blocks at once, we can say how far a particular variable definition reaches — where its value can be seen. From this we can recognize situations where calculations are being discarded, where two uses of a given variable are completely independent, or where we can overwrite register-resident values without saving them back to memory. We call this investigation data flow analysis .
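Collecting these lists is mechanical: a variable belongs to the block's *use* set if it is read before any definition within the block, and to the *def* set if the block assigns it. The sketch below is my own illustration over the five instructions from the figure; the tuple format and the `isalpha` test for distinguishing variables from constants are assumptions.

```python
# Hedged sketch: per-block use/def sets for the block
#   X = A + B; Y = B + 3; D = X * 7; C = A + B; Z = D + C

def uses_and_defs(block):
    uses, defs = set(), set()
    for dest, op, a, b in block:
        for src in (a, b):
            # a variable is "used" if read before any definition here;
            # constants like 3 and 7 are skipped
            if src.isalpha() and src not in defs:
                uses.add(src)
        defs.add(dest)
    return uses, defs

block = [("X", "+", "A", "B"), ("Y", "+", "B", "3"),
         ("D", "*", "X", "7"), ("C", "+", "A", "B"),
         ("Z", "+", "D", "C")]
u, d = uses_and_defs(block)
print(sorted(u), sorted(d))   # ['A', 'B'] ['C', 'D', 'X', 'Y', 'Z']
```

Note that X is read by `D = X * 7` but does not appear in the use set, because the block defines X first; this matches the text's reading of the figure, where only A and B are used.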

Extracting parallelism from a DAG

This figure contains equations, X = A + B, Y = B + 3, D = X * 7, C = A + B, and Z = D + C, with a flowchart to the right expressing the relationship between the equations.

To illustrate, suppose that we have the flow graph in [link] . Beside each basic block we’ve listed the variables it uses and the variables it defines. What can data flow analysis tell us?

Notice that a value for A is defined in block X but only used in block Y . That means that A is dead upon exit from block Y or immediately upon taking the right-hand branch leaving X ; none of the other basic blocks uses the value of A. That tells us that any associated resources, such as a register, can be freed for other uses.

Looking at [link] we can see that D is defined in basic block X , but never used. This means that the calculations defining D can be discarded.
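A definition like D's is easy to spot once every block's use/def sets are in hand: any defined variable that appears in no block's use set is dead. The sketch below uses sets I have filled in to match the prose (A defined in X and used in Y; D defined in X, never used; E defined in Y, used in W; G used in X and W), so the exact contents are assumptions about the figure.

```python
# Hedged sketch: find definitions never used anywhere in the flow graph.
# The per-block use/def sets are assumptions matching the prose.

use    = {"X": {"G"}, "Y": {"A"}, "W": {"E", "G"}, "Z": set()}
define = {"X": {"A", "D"}, "Y": {"E"}, "W": set(), "Z": set()}

all_uses = set().union(*use.values())
dead_defs = {(blk, v) for blk, vs in define.items()
             for v in vs if v not in all_uses}
print(dead_defs)   # {('X', 'D')}: the calculation defining D can go
```

A real compiler refines this with reaching-definitions analysis, since a variable may be dead along one path and live along another, as the text notes for A on the right-hand branch leaving X.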

Something interesting is happening with the variable G . Blocks X and W both use it, but if you look closely you’ll see that the two uses are distinct from one another, meaning that they can be treated as two independent variables.

A compiler featuring advanced instruction scheduling techniques might notice that W is the only block that uses the value for E , and so move the calculations defining E out of block Y and into W , where they are needed.

Flow graph for data flow analysis

This figure is a flowgraph of four rows, with a box in each row and arrows showing the relationship between the boxes, which are labeled, X, Y, W, and Z. To the right of the boxes, in their respective rows, are lists of letters under categories, Defines, and Uses.

In addition to gathering data about variables, the compiler can also keep information about subexpressions. Examining both together, it can recognize cases where redundant calculations are being made (across basic blocks), and substitute previously computed values in their place. If, for instance, the expression H*I appears in blocks X , Y , and W , it could be calculated just once in block X and propagated to the others that use it.
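A crude version of this cross-block substitution can be sketched as follows. The block contents, the temporary name `T`, and the purely textual matching are all illustrative assumptions; a real compiler would match DAG nodes and verify that neither H nor I is redefined between the sites.

```python
# Hedged sketch: recognize H * I computed in several blocks, keep the
# first computation, and reuse its result elsewhere.

blocks = {"X": ["T = H * I", "A = T + 1"],
          "Y": ["B = H * I"],
          "W": ["C = H * I"]}

expr = "H * I"
sites = [name for name, code in blocks.items()
         if any(expr in line for line in code)]
print(sites)   # ['X', 'Y', 'W']: candidates for computing H * I once

# Reuse block X's result T in the later blocks.
for name in sites[1:]:
    blocks[name] = [line.replace(expr, "T") for line in blocks[name]]
print(blocks["Y"])   # ['B = T']
```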

Source:  OpenStax, High performance computing. OpenStax CNX. Aug 25, 2010 Download for free at http://cnx.org/content/col11136/1.5