(1)

Massively Parallel Algorithms

Dynamic Parallelism

G. Zachmann

University of Bremen, Germany

cgvr.cs.uni-bremen.de

(2)

§ Kernels may launch other kernels, i.e., a thread may instantiate a new and separate grid of threads

§ Potential benefits:

§ Flow control and kernel scheduling can be performed on the GPU - i.e., task parallelism on the GPU (a grid = execution of a task)

§ Allows recursion and subdivision of the problem domain, i.e., dynamic load balancing

[Figure: the same domain subdivided three ways - too coarse in some places, too fine in some places, adaptively balanced]

(3)

Mapping Compute to the Problem

(4)

§ Syntax is exactly the same as on the host side:

Kernels called from host:

    int main() {
        cudaMalloc( &data, ... );
        initialize( data );
        A <<< ... >>> ( data );
        B <<< ... >>> ( data );
        C <<< ... >>> ( data );
        cudaDeviceSynchronize();
        retrieve( data );
    }

Other kernels called from device:

    __global__ void B( float * data ) {
        do_stuff_with( data );
        X <<< ... >>> ( data );
        Y <<< ... >>> ( data );
        Z <<< ... >>> ( data );
        cudaDeviceSynchronize();
        do_more_stuff( data );
    }
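To make the launch-from-device pattern concrete, here is a minimal, self-contained sketch; the kernel names mirror the slide's B and X, while the grid sizes and the data layout are made-up assumptions. Compile with the flags shown on the next slide:

    __global__ void X( float *data ) { data[threadIdx.x] += 1.0f; }

    __global__ void B( float *data )
    {
        if ( threadIdx.x == 0 )           // one launching thread suffices
        {
            X<<< 1, 256 >>>( data );      // child grid, launched on the device
            cudaDeviceSynchronize();      // wait for the child (caveats later)
        }
    }

    int main()
    {
        float *data;
        cudaMalloc( &data, 256 * sizeof(float) );
        cudaMemset( data, 0, 256 * sizeof(float) );
        B<<< 1, 256 >>>( data );          // parent grid, launched from the host
        cudaDeviceSynchronize();
        cudaFree( data );
        return 0;
    }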

(5)

Compilation

§ Need to add flags & libraries to enable dynamic parallelism

§ Might be the default anyway

§ One invocation (compile + link):

    % nvcc -arch=sm_35 -rdc=true myprog.cu -lcudadevrt -o myprog

§ Separate compile/link invocations:

    % nvcc -arch=sm_35 -rdc=true -c myprog.cu -o myprog.o
    % nvcc -arch=sm_35 myprog.o -lcudadevrt -o myprog

§ rdc = relocatable device code

§ cudadevrt = CUDA 5+ runtime library on the device

(6)

§ A kernel launch within a kernel is asynchronous but nested

§ Parent kernel/grid continues directly after the child kernel launch - just like host code continues after a kernel launch

§ Child kernel/grid is guaranteed to be finished before the parent kernel/grid returns

§ Child kernels inherit the shared memory configuration of the parent

[Figure: timeline - the host (CPU) thread launches Grid A (parent); a Grid A thread launches Grid B (child); Grid B completes before Grid A completes]

(7)

Simple Recursion Example: Quicksort

§ Typical divide-and-conquer algorithm

§ Recursively partition and sort data

§ Entirely data-dependent execution

§ Therefore, notoriously hard to do efficiently with data-independent algorithms (e.g., for the older Fermi architecture)


[Figure: quicksort recursion tree - the array 3 2 2 6 3 9 1 4 5 8 1 8 7 9 2 5 is recursively partitioned around pivots until, eventually, the sorted sequence 1 1 2 2 2 3 3 4 5 5 6 7 8 8 9 9 emerges]

(8)

    __global__ void qsort( float * data, int l, int r )
    {
        // Partition data around pivot value
        float pivot = data[l];
        int cut_index;                  // index where to "cut" data
        partition( data, l, r, pivot, &cut_index );

        // Achieve concurrent kernels by launching left- and
        // right-hand recursive sorts in separate command queues
        cudaStream_t s1, s2;
        cudaStreamCreateWithFlags( &s1, cudaStreamNonBlocking );
        cudaStreamCreateWithFlags( &s2, cudaStreamNonBlocking );
        if ( l < cut_index )            // recurse only on parts with >= 2 elements
            qsort<<< ..., 0, s1 >>>( data, l, cut_index );
        if ( cut_index + 1 < r )
            qsort<<< ..., 0, s2 >>>( data, cut_index+1, r );
    }
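The slide calls partition() without defining it; here is a minimal single-threaded device-side sketch (standard Hoare partitioning - a real implementation would partition in parallel, e.g., with a prefix scan). It establishes data[l..cut_index] <= pivot and data[cut_index+1..r] >= pivot, which is what the recursion guards above rely on:

    __device__ void partition( float * data, int l, int r,
                               float pivot, int * cut_index )
    {
        int i = l - 1, j = r + 1;
        while ( true )
        {
            do { ++i; } while ( data[i] < pivot );   // find element >= pivot
            do { --j; } while ( data[j] > pivot );   // find element <= pivot
            if ( i >= j )
            {
                *cut_index = j;    // "cut" between positions j and j+1
                return;
            }
            float tmp = data[i];  data[i] = data[j];  data[j] = tmp;
        }
    }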

(9)

Things to Watch Out For

§ Memory consistency is maintained only from parent thread to child grid (only for global memory)

§ I.e., if the parent thread writes to global memory, then launches a child grid, the child grid will see the parent's value in global memory

§ However, not the other way round!

§ If a child thread writes to global memory, the parent thread might not see that value - only with proper synchronization (in a minute)

§ This is because of the nesting / overlapping of the child grid with the parent grid

§ Example:

    __device__ int v = 0;

    __global__ void childKernel( void ) {
        printf( "v = %d\n", v );
    }

    __global__ void parentKernel( void ) {
        v = 1;
        childKernel<<<1,1>>>();
        v = 2;   // race condition!
    }

(10)

§ Remember: all threads execute the same code!

§ How many threads running childKernel does this program launch?

§ parentKernel runs with 32 blocks x 32 threads = 1024 threads, and each of them launches a child grid of 32 x 32 = 1024 threads, so 1024 x 1024 = 2^20 ≈ 1 million!

    __global__
    void childKernel( void ) {
        printf( "Secondary\n" );
    }

    __global__
    void parentKernel( void ) {
        printf( "Primary\n" );
        childKernel<<<32,32>>>();
    }

    int main( void ) {
        printf( "Hello World!\n" );
        parentKernel<<<32,32>>>();
        cudaDeviceSynchronize();
        return 0;
    }

(11)

Things to Watch Out For

§ So, if you want only one child grid per parent thread block:

    if ( threadIdx.x == 0 )
        childKernel<<< num_blocks, n_threads_per_block >>>( );

§ Caveat 1: because of asynchronous kernel launches, explicitly synchronize by cudaDeviceSynchronize() if the parent thread needs the results of the child grid:

    __device__ volatile int v = 0;

    __global__
    void parentKernel( void ) {
        printf( "Primary\n" );
        childKernel<<<32,32>>>( &v );
        cudaDeviceSynchronize();
        ... do something with v
    }

(12)

§ Caveat 2: cudaDeviceSynchronize() only knows about the child grid(s) launched by "its own" (parent) block

§ It has no idea about the status / execution progress of any child grids launched by threads in other (parent) blocks!

§ So, usually, the idiom to wait for the completion of the child grid is:

    __global__
    void parentKernel( void ) {
        ...
        __syncthreads();               // common pattern here
        if ( threadIdx.x == 0 )
        {
            childKernel<<<n,m>>>( );
            cudaDeviceSynchronize();   // does NOT imply barrier sync!
        }
        __syncthreads();               // all threads in block wait
        // now all threads in this block can consume the child grid's data
        ...
    }

(13)

Simple Example: cuBLAS Calls

    __global__
    void libraryCall( float *a, float *b, float *c )
    {
        // All threads generate data
        createData( a, b );
        __syncthreads();

        // Only one thread calls the library
        if ( threadIdx.x == 0 )
        {
            cublasDgemm( a, b, c );     // simplified signature
            cudaDeviceSynchronize();
        }

        // All threads wait
        __syncthreads();

        // Now continue
        consumeData( c );
    }

(14)

§ Shared memory is private to each block of threads, and local memory is private to each thread - neither can be seen by child grids

§ So, how to return a value from a child kernel?

§ Remember, kernel declarations need to be void

Wrong - r lives in the parent's local memory, which the child cannot see:

    __global__ void
    child_kernel( void * p ) { ... }

    __global__ void
    parent_kernel( void ) {
        ...
        int r = 0;
        child_kernel<<<1, 256>>>( &r );   // illegal: &r points to local memory!
        ...
    }

Correct - r lives in global memory:

    __global__ void
    child_kernel( void * p ) { ... }

    __device__ int r = 0;   // global mem

    __global__ void
    parent_kernel( void ) {
        ...
        child_kernel<<<1, 256>>>( &r );
        ...
    }
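A third option, not shown on the slide but standard CUDA practice: let the host allocate a result buffer with cudaMalloc() and pass the pointer down - it refers to global memory, so both parent and child grids can access it. A minimal sketch (names are made up):

    __global__ void child_kernel( int * result ) { *result = 42; }

    __global__ void parent_kernel( int * result )   // result -> global memory
    {
        child_kernel<<< 1, 1 >>>( result );
        cudaDeviceSynchronize();                    // before reading *result
        // ... use *result ...
    }

    // Host side:
    //   int *d_result;
    //   cudaMalloc( &d_result, sizeof(int) );
    //   parent_kernel<<< 1, 1 >>>( d_result );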

(15)

Things to Watch Out For

§ Caveat 3: beware of out-of-memory

§ Each child kernel launch requires ~150 MB of GPU memory - for storing the state of all (potentially the maximal number of) parent threads

(16)

§ Recap: the "escape" algorithm to compute the Mandelbrot image

§ Reminder: we need to calculate this sequence for all "pixels" c in the complex plane C:

    z_{i+1} = z_i^2 + c ,   z_0 = 0

§ Denote with escape count the first index i where |z_i| > 2

§ Input: cmin and cmax = lower left & upper right corner of the window in C, resp.

    for each pixel c inside (cmin, cmax) do in parallel:
        esc = compute_escape_count( c, max_iter )
        if ( esc == max_iter )
            color pixel c with black
        else
            convert esc to color, e.g., hue value in HSV
            set pixel c's color
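A minimal sketch of the per-pixel escape count (the function name comes from the pseudocode above; the complex number c is passed as two floats cr, ci):

    __device__ int compute_escape_count( float cr, float ci, int max_iter )
    {
        float zr = 0.0f, zi = 0.0f;             // z_0 = 0
        for ( int i = 1; i <= max_iter; ++i )
        {
            float t = zr*zr - zi*zi + cr;       // z_i = z_{i-1}^2 + c
            zi      = 2.0f*zr*zi + ci;
            zr      = t;
            if ( zr*zr + zi*zi > 4.0f )         // |z_i| > 2  <=>  |z_i|^2 > 4
                return i;                       // first escaping index
        }
        return max_iter;                        // did not escape
    }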

(17)

§ Observation: far away from the border of the Mandelbrot set M, large regions have the same esc value

§ Theorem (w/o proof): M is connected

§ Consequence: if the border of a region R is (completely) inside M, then all points of R are inside M

§ Goal: exploit these two facts to create an adaptive algorithm

§ The algorithm will spend more time on pixels close to the border of M

(18)

1. Compute the esc values for all pixels on the border of a sub-rectangle inside the current window

2. Check whether or not all esc values on this border are equal

§ If yes, color all pixels in this sub-rectangle with the same color

§ If no, partition the rectangle into smaller sub-rectangles and recurse

§ Stop the recursion if the size of the rectangle is "small enough"

(19)

A Few Details

§ Esc values for all pixels on the border:

§ Don't necessarily need to sample the border by pixel distance

§ Could sample the border more densely or more coarsely

§ Store esc values in shared memory (for the second step)

§ Check whether all escape values are equal:

§ Use parallel reduction with '==' as the binary operator (see the sketch below)
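For illustration, a minimal block-level "all equal?" reduction sketch - a simplified 1D view under stated assumptions: n is a power of two, esc values are >= 0, all threads of the block call the function, and -1 serves as the "not equal" marker:

    __device__ bool allBorderValuesEqual( int * esc, int n )
    {
        int tid = threadIdx.x;
        for ( int stride = n / 2; stride > 0; stride /= 2 )
        {
            if ( tid < stride && esc[tid] != esc[tid + stride] )
                esc[tid] = -1;         // mark "not equal"; propagates upward
            __syncthreads();
        }
        return esc[0] != -1;           // esc[0] holds the common value, or -1
    }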

(20)

    int main( void ) {
        // set up initial grid of blocks
        dim3 threads_per_block( ... );
        dim3 blocks( ... );
        // call kernel with user-defined rectangle of C
        mandelbrot<<<...>>>( image, cmin, cmax );
    }

(21)

    __global__ void mandelbrot( Color image[][], cmin, cmax ) {
        // each block corresponds to a rectangle in [cmin,cmax]
        if ( size(rectangle(blockIdx)) < blockDim.x * blockDim.y )
            every thread within block computes its "own" pixel color
        else
        {
            // every thread calculates esc for a point on the border
            __shared__ int esc[blockDim.x][blockDim.y];   // overkill
            calcBorderEscValues( blockIdx, blockDim, cmin, cmax, esc );
            reduce esc  ->  esc[0][0] contains answer
            if ( esc[0][0] says "equal" )
                parallel fill this block's rectangle with color(esc)
            else
            {
                if ( threadIdx.x != 0 || threadIdx.y != 0 )
                    return;   // only thread 0 will launch sub-grid
                cmin_of_block = cmin + (cmax-cmin)/blockDim * blockIdx;
                cmax_of_block = ...
                mandelbrot<<<...>>>( image, cmin_of_block, cmax_of_block );
            }
        }
    }

(22)

[Plot: throughput in Mpixels/sec (0-2500) over image size (4k, 8k, 16k), for max. iter. = 512, per-pixel vs. adaptive]

(23)

A General Pattern for Solvers on Grids

§ Whenever you solve differential equations on a grid, consider using the multigrid method

§ E.g., heat transfer

§ Propagate values on the coarse grids (to distribute them over a wide range) and on the fine grids (for accuracy)

§ Several variants of how to switch from coarse to fine and back ("V cycle", "W cycle", ...)

[Figure: the same grid at resolutions h, 2h, 4h, 8h]

(24)

§ Goal:

§ Fine-grained dynamic load balancing

§ Tasks can have different priorities

§ Scenario:

§ Launch kernel on a number of initial tasks

§ Each thread works on a task and produces more tasks

§ New tasks are assigned a priority (e.g., by the producing thread)

§ Solution:

§ "Megakernels" (one kernel runs until all tasks are done; see the sketch below)

§ Must make sure that threads within a block work on tasks of the same type
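A minimal persistent-threads "megakernel" sketch - all names are made up, and task production and priorities are omitted, so it only shows the core idea: one long-running kernel whose threads dequeue tasks lock-free from a global queue until it is drained:

    struct Task { int type; int payload; };

    __device__ int g_head = 0;     // next task index to dequeue

    __device__ void process( const Task &t ) { /* ... work on the task ... */ }

    __global__ void megakernel( const Task * queue, int n_tasks )
    {
        while ( true )
        {
            int i = atomicAdd( &g_head, 1 );   // lock-free dequeue
            if ( i >= n_tasks )
                return;                        // queue drained -> kernel ends
            process( queue[i] );               // in the full scheme, this may
                                               // also enqueue new tasks
        }
    }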

(25)

Example Application: REYES Rendering Pipeline

§ Invented at Lucasfilm (later Pixar) by Cook et al., 1987

§ Goal: no compromise on visual quality (antialiasing, motion blur, transparency, etc.)

§ Compare to OpenGL

§ Implemented in Pixar's RenderMan

§ And many more implementations

§ Pixar has defined a RenderMan standard ("PostScript for 3D")

(26)
(27)

§ Available primitives: lots of high-level primitives!

§ Bézier and B-spline surfaces

§ Quadrics

§ Procedural objects, e.g., L-systems

§ Particle systems

§ ...

(28)

1. Input: higher-order surfaces

2. Generate "micropolygons" from input (split & dice)

§ Split = recursively subdivide primitive into patches until "small" enough

§ Dice = uniformly tessellate patch into micropolygons (~1/2 pixel)

3. Shade micropolygons

§ Shoot "shadow feelers" to light sources

§ Evaluate lighting model (e.g., Phong)

4. Perform stochastic sampling

§ Random sampling of screen pixels

§ Store samples (color & depth)

5. Compose using A-buffer

§ For anti-aliasing and transparency

(29)

Rough Sketch of the Reyes Pipeline

§ Split & dice:

[Diagram: initial primitive → Bound & Cull (discard if not visible) → Diceable? → No: Split into smaller patches and feed them back; Yes: Dice]

(30)

[Diagram: the same split & dice flow, now showing the result of dicing - a patch, parameterized in (u,v), tessellated into micropolygons of about 1/2 pixel size]

(31)

Remarks on the Advantages of the Reyes Architecture

§ Only one shader programming stage (shading of micropolygons)

§ Can do anything you want in the shader (e.g., cast rays, displacement of geometry, etc.)

§ Supports lots of primitives

§ They just have to bring along a procedure for subdivision in u,v

§ Micropolygons can be shaded in parallel trivially

§ Simplicity:

§ No attribute interpolation necessary (during rasterization)

§ No perspective correction necessary (like for texture coords)

§ Resolution of the final geometry adapts to the viewpoint distance automatically

§ Allows for expensive effects, e.g., motion blur, depth-of-field, ...

§ Render times: approx. 3 hours / frame (2-29 h)

(32)

§ Idea: spend more time on image regions where the user is looking

§ The cursor gives the eye gaze position

(33)

Fine-Grain Task Scheduling

§ Requirements:

1. Maintain multiple work queues for different task types

2. Lock-free enqueuing and dequeuing of tasks (mostly)

3. Reorganization of queues wrt. task priorities must not block worker threads

§ Methods to organize queues by priority:

[Diagram: two queue organizations (I and II) - e.g., a single queue holding tasks ordered from low to high priority, vs. dedicated high- and low-priority queues; enqueuing filters by priority, dequeuing filters by task type]

(34)

§ Assume hundreds of queues

§ When a block of N threads is finished, it requests N new tasks

§ Use these threads to perform a bottom-up search for a suitable queue:

1. Put the queue occupancies in an array

2. Perform a segmented max-scan

3. Reduction: find the first element >= N

§ Binary operator ⬦ :

    a ⬦ b = a ,  if a >= N
            b ,  if a < N and b >= N
            0 ,  else

[Figure: worked example with N = 4 - occupancy counts (0 0 0 0 3 4 7 6 1 0 4 7), segmented max-scan, then find the first element >= N]
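A direct transcription of this operator into CUDA (a and b are queue occupancies or partial scan results; N is the number of requested tasks):

    __device__ int combine( int a, int b, int N )
    {
        if ( a >= N )  return a;   // left argument already has enough tasks
        if ( b >= N )  return b;   // otherwise the right one, if it suffices
        return 0;                  // neither queue has enough tasks
    }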

(35)

Updating Queues Containing Different Priorities

§ Queues containing tasks with different priorities must be reordered when new tasks are enqueued

§ Locking the queue is not an option

§ Observation: perfect sorting is not necessary (in most scenarios)

§ Idea:

§ Continuously sort inside a moving window, using a (configurable) number of sorting threads

§ Keep a safety margin to the front of the queue, to allow for task dequeuing

[Figure: a sorting window moves along the queue - sort, advance, sort, ..., finish/restart]

(36)

§ Fill the "initial" (orange) queue with primitives from the host

§ Priorities: 1. orange, 2. green, 3. split

§ Split queues contain tasks with different priorities

§ Priority of patches = function of patch size on screen and distance from the look-at point

§ Enqueue split tasks into the red & blue queues

§ Time-critical rendering: the kernel works on tasks as long as the time budget is not exhausted

§ Use the cycle counters of the multiprocessors on the GPU

[Diagram: queue setup - initial primitives, dice & shade tasks (high prio / low prio), and split tasks on (u,v) patches]

(37)

More Results

[Figure: full quality (58 msec) vs. adaptive quality (20 msec), with the look-at point, the split pattern, and the difference image exaggerated 200x]
