
As this was a design goal, most of the rendering algorithms described in this work can take advantage of modern programmable graphics hardware: they are implemented as vertex and fragment programs (see Fig. 8.1). The rendering process is split into two main parts.

In the first stage, the scene is rendered into multiple buffers with all the attributes necessary for further processing. Level of detail control, leaf primitive abstraction, and 3D shape abstraction are carried out during this rendering pass. The parameters derived from the HLP model, most notably the projection ~E of the HLP normal vector, as well as the hatch texture parametrization, are also computed here and stored into 2D buffers.

In the second part, the attributes collected in the multiple buffers are processed in order to construct and display the different illustration elements, which are combined according to the parameters specified by the user. The light tone for hatching is computed as a blurred grayscale version of the color image stored in the buffer, and the pre-sampled hatch texture is applied accordingly. Image-space contours are detected using depth buffer information and stylized by applying the silhouette art map after the computation of a 2D parametrization.
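The light tone computation described above can be sketched as follows. The function name, the nested-list buffer representation, and the box-blur radius are illustrative assumptions; the actual implementation runs as a pixel program on the GPU.

```python
def light_tone(color_buffer, radius=2):
    """Compute the hatching light tone: a box-blurred grayscale
    image of the color buffer (a list of rows of (r, g, b) tuples).
    A CPU sketch of the post-process step, not the thesis shader."""
    h, w = len(color_buffer), len(color_buffer[0])
    # Luminance conversion (Rec. 601 weights).
    gray = [[0.299 * r + 0.587 * g + 0.114 * b
             for (r, g, b) in row] for row in color_buffer]
    tone = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += gray[yy][xx]
                        n += 1
            tone[y][x] = acc / n  # average over the valid neighborhood
    return tone
```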

Figure 8.2: Layout of the rendering pipeline: part one computes the multiple buffers, part two performs the final rendering.

The implementation and test setup was a 2.4 GHz Pentium 4 with an nVidia GeForce 6600 graphics card, using the Microsoft DirectX API with 2.0-class vertex and pixel programs (equivalent to OpenGL's GLSL programs). For rendering different attributes of the scene into multiple buffers, we use the capability of modern graphics hardware that allows pixel programs to write into multiple render targets (frame buffers), thus avoiding multiple render passes.

Some implementation details of the different NPR visual features depicted in Fig. 8.3 are described in the rest of this section. Their integration is controlled by parameters which can be varied continuously between 0 and 1; for each technique, the associated parameters are briefly described.


Figure 8.3: An overview of rendering styles. The axes correspond to sliders in our interactive editor.

8.2.1 High-level abstraction

The computation of the HLP model, together with level of detail, is done in a pre-processing step, and the following attributes are stored at each vertex: the normalized HLP normal ~NHLP and the distance dHLP from the original position to the HLP surface.

Parameters. At rendering time, a parameter controls the blending between the original and the HLP geometry, which is implemented in the vertex program as described in Eq. 4.8.
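A minimal sketch of this vertex-level blend is given below. The linear interpolation and the reconstruction of the HLP position from the stored normal and distance are assumptions standing in for Eq. 4.8; the function name is illustrative.

```python
def blend_hlp(p_orig, n_hlp, d_hlp, t):
    """Blend a vertex between its original position and its HLP
    projection. The HLP position is reconstructed here as the
    original position displaced by d_hlp along the normalized HLP
    normal (an assumption); t = 0 keeps the original geometry,
    t = 1 gives the full HLP abstraction."""
    p_hlp = tuple(p + d_hlp * n for p, n in zip(p_orig, n_hlp))
    return tuple((1.0 - t) * po + t * ph
                 for po, ph in zip(p_orig, p_hlp))
```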

8.2.2 Silhouette detection

Real-time silhouette detection is implemented in the post-processing shader. The edge detection filters have been described in Section 5.1 and are straightforward to implement.

Parameters. The user can specify the edge detection threshold and thus display more or fewer contour lines.
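A minimal stand-in for the depth-based edge detection is sketched below; the 4-neighbor difference test is a simplification of the filters of Section 5.1, and the function name is illustrative.

```python
def detect_contours(depth, threshold):
    """Image-space contour detection on the depth buffer: a pixel
    is marked as contour when the depth difference to any of its
    4-neighbors exceeds the user-set threshold. Raising the
    threshold yields fewer contour lines."""
    h, w = len(depth), len(depth[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                yy, xx = y + dy, x + dx
                if (0 <= yy < h and 0 <= xx < w
                        and abs(depth[y][x] - depth[yy][xx]) > threshold):
                    edges[y][x] = True
                    break
    return edges
```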

8.2.3 Silhouette stylization

This part is also implemented in a post-process pixel program, which relies on the 1-pixel-wide contours detected as in the previous section. The core of the algorithm is the 2D parametrization of the lines, which is performed in image space by computing (u, v) coordinates at each pixel.

For the v coordinate, perpendicular to the contour, a search in the frame buffer is performed, starting from the current pixel in the direction indicated by the vector ~E, in order to determine the distance to the furthest contour pixel within a neighborhood.

Because pixel programs are limited with respect to the number of available instructions (loops must be unrolled), one must set a finite limit on the search distance.

The maximum width of the silhouette support handled in our implementation is 16 pixels, but this number can be increased at the cost of longer shaders: each additional pixel requires a texture sampling instruction, as well as an update of the distance to the origin of the search.
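The bounded search can be sketched as follows. The nearest-pixel stepping and the return convention are assumptions; in the shader this loop is unrolled into 16 texture fetches.

```python
MAX_WIDTH = 16  # implementation limit; each extra pixel would cost
                # one more texture fetch in the unrolled shader loop

def v_coordinate(is_contour, x, y, ex, ey):
    """Search from pixel (x, y) along the direction (ex, ey) (the
    projected normal ~E) and return the distance to the furthest
    contour pixel within MAX_WIDTH steps, or None if no contour
    pixel is found. A CPU sketch of the unrolled shader loop."""
    h, w = len(is_contour), len(is_contour[0])
    furthest = None
    for step in range(MAX_WIDTH):
        xx = x + round(step * ex)
        yy = y + round(step * ey)
        if not (0 <= yy < h and 0 <= xx < w):
            break  # left the frame buffer
        if is_contour[yy][xx]:
            furthest = step  # keep the furthest hit so far
    return furthest
```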

Parameters. The user can specify the stroke texture (silhouette art map, SAM), the width of the contour support (up to the maximum size allowed by the implementation), and the scaling of the SAM along the contour (thus being able to stretch or squeeze the strokes).

8.2.4 Leaf shape and size

Leaves are rendered either as variable-size particles or as billboard clouds. In the former case, each particle is represented by a 3D location which is expanded to a camera-facing triangle in the vertex shader. The variable-shape leaf texture is mapped onto this triangle and the shape is determined by the alpha threshold. Optionally, the leaf outline is drawn by darkening the pixels whose alpha values lie in the vicinity of the threshold.
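The particle expansion can be sketched as below. The specific triangle layout (one triangle covering the unit texture quad) and the function name are assumptions; in the implementation this runs in the vertex shader.

```python
def expand_particle(center, right, up, size):
    """Expand a leaf particle (a single 3D point) into a
    camera-facing triangle by offsetting the center along the
    camera's right and up vectors. The leaf texture is then mapped
    onto the triangle and its shape cut out by the alpha threshold."""
    cx, cy, cz = center
    def corner(sr, su):
        return (cx + size * (sr * right[0] + su * up[0]),
                cy + size * (sr * right[1] + su * up[1]),
                cz + size * (sr * right[2] + su * up[2]))
    # One oversized triangle that fully contains the unit quad
    # of the leaf texture (an assumed layout choice).
    return [corner(-1.0, -1.0), corner(3.0, -1.0), corner(-1.0, 3.0)]
```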

If billboard clouds are used, one cannot control the size of the primitives, but only the shape (more or less abstract).

Parameters. The size and the shape of the leaves, as well as the thickness of the leaf outline.

8.2.5 Hatching

The parametrization needed to apply hatch stroke textures (tonal art maps, TAMs) is computed in the vertex shader, as described in Section 6.1. Although it is applied in the post-processing stage, we preferred to sample the stroke texture in the first rendering stage (pixel shader) to avoid texture coordinate precision issues. Then, in a post-processing pixel shader, the desired tone combination is selected according to smooth lighting and contrast settings.
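The tone selection can be sketched as a linear blend between the two TAM levels bracketing the desired tone. The representation of levels as sampling callables and the clamping behavior are assumptions for this sketch.

```python
def sample_tam(tam_levels, u, v, tone):
    """Select the hatch intensity for a pixel from a tonal art map.
    tam_levels[i] is a function (u, v) -> intensity for tone level
    i, ordered dark to bright; the two levels bracketing `tone`
    (in [0, 1]) are blended linearly."""
    n = len(tam_levels) - 1
    t = max(0.0, min(1.0, tone)) * n   # scale tone to level index
    lo = min(int(t), n - 1) if n > 0 else 0
    frac = t - lo
    a = tam_levels[lo](u, v)
    b = tam_levels[min(lo + 1, n)](u, v)
    return (1.0 - frac) * a + frac * b
```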

We also developed a software tool, DrawSTAM, to allow easy design of art maps (both SAMs and TAMs). The graphical user interface can be seen in Fig. 8.4. In order to increase interactivity, DrawSTAM can be used either as a stand-alone application or integrated into the rendering system. The second option allows interactive "on-the-fly" painting of art maps with immediate feedback: as the user paints a stroke onto the art map texture, it becomes immediately visible in the rendered image, which makes the user interaction very intuitive.

Figure 8.4: User interface for drawing tonal art maps (TAMs) and silhouette art maps (SAMs).

Parameters. The tonal art map (TAM) texture can be specified, as well as the tone level and contrast (as the tone level becomes brighter, hatch strokes disappear, because the brightest tone in a TAM is considered to be pure white, i.e., no strokes).

8.2.6 Color

Abstract color is computed in the first rendering stage, using a user-controlled blend between the abstract shading model described in Section 6.2 and photorealistic shading.

Parameters. The amount (saturation) of color that is being blended in the sketch can be specified. The abstraction degree of the color can also be controlled, from realistic to abstract.
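The two color parameters can be sketched as two successive blends. The order of the blends and the use of Rec. 601 luminance as the desaturated fallback are assumptions for this sketch, not the thesis shader.

```python
def final_color(abstract_rgb, realistic_rgb, abstraction, saturation):
    """Blend the abstract shading model with photorealistic shading
    (parameter `abstraction`, 0 = realistic, 1 = abstract), then
    control how much color is mixed into the sketch (parameter
    `saturation`, 0 = grayscale, 1 = full color)."""
    blended = tuple((1.0 - abstraction) * r + abstraction * a
                    for a, r in zip(abstract_rgb, realistic_rgb))
    # Desaturate toward luminance (Rec. 601 weights, an assumption).
    lum = 0.299 * blended[0] + 0.587 * blended[1] + 0.114 * blended[2]
    return tuple((1.0 - saturation) * lum + saturation * c
                 for c in blended)
```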

8.2.7 Photorealism

In order to achieve maximum flexibility, the rendering pipeline also allows a seamless transition to photorealistic rendering. Due to the parametrized structure of the whole NPR pipeline, this can be easily implemented by controlling a set of parameters during rendering. As the photorealism-sketch control parameter goes from 1 (sketch) to 0 (full photorealism), the following parameters also change their values:

• the high-level abstraction blending parameter changes from its currently set value to 0 (no HLP)

• edge detection filter threshold from current value to 1 (no silhouettes)

• line thickness from the current value to 0 (no lines)

• leaf shape from its current value to the original leaf shape

• hatch tone from the current value to brightest (no hatch strokes)

• color saturation from the current value to full color and color abstraction from the current value to realistic color

Additionally, several changes in the rendering process also occur, in order to account for changes in level-of-detail and the transition from abstract leaf primitives to the original model.
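The parameter coupling above can be sketched as one interpolation driven by the master parameter; the dictionary keys, the numeric encoding of "original leaf shape", and the function name are illustrative assumptions.

```python
def photorealism_blend(style, s):
    """Interpolate every style parameter from its user-set sketch
    value toward its photorealistic target as the master parameter
    s goes from 1 (sketch) to 0 (full photorealism). The targets
    paraphrase the list above."""
    targets = {
        'hlp_blend': 0.0,         # no high-level abstraction
        'edge_threshold': 1.0,    # no silhouettes
        'line_thickness': 0.0,    # no lines
        'leaf_shape': 0.0,        # 0 = original leaf shape (assumed)
        'hatch_tone': 1.0,        # brightest tone, no hatch strokes
        'color_saturation': 1.0,  # full color
        'color_abstraction': 0.0, # realistic color
    }
    return {k: targets[k] + s * (style[k] - targets[k]) for k in targets}
```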

The need for a blend between sketch and photorealism may not be obvious but, at second glance, it offers several advantages:

• the possibility to easily obtain renditions in between photorealism and sketch allows a continuous range of expression possibilities if the degree of realism is associated with a semantic meaning (for example, the degree of certainty of a plan)

• seamless visual transitions as such can be used to create a language to express "landscape stories"

8.2.8 Style combination

The visual style, i.e., the appearance of the scene in the final image, is controlled by the parameters corresponding to each sketchy element, which can be set by the user. In a practical visualization setup, however, applying a single style over the entire scene is not very useful, as different parts of the scene need to be highlighted or separated from others (it was the very goal of this research to allow differentiation of landscape aspects). Thus, the ability to combine several styles in the same view is essential.

Technically, this is achieved by splitting the original landscape into several layers, which is common practice in landscape planning and architecture anyway. Each layer is then rendered in a separate step using its own set of rendering parameters, and thus its own style (see the images in the color plates, Fig. 8.9).
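The per-layer rendering and compositing can be sketched as below. The callable `render` interface, the single-pixel stand-in for a whole frame, and the back-to-front over-operator compositing are assumptions for this sketch.

```python
def render_layers(layers, render):
    """Render each landscape layer with its own style parameters and
    composite the results back to front with the standard "over"
    operator. `render(geometry, style)` is assumed to return an
    (rgb, alpha) pair; one pixel stands in for the whole frame."""
    out_rgb, out_a = (0.0, 0.0, 0.0), 0.0
    for geometry, style in layers:  # ordered back to front
        rgb, a = render(geometry, style)
        out_rgb = tuple(a * c + (1.0 - a) * o
                        for c, o in zip(rgb, out_rgb))
        out_a = a + (1.0 - a) * out_a
    return out_rgb, out_a
```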