

Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery.

However, a photorealistic rendering does not automatically lead to high image quality with respect to an effective information transfer, which requires important or prioritized information to be highlighted interactively and in a context-dependent manner. Approaches in non-photorealistic rendering, by contrast, explicitly consider a user's tasks and camera perspective to achieve an optimal expression, recognition, and communication of important or prioritized information. The design and implementation of non-photorealistic rendering techniques for 3D geospatial data, however, pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. In this respect, the programmable and parallel computing architecture of graphics processing units establishes a promising technical foundation.

This thesis proposes non-photorealistic rendering techniques that enable both the computation and selection of the level of abstraction of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. Unlike photorealistic rendering, the techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics to synthesize illustrative renditions of entities of geospatial feature types such as water surfaces, buildings, and infrastructure networks.
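As a rough illustration of such shader-based integration, the following GLSL fragment shader sketches how a per-fragment thematic importance value could select between a detailed and an abstracted shading result at render time. It is a minimal sketch under stated assumptions, not the techniques developed in this thesis: the uniform and texture names (u_albedoMap, u_importanceMap, u_levels) are hypothetical, and simple tone-band quantization stands in for the artistic and cartographic abstraction principles discussed above.

```glsl
#version 330 core
// Hypothetical sketch: blend a plain shaded result with a stylized
// (tone-quantized) rendition per fragment, driven by thematic importance.

in vec3 v_normal;       // interpolated surface normal
in vec2 v_texCoord;     // texture coordinates of the geospatial feature
in vec3 v_lightDir;     // direction towards the light source

uniform sampler2D u_albedoMap;      // surface color of the 3D model (assumed)
uniform sampler2D u_importanceMap;  // thematic importance in [0,1], updated at runtime (assumed)
uniform int       u_levels;         // number of discrete tone bands for the abstracted look (assumed)

out vec4 fragColor;

void main()
{
    vec3  n       = normalize(v_normal);
    vec3  l       = normalize(v_lightDir);
    float diffuse = max(dot(n, l), 0.0);
    vec3  albedo  = texture(u_albedoMap, v_texCoord).rgb;

    // Detailed branch: plain Lambertian shading.
    vec3 realistic = albedo * diffuse;

    // Abstracted branch: quantize the diffuse term into a few tone bands,
    // a simple stand-in for illustrative, toon-like rendering.
    float banded   = floor(diffuse * float(u_levels)) / float(u_levels);
    vec3  stylized = albedo * banded;

    // Thematic importance selects the level of abstraction per fragment:
    // important features stay detailed, less important ones are abstracted.
    float importance = texture(u_importanceMap, v_texCoord).r;
    fragColor = vec4(mix(stylized, realistic, importance), 1.0);
}
```

Keeping the level-of-abstraction decision in the fragment stage means it can react each frame to user interaction and to updated importance values without modifying the underlying geometry, which is one plausible way to meet the interactivity requirement stated above.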
