Section: Overall Objectives

General Introduction

Computer-generated images are ubiquitous in our everyday life. Such images are the result of a process that has hardly changed over the years: the optical phenomena due to the propagation of light in a 3D environment are simulated by taking into account how light is scattered [51], [28] according to the shape and material characteristics of objects. The intersection of optics (for the underlying laws of physics) and computer science (for modeling and computational efficiency) provides a unique opportunity to tighten the links between these domains, first to improve the image generation process (computer graphics, optics, and virtual reality) and then to develop new acquisition and display technologies (optics, mixed reality, and machine vision).
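
As a toy illustration of this process, the sketch below (in Python, with purely illustrative names and values) evaluates one term of such a simulation at a single surface point: incoming light, the surface normal (shape), and a Lambertian BRDF (matter) are combined into a pixel contribution. It is a minimal sketch, not an actual rendering system.

    import numpy as np

    def lambertian_brdf(albedo):
        """Matter: a constant (Lambertian) BRDF, rho / pi."""
        return albedo / np.pi

    def shade(normal, light_dir, light_radiance, albedo):
        """One evaluation of the local reflection model for a single
        directional light: L_o = f_r * L_i * max(0, n . l)."""
        cos_theta = max(0.0, float(np.dot(normal, light_dir)))
        return lambertian_brdf(albedo) * light_radiance * cos_theta

    # Example: white light hitting a reddish surface at 45 degrees.
    n = np.array([0.0, 0.0, 1.0])                 # shape: surface normal
    l = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)  # light: incoming direction
    color = shade(n, l,
                  light_radiance=np.array([1.0, 1.0, 1.0]),
                  albedo=np.array([0.8, 0.3, 0.2]))  # matter: diffuse albedo
    print(color)  # contribution to the final pixel color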

Most of the time, light, shape, and matter properties are studied, acquired, and modeled separately, relying on realistic or stylized rendering processes to combine them into final pixel colors. Such modularity, inherited from classical physics, has the practical advantage of allowing the same models to be reused in various contexts. However, these independent developments lead to unoptimized pipelines and difficult-to-control solutions, since it is often unclear which part of the expected result is caused by which property. Indeed, the most efficient solutions are often those that blur the frontiers between light, shape, and matter to obtain specialized and optimized pipelines, as in real-time applications (e.g., Bidirectional Texture Functions [61] and light-field rendering [26]). Keeping these three properties separate can also lead to other problems. For instance:

  • Measured materials are too detailed to be directly usable in rendering systems, so data reduction techniques have to be developed [59], [62] (see the sketch after this list), leading to an inefficient transfer between the real and digital worlds;

  • It is currently extremely challenging (if not impossible) to directly control or manipulate the interactions between light, shape, and matter. Physically accurate lighting simulations may produce results that do not fulfill users' expectations;

  • Artists can spend hours or even days modeling highly complex surfaces whose details end up invisible [82] because of an inappropriate choice of light sources or reflection properties.
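
To make the first point above concrete, below is a minimal sketch of one classical family of data reduction techniques: factorizing a tabulated reflectance dataset with a truncated SVD and keeping only the most significant terms. The dimensions and the random data are placeholders, not actual measurements.

    import numpy as np

    # Stand-in for acquired reflectance samples: 900 incoming x 900
    # outgoing directions (real measured datasets are far denser).
    measured = np.random.rand(900, 900)

    # Truncated SVD: keep only the k most significant factors.
    U, s, Vt = np.linalg.svd(measured, full_matrices=False)
    k = 16
    U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

    # Reconstruction usable at render time: a k-term sum instead of
    # a lookup in the full measured table.
    approx = U_k @ np.diag(s_k) @ Vt_k
    print("storage ratio:",
          (U_k.size + s_k.size + Vt_k.size) / measured.size)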

Most traditional applications target human observers. Depending on how deeply we take the specificity of each user into account, the requirements on representations and algorithms may differ.

Figure 1. Examples of new display technologies; nowadays, they are not limited to a simple array of 2D low-dynamic-range RGB values. Left: auto-stereoscopic display (©Nintendo). Middle: HDR display (©Dolby Digital). Right: printing both geometry and material [43].

With the evolution of measurement and display technologies that go beyond conventional images (e.g., as illustrated in Figure 1, high-dynamic-range imaging [72], stereo displays or new display technologies [47], and physical fabrication [17], [35], [43]), the frontiers between real and virtual worlds are vanishing [31]. In this context, a sensor combined with computational capabilities may also be considered another kind of observer. Creating separate models of light, shape, and matter for such an extended range of applications and observers is often inefficient and sometimes produces unexpected results. Pertinent solutions must take into account the properties of the observer (human or machine) and the goals of the application.
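
As a toy illustration of observer-dependent processing, the sketch below prepares the same HDR pixel values for two kinds of observers: a simple global Reinhard tone-mapping operator with gamma encoding for a human viewing a conventional display, and untouched linear radiance for a machine-vision algorithm. All names and values are illustrative assumptions, not part of any specific pipeline.

    import numpy as np

    def for_human_display(hdr, exposure=1.0):
        """Human observer: compress the dynamic range (global Reinhard
        operator) and gamma-encode for a conventional LDR display."""
        x = hdr * exposure
        ldr = x / (1.0 + x)          # global Reinhard tone mapping
        return ldr ** (1.0 / 2.2)    # display gamma

    def for_machine_vision(hdr):
        """Machine observer: keep linear radiance, which measurement
        or detection algorithms typically prefer over display-encoded
        values."""
        return hdr

    hdr_pixels = np.array([0.01, 0.5, 4.0, 120.0])  # placeholder radiances
    print(for_human_display(hdr_pixels))
    print(for_machine_vision(hdr_pixels))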