Research Areas: Computer Graphics & Vision


Visual computing encompasses technologies and applications that integrate computer graphics with computer vision. Computer graphics covers the foundations and applications of acquiring, representing, and interacting with the three-dimensional (3D) real and virtual worlds, while computer vision provides a deeper understanding of the real world from two-dimensional (2D) images or video. The Visual Computing Laboratory (VCLAB) at KAIST is particularly interested in visual phenomena related to light transport, from a light source, via its traversal over 3D surfaces, to visual perception in our brain. Our research spans the fundamental elements of the real world: light, color, geometry, simulation, and the interactions among them. In particular, we focus on acquiring material appearance for better color representation in 3D graphics, hyperspectral 3D imaging for a deeper physical understanding of light transport, and color perception in 3D for a deeper understanding of color. Our contributions enable various hardware designs and software applications in visual computing.   - Dr. Min H. Kim

The titles of all our publications are visualized by Wordle in October 2017.


High-Performance Advanced Imaging:


3D Imaging Spectroscopy

We introduce an end-to-end measurement system for capturing spectral data on 3D objects. We developed a compressive-sensing imager suited to acquiring such data in the hyperspectral range at high spectral and spatial resolution. We fully characterize the imaging system and document its accuracy. The imager is integrated into a 3D scanning system to enable measurement of diffuse spectral reflectance and fluorescence.
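The calibration step behind a diffuse spectral reflectance measurement can be sketched with the standard flat-field correction (a generic textbook procedure, not VCLAB's exact pipeline; the band readings below are invented for illustration):

```python
# Sketch: recovering per-band diffuse spectral reflectance from raw
# spectrometer readings. Inputs: sample signal S, dark current D, and a
# white-reference measurement W with known reflectance R_white.

def spectral_reflectance(sample, dark, white, white_refl=1.0):
    """Flat-field calibration per wavelength band:
    R = R_white * (S - D) / (W - D)."""
    return [white_refl * (s - d) / (w - d)
            for s, d, w in zip(sample, dark, white)]

# Toy 4-band example (e.g. readings at 450/550/650/750 nm).
dark   = [10.0, 12.0, 11.0, 10.0]
white  = [210.0, 212.0, 211.0, 110.0]
sample = [110.0, 112.0, 61.0, 60.0]
print(spectral_reflectance(sample, dark, white))  # [0.5, 0.5, 0.25, 0.5]
```

Dividing out the white reference per band removes both the illuminant spectrum and the sensor's spectral sensitivity, which is why the result is a property of the surface alone.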


High-Dynamic-Range Color Reproduction

Classical color reproduction systems fail on HDR images because of the wide dynamic range of luminance they contain. Motivated by the goal of bridging the gap between cross-media color reproduction and HDR imaging, this project investigates the fundamentals and infrastructure of cross-media color reproduction, restructures them with respect to HDR imaging, and develops a novel reproduction system for HDR images.


High-Dynamic-Range Imaging

Digital imaging has become standard practice, but digital images are optimized for plausible visual reproduction of a physical scene. Visual reproduction, however, is just one application of digital images. We propose a novel characterization technique for HDR imaging that allows us to build a physically meaningful HDR radiance map for measuring real-world radiance. The accuracy of this technique rivals that of a spectroradiometer.

Publications:

  • Seung-Hwan Baek, Incheol Kim, Diego Gutierrez, Min H. Kim (2017), “Compact Single-Shot Hyperspectral Imaging Using a Prism,” ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2017), 36(6), pp. 217:1–12

  • Min H. Kim, Todd Alan Harvey, David S. Kittle, Holly Rushmeier, Julie Dorsey, Richard O. Prum, David J. Brady (2012), “3D Imaging Spectroscopy for Measuring Hyperspectral Patterns on Solid Objects,” ACM Transactions on Graphics (Proc. SIGGRAPH 2012), 31, pp. 1–11

  • Min H. Kim, Jan Kautz (2008), “Characterization for High Dynamic Range Imaging,” Computer Graphics Forum (Proc. EUROGRAPHICS 2008), 27(2), April 2008, pp. 691–697

 

   


Machine Learning-based Graphics and Vision:


Deep Learning-based Advanced Spectral Imaging

We developed a novel hyperspectral imaging system that reconstructs spectral information from compressive input with very high accuracy. We built a spatio-spectral compressive imager that works together with our spectral reconstruction algorithm to provide both high spatial and high spectral resolution, overcoming the long-standing trade-off in compressive hyperspectral imaging.
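The compressive-reconstruction principle can be sketched with textbook matching pursuit on a toy dictionary (this stands in for, and is far simpler than, the actual learned spectral reconstruction; the atoms and measurements below are invented):

```python
# Sparse recovery sketch: measurements y = D @ x with x sparse over the
# dictionary D. Matching pursuit greedily picks the atom that best
# correlates with the current residual.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(y, atoms, iters=2):
    """atoms: list of unit-norm dictionary columns; returns coefficients."""
    coeffs = [0.0] * len(atoms)
    r = list(y)                      # residual starts as the measurement
    for _ in range(iters):
        # atom most correlated with the residual
        k = max(range(len(atoms)), key=lambda i: abs(dot(r, atoms[i])))
        c = dot(r, atoms[k])
        coeffs[k] += c
        r = [ri - c * ai for ri, ai in zip(r, atoms[k])]
    return coeffs

# 4 coded measurements, 6 candidate spectral atoms (orthonormal here,
# so recovery of a 2-sparse signal is exact).
atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
         [0.5, 0.5, 0.5, 0.5], [0.5, -0.5, 0.5, -0.5]]
y = [0.0, 3.0, 0.0, 1.5]            # = 3*atoms[1] + 1.5*atoms[3]
print(matching_pursuit(y, atoms))   # [0.0, 3.0, 0.0, 1.5, 0.0, 0.0]
```

Replacing the hand-built dictionary with a data-driven spectral prior is what lets a learned system beat this kind of generic sparsity assumption in both accuracy and resolution.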


Joint Learning-Based High-Dynamic-Range Imaging

We propose interlaced HDR imaging via joint learning. It jointly solves two traditional problems, deinterlacing and denoising, that arise in interlaced video imaging with different exposures. We first solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial detail information in differently exposed rows is often available through interlacing, we use it to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low- and high-exposure rows.
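The interlaced-exposure setup can be sketched minimally as exposure normalization plus naive vertical interpolation (this stands in for the learned joint dictionaries; all exposure times and pixel values are invented):

```python
# Sketch of interlaced-exposure input: even rows are captured with a
# short exposure, odd rows with a long one. Dividing each row by its
# exposure time moves all rows into a common radiance domain, after
# which a missing field can be estimated from its vertical neighbors.

def radiance_rows(frame, t_short, t_long):
    """frame: list of rows (lists of pixel values); even rows short-exposed."""
    return [[p / (t_short if r % 2 == 0 else t_long) for p in row]
            for r, row in enumerate(frame)]

def interpolate_row(above, below):
    """Naive deinterlacing: average the radiance rows above and below."""
    return [(a + b) / 2 for a, b in zip(above, below)]

frame = [[8, 8],     # short exposure (t = 1/120 s)
         [64, 64],   # long exposure  (t = 1/30 s)
         [8, 8]]     # short exposure
rows = radiance_rows(frame, 1/120, 1/30)
print(rows)                               # all rows near 960/1920 radiance units
print(interpolate_row(rows[0], rows[2]))  # estimate for a missing field
```

A learned approach replaces the plain average with dictionary-coded patches, which is what recovers real detail instead of just smooth fills.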

Publications:

  • Inchang Choi, Daniel S. Jeon, Giljoo Nam, Diego Gutierrez, Min H. Kim (2017), “High-Quality Hyperspectral Reconstruction Using a Spectral Prior,” ACM Transactions on Graphics (presented at SIGGRAPH Asia 2017), 36(6)

  • Julio Marco, Quercus Hernandez, Adolfo Munoz, Yue Dong, Adrian Jarabo, Min H. Kim, Xin Tong, Diego Gutierrez (2017), “DeepToF: Off-the-Shelf Real-Time Correction of Multipath Interference in Time-of-Flight Imaging,” ACM Transactions on Graphics (presented at SIGGRAPH Asia 2017), 36(6)

  • Inchang Choi, Seung-Hwan Baek, and Min H. Kim (2017), “Reconstructing Interlaced High-Dynamic-Range Video using Joint Learning,” IEEE Transactions on Image Processing (TIP), 26(11), November 2017, pp. 5353–5366

 

   


Color Visual Perception:


High-Dynamic-Range Color Appearance Model

We developed a novel color appearance model that not only predicts human visual perception but is also directly applicable to HDR imaging. We built a customized display device that produces high luminance levels in order to conduct color experiments. The scientific measurements of human color perception from these experiments enable us to derive a color appearance model that covers the full luminance range of the human visual system.
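Appearance models for extended luminance levels typically build on a compressive photoreceptor response. As an illustration only, here is a generic Michaelis-Menten (Naka-Rushton) form with arbitrary parameters, not the model's fitted values:

```python
# Illustrative compressive cone response: output saturates smoothly as
# luminance grows, and the semi-saturation constant sigma shifts with
# the adapting luminance, which is how such models stay useful across
# many orders of magnitude of brightness.

def cone_response(L, sigma, n=0.73):
    """L: luminance (cd/m^2); sigma: semi-saturation constant.
    Returns a response in (0, 1); equals 0.5 exactly when L == sigma."""
    return L ** n / (L ** n + sigma ** n)

# Response rises monotonically but compressively over five decades.
for L in (1.0, 100.0, 10000.0):
    print(cone_response(L, sigma=100.0))
```

The exponent and semi-saturation values here are placeholders; a real appearance model fits them to psychophysical measurements like the ones described above.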


Spatially-Varying Appearance Model

Color perception is known to vary with surrounding spatial structure, but the impact of edge smoothness on color appearance has not been studied in color appearance modeling. We study the appearance of color under different degrees of edge smoothness and, based on our experimental data, have developed a computational model that predicts this appearance change. The model can be integrated into existing color appearance models.

Publications:

  • Min H. Kim, Tobias Ritschel, Jan Kautz (2011), “Edge-Aware Color Appearance,” ACM Transactions on Graphics (presented at SIGGRAPH 2011), 30(2), pp. 13:1–9

  • Min H. Kim, Tim Weyrich, Jan Kautz (2009), “Modeling Human Color Perception under Extended Luminance Levels,” ACM Transactions on Graphics (Proc. SIGGRAPH 2009), 28(3), August 2009, pp. 27:1–9

 

   


Interactive Computer Graphics:


Light-Weight Representation of Context

The only current methods for representing context are designing in a heavyweight computer-aided design system or sketching on a panoramic photo. The former is too cumbersome; the latter is too restrictive in viewpoint and in the handling of occlusions and topography. We introduce a novel approach that makes context an integral component of a lightweight conceptual design system.


Real-Time Rendering of Global Illumination

While the high-frequency nature of direct lighting requires accurate visibility, indirect illumination mostly consists of smooth gradations, which tend to mask errors due to incorrect visibility. We exploit this by approximating visibility for indirect illumination with imperfect shadow maps in conjunction with a global illumination algorithm, enabling indirect illumination of dynamic scenes at real-time frame rates.
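The gathering step that imperfect shadow maps accelerate can be sketched as an instant-radiosity-style sum over virtual point lights, where the visibility term may be approximate (the toy geometry and the constant "imperfect" visibility factor below are invented for illustration):

```python
# Indirect light at a shading point is a sum over virtual point lights
# (VPLs), each term attenuated by distance and a visibility factor.
# Because the sum over many VPLs is smooth, a cheap, approximate
# visibility estimate perturbs the result only slightly.

def gather_indirect(p, vpls, visibility):
    """p: shading point; vpls: list of (position, intensity);
    visibility(p, q): factor in [0, 1] between p and VPL position q."""
    total = 0.0
    for q, intensity in vpls:
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        total += visibility(p, q) * intensity / max(d2, 1e-4)  # clamp near-field
    return total

vpls = [((0.0, 2.0, 0.0), 1.0), ((2.0, 0.0, 0.0), 1.0)]
exact  = gather_indirect((0.0, 0.0, 0.0), vpls, lambda p, q: 1.0)
coarse = gather_indirect((0.0, 0.0, 0.0), vpls, lambda p, q: 0.9)  # "imperfect"
print(exact, coarse)  # a 10% visibility error yields only a 10% shift in a smooth sum
```

In the real method the `visibility` callback is a lookup into a low-resolution, point-sampled shadow map per VPL; the sketch only shows why such coarse visibility is tolerable for indirect light.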

Publications:

  • Patrick Paczkowski, Min H. Kim, Yann Morvan, Julie Dorsey, Holly Rushmeier, Carol O’Sullivan (2011), “Insitu: Sketching Architectural Designs in Context,” ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2011), 30(6), pp. 182:1–10

  • Tobias Ritschel, Thorsten Grosch, Min H. Kim, Hans-Peter Seidel, Carsten Dachsbacher, Jan Kautz (2008), “Imperfect Shadow Maps for Efficient Computation of Indirect Illumination,” ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2008), 27(5), December 2008, pp. 129:1–8

 

   


Research Collaboration:

LG, Microsoft, SK, NRF, CISS, Koh Young, ETRI

© Visual Computing Laboratory, School of Computing, KAIST. All rights reserved.
