Sketch2Scene: Sketch-based Co-retrieval
and Co-placement of 3D Models


ACM SIGGRAPH 2013

  Kun Xu1    Kang Chen1    Hongbo Fu2    Wei-Lun Sun1    Shi-Min Hu1

1Tsinghua University        2City University of Hong Kong

 


Without any user intervention, our framework automatically turns a freehand sketch drawing depicting multiple scene objects (left) into semantically valid, well-arranged scenes of 3D models (right). (The ground and walls were manually added.)

Abstract
This work presents Sketch2Scene, a framework that automatically turns a freehand sketch drawing depicting multiple scene objects into semantically valid, well-arranged scenes of 3D models. Unlike existing work on sketch-based search and composition of 3D models, which typically processes individual sketched objects one by one, our technique performs co-retrieval and co-placement of relevant 3D models by jointly processing the sketched objects. This is enabled by summarizing functional and spatial relationships among models in a large collection of 3D scenes as structural groups. Our technique greatly reduces the amount of user intervention needed for sketch-based modeling of 3D scenes and fits well into the traditional production pipeline of concept design followed by 3D modeling. A pilot study indicates that our technique is a promising and more efficient alternative to standard 3D modeling tools for 3D scene construction.
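To make the abstract's notion of a "structural group" more concrete, the following is a minimal, hypothetical Python sketch (not the paper's actual data structures or scoring): a structural group records which object categories tend to co-occur (a functional relationship) and their typical relative placement (a spatial relationship), and a toy joint score combines per-object sketch similarity with pairwise spatial agreement, in the spirit of co-retrieval and co-placement. All names, fields, and weights here are illustrative assumptions.

# Hypothetical illustration of a structural group and a joint scoring term.
# Not the paper's implementation; names and fields are assumptions.
from dataclasses import dataclass
from itertools import product

@dataclass
class StructuralGroup:
    categories: tuple       # object categories in the group, e.g. ("desk", "chair")
    cooccurrence: float     # how often these categories appear together in the scene database
    typical_offsets: dict   # (cat_a, cat_b) -> typical 2D offset between their placements

def joint_score(assignment, sketch_positions, group):
    """Score one joint assignment of candidate models to sketched objects.

    `assignment` maps each sketched object to (category, shape_similarity),
    so the score rewards good per-object sketch matches and layouts that
    agree with the group's typical spatial relationships.
    """
    score = group.cooccurrence
    # per-object term: how well each candidate model matches its sketch
    for obj, (cat, sim) in assignment.items():
        score += sim
    # pairwise term: penalize deviation from the group's typical offsets
    for (a, b), offset in group.typical_offsets.items():
        objs_a = [o for o, (c, _) in assignment.items() if c == a]
        objs_b = [o for o, (c, _) in assignment.items() if c == b]
        for oa, ob in product(objs_a, objs_b):
            dx = sketch_positions[ob][0] - sketch_positions[oa][0] - offset[0]
            dy = sketch_positions[ob][1] - sketch_positions[oa][1] - offset[1]
            score -= 0.01 * (dx * dx + dy * dy)
    return score

# Toy usage: a desk-and-chair group scoring one candidate placement.
group = StructuralGroup(("desk", "chair"), cooccurrence=1.0,
                        typical_offsets={("desk", "chair"): (0.0, -1.0)})
assignment = {"obj1": ("desk", 0.8), "obj2": ("chair", 0.7)}
positions = {"obj1": (0.0, 0.0), "obj2": (0.1, -1.2)}
print(joint_score(assignment, positions, group))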
Paper

Supplemental Materials

Video

Download the video (.avi; 12M)

BibTeX
@article{Xu13sig,
author = {Kun Xu and Kang Chen and Hongbo Fu and Wei-Lun Sun and Shi-Min Hu},
title = {Sketch2Scene: Sketch-based Co-retrieval and Co-placement of 3D Models},
journal = {ACM Transactions on Graphics},
volume = {32},
number = {4},
year = {2013},
pages = {to appear},
}
Thanks

We thank the reviewers for their constructive comments, and the pilot study participants for their time. This work was supported by the National Basic Research Project of China (2011CB302205), the National High Technology Research and Development Program of China (2012AA011802), the Natural Science Foundation of China (61170153 and 61120106007), and the Tsinghua University Initiative Scientific Research Program. Hongbo Fu was partially supported by grants from the Research Grants Council of HKSAR (No. 113610, No. 113513) and the City University of Hong Kong (No. 7002925 and No. 7002776).