SmartSweep: Context-aware Modeling on a Single Image

SIGGRAPH Asia 2017 Poster

Yilan Chen, Wenlong Meng, Shi-Qing Xin, Hongbo Fu

Starting with an input image (a), users can easily reconstruct the context (b) and create complementary models (c) using our tool. The models are thickened outwards, cut (for aesthetics), and printed without any scaling (d).


We present an image-based tool to facilitate the creation of complementary parts for existing physical objects. The key idea is to detect and utilize contextual features to augment modeling.



This work was partially supported by grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (CityU 11300615 and CityU 11204014), and NSFC (61772016).