SIGGRAPH / SA (12), CHI (3), UIST (2), TVCG (10)

Qingkun Su*, Xue Bai, Hongbo Fu, Chiew-Lan Tai, and Jue Wang. Live Sketch: Video-driven Dynamic Deformation of Static Drawings. CHI 2018. (Acceptance rate: XX%). April 2018.

Abstract: Creating sketch animations using traditional tools requires special artistic skills, and is tedious even for trained professionals. To lower the barrier for creating sketch animations, we propose a new system, Live Sketch, which allows novice users to interactively bring static drawings to life by applying deformation-based animation effects that are extracted from video examples. Dynamic deformation is first extracted as a sparse set of moving control points from videos and then transferred to a static drawing. Our system addresses a few major technical challenges, such as motion extraction from video, video-to-sketch alignment, and many-to-one motion-driven sketch animation. While each of these sub-problems could be difficult to solve fully automatically, we present reliable solutions by combining new computational algorithms with intuitive user interactions. Our pilot study shows that our system allows users both with and without animation skills to easily add dynamic deformation to static drawings.

[Paper, Video]

Pui Chung Wong*, Kening Zhu, and Hongbo Fu. FingerT9: Leveraging thumb-to-finger interaction for same-side-hand text entry on smartwatches. CHI 2018. (Acceptance rate: XX%). April 2018.

Abstract: We introduce FingerT9, leveraging the action of thumb-to-finger touching on the finger segments, to support same-side-hand (SSH) text entry on smartwatches. This is achieved by mapping a T9 keyboard layout to the finger segments. Our solution avoids the problems of fat finger and screen occlusion, and enables text entry using the same-side hand which wears the watch. In the pilot study, we determined the layout mapping preferred by the users. We conducted an experiment to compare the text-entry performances of FingerT9, the tilt-based SSH input, and the direct-touch non-SSH input. The results showed that the participants performed significantly faster and more accurately with FingerT9 than the tilt-based method. There was no significant difference between FingerT9 and direct-touch methods in terms of efficiency and error rate. We then conducted the second experiment to study the learning curve on SSH text entry methods: FingerT9 and the tilt-based input. FingerT9 gave significantly better long-term improvement. In addition, eyes-free text entry (i.e., looking at the screen output but not the keyboard layout mapped on the finger segments) was made possible once the participants were familiar with the keyboard layout.
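
The keypad mapping at the heart of FingerT9 can be illustrated with a toy decoder. The layout below is the standard phone T9 keypad (the paper maps these keys onto finger segments), and the vocabulary lookup is a hypothetical simplification of real word disambiguation:

```python
# Standard T9 keypad layout; FingerT9 maps these keys onto finger segments,
# but the key-to-letter assignment itself is the familiar one.
T9 = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
      '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}

def t9_candidates(keys, vocabulary):
    """Return the vocabulary words matching a T9 key sequence (toy lookup)."""
    return [w for w in vocabulary
            if len(w) == len(keys) and all(ch in T9[k] for ch, k in zip(w, keys))]
```

For example, `t9_candidates("43556", ["hello", "world"])` returns `["hello"]`; a real system would rank multiple matches by frequency.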

[Project, Paper, Video]

Qiang Fu*, Xiaowu Chen, Xiaotian Wang, Sijia Wen, Bin Zhou, and Hongbo Fu. Adaptive Synthesis of Indoor Scenes via Activity-Associated Object Relation Graphs. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH Asia 2017. November 2017. (Acceptance rate: 25.2%)

Abstract: We present a system for adaptive synthesis of indoor scenes given an empty room and only a few object categories. Automatically suggesting indoor objects and proper layouts to convert an empty room to a 3D scene is challenging, since it requires interior design knowledge to balance factors such as space, path distance, illumination and object relations, in order to ensure the functional plausibility of the synthesized scenes. We exploit a database of 2D floor plans to extract object relations and provide layout examples for scene synthesis. With the labeled human positions and directions in each plan, we detect the activity relations and compute the coexistence frequency of object pairs to construct activity-aware object relation graphs. Given the input room and user-specified object categories, our system first leverages the object relation graphs and the database floor plans to suggest more potential object categories beyond the specified ones to make resulting scenes functionally complete, and then uses the similar plan references to create the layout of synthesized scenes. We show various synthesis results to demonstrate the practicability of our system, and validate its usability via a user study. We also compare our system with the state-of-the-art furniture layout and activity-centric scene representation methods, in terms of functional plausibility and user friendliness.

[Project, Paper, Video]

Yuwei Li, Xin Luo, Youyi Zheng, Pengfei Xu, and Hongbo Fu. SweepCanvas: Sketch-based 3D prototyping on an RGB-D image. UIST 2017. Quebec City, Canada, October 2017.

Abstract: The creation of 3D content remains one of the most crucial problems for emerging applications such as 3D printing and Augmented Reality. In Augmented Reality, how to create virtual content that seamlessly overlays with the real environment is a key problem for human-computer interaction and many subsequent applications. In this paper, we present a sketch-based interactive tool, which we term SweepCanvas, for rapid exploratory 3D modeling on top of an RGB-D image. Our aim is to offer end-users a simple yet efficient way to quickly create 3D models on an image. We develop a novel sketch-based modeling interface, which takes a pair of user strokes as input and instantly generates a curved 3D surface by sweeping one stroke along the other. A key enabler of our system is an optimization procedure that extracts pairs of spatial planes from the context to position and sweep the strokes. We demonstrate the effectiveness and power of our modeling system on various RGB-D data sets and validate the use cases via a pilot study.

[Paper, Video]

Pengfei Xu*, Hongbo Fu, Chiew-Lan Tai, and Takeo Igarashi. GACA: Group-aware command-based arrangement of graphic elements. CHI 2015. Seoul, April 2015.

Abstract: Many graphic applications rely on command-based arrangement tools to achieve precise layouts. Traditional tools are designed to operate on a single group of elements that are distributed consistently with the arrangement axis implied by a command. This often demands a process with repeated element selections and arrangement commands to achieve 2D layouts involving multiple rows and/or columns of well aligned and/or distributed elements. Our work aims to reduce the number of selection operations and command invocations, since such reductions are particularly beneficial to professional designers who create many layouts. Our key idea is that an issued arrangement command is in fact very informative, instructing how to automatically decompose a 2D layout into multiple 1D groups, each of which is compatible with the command. We present a parameter-free, command-driven grouping approach so that users can easily predict our grouping results. We also design a simple user interface with pushpins to enable explicit control of grouping and arrangement. Our user study confirms the intuitiveness of our technique and its performance improvement over traditional command-based arrangement tools.
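
As a rough illustration of command-driven grouping (a toy sketch, not the paper's parameter-free algorithm), a horizontal alignment command implies a vertical arrangement axis, so a layout can be decomposed into 1D row groups by clustering elements on their vertical extents. The box format and the `overlap` threshold below are assumptions for this sketch:

```python
# Toy sketch of command-driven grouping: boxes are (x, y, w, h).
# An "align left" command implies row groups, so elements whose vertical
# extents overlap sufficiently are gathered into one 1D group.
def group_for_align_left(elements, overlap=0.5):
    rows = []
    for box in sorted(elements, key=lambda b: b[1]):
        x, y, w, h = box
        for row in rows:
            ry0 = min(b[1] for b in row)               # row's top
            ry1 = max(b[1] + b[3] for b in row)        # row's bottom
            inter = min(y + h, ry1) - max(y, ry0)      # vertical overlap
            if inter > overlap * min(h, ry1 - ry0):
                row.append(box)
                break
        else:
            rows.append([box])   # start a new 1D group
    return rows
```

Each resulting row could then be left-aligned independently, which is the effect a single GACA command achieves without repeated manual selections.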

Project page

Pengfei Xu*, Hongbo Fu, Takeo Igarashi, and Chiew-Lan Tai. Global beautification of layouts with interactive ambiguity resolution. UIST 2014. Hawaii, October 2014.

Abstract: Automatic global beautification methods have been proposed for sketch-based interfaces, but they can lead to undesired results due to ambiguity in the user’s input. To facilitate ambiguity resolution in layout beautification, we present a novel user interface for visualizing and editing inferred relationships. First, our interface provides a preview of the beautified layout with inferred constraints, without directly modifying the input layout. In this way, the user can easily keep refining beautification results by interactively repositioning and/or resizing elements in the input layout. Second, we present a gestural interface for editing automatically inferred constraints by directly interacting with the visualized constraints via simple gestures. Our efficient implementation of the beautification system provides the user instant feedback. Our user studies validate that our tool is capable of creating, editing and refining layouts of graphic elements and is significantly faster than the standard snap-dragging and command-based alignment tools.

Project page

Zhe Huang, Hongbo Fu, and Rynson W. H. Lau. Data-driven segmentation and labeling of freehand sketches. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH Asia 2014. December 2014. (Acceptance rate: 19.6%)

Abstract: We present a data-driven approach to derive part-level segmentation and labeling of free-hand sketches, which depict single objects with multiple parts. Our method performs segmentation and labeling simultaneously, by inferring a structure that best fits the input sketch, through selecting and connecting 3D components in the database. The problem is formulated using Mixed Integer Programming, which optimizes over both the local fitness of the selected components and the global plausibility of the connected structure. Evaluations show that our algorithm is significantly better than the straightforward approaches based on direct retrieval or part assembly, and can effectively handle challenging variations in the sketch.
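
The joint objective can be mimicked at toy scale. The paper solves a Mixed Integer Program; for a handful of segments, exhaustive search over labelings shows the same trade-off between local fitness and global plausibility. All names and scores below are made up for illustration:

```python
from itertools import product

def best_labeling(local_fit, compat, edges):
    """Toy stand-in for the paper's MIP: pick one candidate label per
    stroke segment, maximizing local fit plus pairwise plausibility.
    local_fit[i][l]: fit of label l for segment i (made-up scores);
    compat[(la, lb)]: plausibility of labels la, lb on adjacent segments;
    edges: pairs of adjacent segment indices."""
    n = len(local_fit)
    labels = range(len(local_fit[0]))
    best, best_score = None, float("-inf")
    for assign in product(labels, repeat=n):   # exhaustive, fine for toys
        score = sum(local_fit[i][assign[i]] for i in range(n))
        score += sum(compat.get((assign[i], assign[j]), 0.0) for i, j in edges)
        if score > best_score:
            best, best_score = assign, score
    return best
```

An integer-programming solver replaces the exhaustive loop at real scale; the objective structure is the point here.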

Project page

Qingkun Su*, Wing Ho Andy Li*, Jue Wang, and Hongbo Fu. EZ-Sketching: three-level optimization for error-tolerant image tracing. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH 2014. 33(4). Article No. 54. August 2014. (Acceptance rate: 25.1%)

Abstract: We present a new image-guided drawing interface called EZ-Sketching, which uses a tracing paradigm and automatically corrects sketch lines roughly traced over an image by analyzing and utilizing the image features being traced. While previous edge snapping methods aim at optimizing individual strokes, we show that a co-analysis of multiple roughly placed nearby strokes better captures the user's intent. We formulate automatic sketch improvement as a three-level optimization problem and present an efficient solution to it. EZ-Sketching can tolerate errors from various sources such as indirect control and inherently inaccurate input, and works well for sketching on touch devices with small screens using fingers. Our user study confirms that the drawings our approach helped generate show closer resemblance to the traced images, and are often aesthetically more pleasing.

Project page

Kun Xu, Kang Chen, Hongbo Fu, Wei-Lun Sun, and Shi-Min Hu. Sketch2Scene: Sketch-based co-retrieval and co-placement of 3D models. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH 2013. 32(4). Article No. 123. July 2013. (Acceptance rate: 24%)

Abstract: This work presents Sketch2Scene, a framework that automatically turns a freehand sketch of multiple scene objects into semantically valid, well-arranged scenes of 3D models. Unlike existing works on sketch-based search and composition of 3D models, which typically process individual sketched objects one by one, our technique performs co-retrieval and co-placement of relevant 3D models by jointly processing the sketched objects. This is enabled by summarizing functional and spatial relationships among models in a large collection of 3D scenes as structural groups. Our technique greatly reduces the amount of user intervention needed for sketch-based modeling of 3D scenes and fits well into the traditional production pipeline involving concept design followed by 3D modeling. A user study indicates that the 3D scenes automatically synthesized by our technique in seconds are comparable to those manually created by an artist in hours in terms of visual aesthetics.

Project page

Pengfei Xu*, Hongbo Fu, Oscar Kin-Chung Au, and Chiew-Lan Tai. Lazy Selection: a scribble-based tool for smart shape elements selection. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH Asia 2012. 31(6). Article No. 136. December 2012. (Acceptance rate: 24%)

Abstract: This paper presents Lazy Selection, a scribble-based tool for quick selection of one or more desired shape elements by roughly stroking through the elements. Our algorithm automatically refines the selection and reveals the user's intention. To give the user maximum flexibility but least ambiguity, our technique first extracts selection candidates from the scribble-covered elements by examining the underlying patterns and then ranks them based on their location and shape with respect to the user-sketched scribble. Such a design makes our tool tolerant to imprecise input systems and applicable to touch systems without suffering from the fat finger problem. A preliminary evaluation shows that compared to the standard click and lasso selection tools, which are the most commonly used, our technique provides significant improvements in efficiency and flexibility for many selection scenarios.

Project page

Chao-Hui Shen, Hongbo Fu, Kang Chen, and Shi-Min Hu. Structure recovery by part assembly. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH Asia 2012. 31(6). Article No. 172. December 2012. (Acceptance rate: 24%)

Abstract: This paper presents a technique that allows quick conversion of low-quality data acquired from consumer-level scanning devices into high-quality 3D models with labeled semantic parts, whose assembly is reasonably close to the underlying geometry. This is achieved by a novel structure recovery approach that is essentially local-to-global and bottom-up, enabling the creation of new structures by assembling existing labeled parts with respect to the acquired data. We demonstrate that, using only a small-scale shape repository, our part-assembly approach is able to faithfully recover a variety of high-level structures from a single-view scan of man-made objects acquired by the Kinect system, consisting of a highly noisy, incomplete 3D point cloud and a corresponding RGB image.

Project page

Hongbo Fu, Shizhe Zhou*, Ligang Liu, and Niloy J. Mitra. Animated construction of line drawings. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH Asia 2011. 30(6). Article No. 133. December 2011. (Acceptance rate: 20.6%)

Abstract: Revealing the sketching sequence of a line drawing can be visually intriguing and used for video-based storytelling. Typically this requires tedious recording of the artist's drawing process. We demonstrate that it is often possible to estimate, from a static line drawing with clearly defined shape geometry, a reasonable drawing order that looks plausible to a human viewer. We map the key principles of drawing order from drawing cognition to computational procedures in our framework. Our system produces plausible animated constructions of input line drawings with little or no user intervention. We test our algorithm on a range of input sketches with varying degrees of complexity and structure, and evaluate the results via a user study. We also present applications to gesture drawing synthesis and drawing animation creation, especially in the context of video scribing.

Project page

Chao-Hui Shen, Shi-Sheng Huang, Hongbo Fu, and Shi-Min Hu. Adaptive partitioning of urban facades. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH Asia 2011. 30(6). Article No. 184. December 2011. (Acceptance rate: 20.6%). One of the nine highlights among all SIGGRAPH Asia 2011 papers.

Abstract: Automatically discovering high-level facade structures in unorganized 3D point clouds of urban scenes is crucial for large-scale applications such as the digitization of real cities. However, this problem is challenging due to poor-quality input data, contaminated with severe missing areas, noise and outliers. This work introduces the concept of adaptive partitioning to automatically derive a flexible and hierarchical representation of 3D urban facades. Our key observation is that urban facades are largely governed by concatenated and/or interlaced grids. Hence, unlike previous automatic facade analysis works, which are restricted to globally rectilinear grids, we propose to partition the facade in an adaptive manner, in which the splitting direction and the number and locations of splitting planes are all adaptively determined. Such an adaptive partition operation is performed recursively to generate a hierarchical representation of the facade. We show that the concept of adaptive partitioning is also applicable to flexible and robust analysis of image facades. We evaluate our method on a dozen LiDAR scans of varying complexity and style, and on the eTRIMS image database with 60 facade images. A series of applications that benefit from our approach are also demonstrated.

Project page

Shizhe Zhou, Hongbo Fu, Ligang Liu, Daniel Cohen-Or, and Xiaoguang Han. Parametric reshaping of human bodies in images. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH 2010. 29(4). Article No. 126. July 2010. (Acceptance rate: 26%). One of the six highlights among all SIGGRAPH 2010 papers.

Abstract: We present an easy-to-use image retouching technique for realistic reshaping of human bodies in a single image. A model-based approach is taken by integrating a 3D whole-body morphable model into the reshaping process to achieve globally consistent editing effects. A novel body-aware image warping approach is introduced to reliably transfer the reshaping effects from the model to the image, even under moderate fitting errors. Thanks to the parametric nature of the model, our technique parameterizes the degree of reshaping by a small set of semantic attributes, such as weight and height. It allows easy creation of desired reshaping effects by changing the full-body attributes, while producing visually pleasing results even for loosely-dressed humans in casual photographs with a variety of poses and shapes.
Keywords: Image Manipulation, Portrait Retouching, Warping

Project page

Yu-Shuen Wang, Hongbo Fu, Olga Sorkine, Tong-Yee Lee, and Hans-Peter Seidel. Motion-aware temporal coherence for video resizing. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH Asia 2009. 28(5). Article No. 127. December 2009. (Acceptance rate: 25%)

Abstract: Temporal coherence is crucial in content-aware video retargeting. To date, this problem has been addressed by constraining temporally adjacent pixels to be transformed coherently. However, due to the motion-oblivious nature of this simple constraint, the retargeted videos often exhibit flickering and waving artifacts, especially when significant camera or object motions are involved. Since the feature correspondence across frames changes spatially with both camera and object motion, motion-aware treatment of features is required for video resizing. This motivated us to align consecutive frames by estimating interframe camera motion and to constrain relative positions in the aligned frames. To preserve object motion, we detect distinct moving areas of objects across multiple frames and constrain each of them to be resized consistently. We build a complete video resizing framework by incorporating our motion-aware constraints with an adaptation of the scale-and-stretch optimization recently proposed by Wang and colleagues. Our streaming implementation of the framework allows efficient resizing of long video sequences with low memory cost. Experiments demonstrate that our method produces spatiotemporally coherent retargeting results even for challenging examples with complex camera and object motion, which are difficult to handle with previous techniques.
Keywords: video retargeting, spatial and temporal coherence, optimization

Project page

Hongbo Fu, Daniel Cohen-Or, Gideon Dror, and Alla Sheffer. Upright orientation of man-made objects. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH 2008. 27(3). Article No. 42. August 2008. (Acceptance rate: 17.9%)

Abstract: Humans usually associate an upright orientation with objects, placing them in a way that they are most commonly seen in our surroundings. While it is an open challenge to recover the functionality of a shape from its geometry alone, this paper shows that it is often possible to infer its upright orientation by analyzing its geometry. Our key idea is to reduce the two-dimensional (spherical) orientation space to a small set of orientation candidates using functionality-related geometric properties of the object, and then determine the best orientation using an assessment function of several functional geometric attributes defined with respect to each candidate. Specifically, we focus on obtaining the upright orientation for man-made objects that typically stand on some flat surface (ground, floor, table, etc.), which include the vast majority of objects in our everyday surroundings. For these types of models, orientation candidates can be defined according to static equilibrium. For each candidate, we introduce a set of discriminative attributes linking shape to function. We learn an assessment function of these attributes from a training set using a combination of a Random Forest classifier and a Support Vector Machine classifier. Experiments demonstrate that our method generalizes well and achieves about 90% prediction accuracy for both a 10-fold cross-validation over the training set and a validation with an independent test set.
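
The static-equilibrium filter for orientation candidates can be sketched as a point-in-polygon test: a candidate base supports the object only if the center of mass projects inside the supporting polygon. The 2D convex test below is a simplified illustration, not the paper's full candidate generation:

```python
def statically_stable(com_xy, base_polygon):
    """Toy static-equilibrium check: the center-of-mass projection com_xy
    must lie inside the convex supporting polygon (vertices in order).
    All cross products along the boundary must share a sign."""
    sign = 0
    n = len(base_polygon)
    for i in range(n):
        ax, ay = base_polygon[i]
        bx, by = base_polygon[(i + 1) % n]
        cross = (bx - ax) * (com_xy[1] - ay) - (by - ay) * (com_xy[0] - ax)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False   # point is on the outer side of this edge
    return True
```

Candidates passing this filter would then be ranked by the learned assessment function.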

Project page

Oscar Kin-Chung Au, Hongbo Fu, Chiew-Lan Tai, and Daniel Cohen-Or. Handle-aware isolines for scalable shape editing. ACM Transactions on Graphics (TOG) special issue: Proceedings of ACM SIGGRAPH 2007. 26(3). Article No. 83. July 2007. (Acceptance rate: 23.7%)

Abstract: Handle-based mesh deformation is essentially a nonlinear problem. To allow scalability, the original deformation problem can be approximately represented by a compact set of control variables. We show the direct relation between the locations of handles on the mesh and the local rigidity under deformation, and introduce the notion of handle-aware rigidity. Then, we present a reduced model whose control variables are intelligently distributed across the surface, respecting the rigidity information and the geometry. Specifically, for each handle, the control variables are the transformations of the isolines of a harmonic scalar field representing the deformation propagation from that handle. The isolines constitute a virtual skeletal structure similar to the bones in skinning deformation, thus correctly capturing the low-frequency shape deformation. To interpolate the transformations from the isolines to the original mesh, we design a method which is local, linear and geometry-dependent. This novel interpolation scheme and the transformation-based reduced domain allow each iteration of the nonlinear solver to be fully computed over the reduced domain. This makes the per-iteration cost dependent on only the number of isolines and enables compelling deformation of highly detailed shapes at interactive rates. In addition, we show how the handle-driven isolines provide an efficient means for deformation transfer without full shape correspondence.
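
The per-handle harmonic field can be illustrated on a graph rather than a mesh: Dirichlet constraints of 1 at the handle and 0 at the anchors, with iterative Laplace smoothing in between. This is a toy stand-in for the paper's solver (which works on mesh Laplacians); the isolines are then level sets of the resulting field:

```python
def harmonic_field(neighbors, fixed, iters=2000):
    """Toy harmonic field on a graph. neighbors: vertex -> adjacent vertices;
    fixed: Dirichlet constraints (e.g. 1.0 at the handle, 0.0 at anchors).
    Free values are iteratively replaced by their neighbor averages."""
    f = {v: fixed.get(v, 0.5) for v in neighbors}
    for _ in range(iters):
        for v, ns in neighbors.items():
            if v not in fixed:
                f[v] = sum(f[u] for u in ns) / len(ns)
    return f
```

On a path 0-1-2-3 with f(0)=1 and f(3)=0, the field converges to 2/3 and 1/3 at the interior vertices; on a mesh, grouping vertices by field value would yield the isolines used as control variables.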
Keywords: scalable shape editing, handle-aware, rigidity-aware, harmonic fields, isolines

Project page


Wanchao Su*, Dong Du*, Xin Yang*, Shizhe Zhou*, and Hongbo Fu. Interactive sketch-based normal map generation with deep neural networks. Proceedings of the ACM on Computer Graphics and Interactive Techniques (PACMGIT). Accepted for publication.

Abstract: High-quality normal maps are important intermediates for representing complex shapes. In this paper, we propose an interactive system for generating normal maps with the help of deep learning techniques. Utilizing the Generative Adversarial Network (GAN) framework, our method produces high quality normal maps with sketch inputs. In addition, we further enhance the interactivity of our system by incorporating user-specified normals at selected points. Our method generates high quality normal maps in real time. Through comprehensive experiments, we show the effectiveness and robustness of our method. A thorough user study indicates the normal maps generated by our method achieve a lower perceptual difference from the ground truth compared to the alternative methods.

[Paper, Video]

Mingze Yuan, Lin Gao, Hongbo Fu, and Shihong Xia. Temporal upsampling of depth maps using a hybrid camera. IEEE Transactions on Visualization and Computer Graphics (TVCG). Accepted for publication.

Abstract: In recent years, consumer-level depth cameras have been adopted for various applications. However, they often produce depth maps at only a moderately high frame rate (approximately 30 frames per second), preventing them from being used for applications such as digitizing human performance involving fast motion. On the other hand, low-cost, high-frame-rate video cameras are available. This motivates us to develop a hybrid camera that consists of a high-frame-rate video camera and a low-frame-rate depth camera and to allow temporal interpolation of depth maps with the help of auxiliary color images. To achieve this, we develop a novel algorithm that reconstructs intermediate depth maps and estimates scene flow simultaneously. We test our algorithm on various examples involving fast, non-rigid motions of single or multiple objects. Our experiments show that our scene flow estimation method is more precise than a tracking-based method and the state-of-the-art techniques.
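
For intuition only: the simplest temporal upsampling is a per-pixel linear blend between the two surrounding depth frames. This naive baseline assumes a static scene, which is exactly what the paper improves upon by estimating scene flow from the auxiliary high-frame-rate color images:

```python
def interpolate_depth(d0, d1, t):
    """Naive intermediate depth frame at time t in [0, 1] between depth
    maps d0 and d1 (row-major lists). A per-pixel blend ignores motion,
    so moving objects would ghost; this is the baseline, not the method."""
    return [[(1 - t) * a + t * b for a, b in zip(r0, r1)]
            for r0, r1 in zip(d0, d1)]
```

The paper instead warps depth along estimated scene flow, so that pixels of fast-moving objects are interpolated along their motion paths rather than in place.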

[Paper, Video]

Qiang Fu*, Xiaowu Chen, Xiaoyu Su, and Hongbo Fu. Pose-inspired shape synthesis and functional hybrid. IEEE Transactions on Visualization and Computer Graphics (TVCG). 23(12): 2574-2585. December 2017.

Abstract: We introduce a shape synthesis approach especially for functional hybrid creation that can be potentially used by a human operator under a certain pose. Shape synthesis by reusing parts in existing models has been an active research topic in recent years. However, how to combine models across different categories to design multi-function objects remains challenging, since there is no natural correspondence between models across different categories. We tackle this problem by introducing a human pose to describe object affordance which establishes a bridge between cross-class objects for composite design. Specifically, our approach first identifies groups of candidate shapes which provide affordances desired by an input human pose, and then recombines them as well-connected composite models. Users may control the design process by manipulating the input pose, or optionally specifying one or more desired categories. We also extend our approach to be used by a single operator with multiple poses or by multiple human operators. We show that our approach enables easy creation of nontrivial, interesting synthesized models.

[Paper, Video]

Quoc Huy Phan*, Hongbo Fu, and Antoni Chan. Color Orchestra: Ordering color palettes for interpolation and prediction. IEEE Transactions on Visualization and Computer Graphics (TVCG). Accepted for publication.

Abstract: A color theme or color palette can deeply influence the quality and the feeling of a photograph or a graphical design. Although color palettes may come from different sources such as online crowd-sourcing, photographs and graphical designs, in this paper, we consider color palettes extracted from fine art collections, which we believe to be an abundant source of stylistic and unique color themes. We aim to capture color styles embedded in these collections by means of statistical models and to build practical applications upon these models. As artists often use their personal color themes in their paintings, making these palettes appear frequently in the dataset, we employed density estimation to capture the characteristics of palette data. Via density estimation, we carried out various predictions and interpolations on palettes, which led to promising applications such as photo-style exploration, real-time color suggestion, and enriched photo recolorization. It was, however, challenging to apply density estimation to palette data as palettes often come as unordered sets of colors, which makes it difficult to use conventional metrics on them. To this end, we developed a divide-and-conquer sorting algorithm to rearrange the colors in the palettes in a coherent order, which allows meaningful interpolation between color palettes. To confirm the performance of our model, we also conducted quantitative experiments on datasets of digitized paintings collected from the Internet and received favorable results.
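
Why ordering matters can be shown with a minimal sketch: once an unordered palette is arranged so that adjacent colors are similar, per-position interpolation between palettes becomes meaningful. The greedy nearest-neighbor ordering below is a simple stand-in for the paper's divide-and-conquer sort:

```python
def order_palette(colors):
    """Greedy nearest-neighbor ordering of RGB tuples so adjacent colors
    are similar (a toy substitute for the paper's divide-and-conquer sort)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    remaining = list(colors)
    out = [remaining.pop(0)]           # start from the first color
    while remaining:
        nxt = min(remaining, key=lambda c: d2(c, out[-1]))
        remaining.remove(nxt)
        out.append(nxt)
    return out
```

With two palettes sorted this way, blending the i-th colors of each gives a coherent intermediate palette instead of a jumble of unrelated hues.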

[Paper, Video, Media Coverage by MIT Technology Review]

Bin Sheng, Bowen Liu, Ping Li, Hongbo Fu, Lizhuang Ma, and Enhua Wu. Accelerated robust Boolean operations based on hybrid representations. Computer Aided Geometric Design (Special Issue on Geometric Modeling and Processing 2018). Accepted for publication. xxx 2018

Abstract: Constructive Solid Geometry (CSG) is one of the popular techniques widely applied in 3D modeling. It combines primitive solids using Boolean operations. However, the trade-off between efficiency and robustness of Boolean evaluation is difficult to balance. Previous methods sacrifice either efficiency or robustness to achieve advantages in one perspective. Recent works attempt to achieve excellent performance in both aspects by replacing conventional vertex-based representations (V-reps) with plane-based representations (P-reps) of polyhedrons. Different from V-reps, P-reps use plane coefficients as meta-data, which leads to better robustness. However, methods using P-reps have disadvantages in efficiency compared to methods using V-reps. In this paper, we propose a Boolean evaluation approach that absorbs both the efficiency of V-rep-based methods and the robustness of P-rep-based methods, by combining P-reps with V-reps in a single evaluation method. The P-rep information is utilized for exact predicate computation, while information in the V-reps is collected for fast topology queries and coarse tests. Our approach is variadic: it evaluates a Boolean expression over multiple input meshes as a whole rather than as a tree of decomposed binary operations. We conduct extensive experiments and compare our results with those generated by state-of-the-art methods. Experimental results show that our approach is robust for solid inputs and has performance advantages over some previous non-robust methods.
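
The "plane coefficients as meta-data" idea can be sketched concretely: in a P-rep, a vertex is stored implicitly as the intersection of three planes, and exact rational arithmetic on the plane coefficients keeps the intersection predicate robust. The Cramer's-rule snippet below is a minimal illustration under that assumption, not the paper's evaluation pipeline:

```python
from fractions import Fraction

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def plane_vertex(p, q, r):
    """Exact intersection of three planes, each given as (a, b, c, d)
    with ax + by + cz + d = 0. Rational arithmetic avoids the rounding
    that makes floating-point Boolean predicates fragile."""
    A = [list(map(Fraction, pl[:3])) for pl in (p, q, r)]
    rhs = [Fraction(-pl[3]) for pl in (p, q, r)]
    D = det3(A)
    if D == 0:
        return None  # planes do not meet in a single point
    coords = []
    for col in range(3):          # Cramer's rule, one coordinate per column
        M = [row[:] for row in A]
        for i in range(3):
            M[i][col] = rhs[i]
        coords.append(det3(M) / D)
    return tuple(coords)
```

For example, the planes x=1, y=2, z=3 intersect exactly at (1, 2, 3), with no floating-point error regardless of how ill-conditioned the coefficients are.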


Bin Sheng, Ping Li, Hongbo Fu, Lizhuang Ma, and Enhua Wu. Efficient non-incremental constructive solid geometry evaluation for triangular meshes. Graphical Models (Special issue on Computational Visual Media 2018). Volume 97: 1 - 16. May 2018



Wenlong Meng*, Shuangmin Chen, Zhenyu Shu, Shi-Qing Xin, Hongbo Fu, and Changhe Tu. Efficiently computing feature-aligned and high-quality polygonal offset surfaces. Computers & Graphics (Special Issue on CAD/Graphics 2017). Volume 70: 62 - 70. Feb. 2018

Abstract: 3D surface offsetting is a fundamental geometric operation in CAD/CAE/CAM. In this paper, we propose a super-linearly convergent algorithm to generate a well-triangulated and feature-aligned offset surface based on a particle system. The key idea is to distribute a set of movable sites as uniformly as possible while keeping these sites at a specified distance away from the base surface throughout the optimization process. To make the final triangulation align with geometric feature lines, we use the movable sites to predict the potential feature regions, which in turn guide the distribution of the sites. Our algorithm supports multiple kinds of input surfaces, e.g., triangle meshes, implicit functions, parametric surfaces, and even point clouds. Compared with existing algorithms for surface offsetting, our algorithm has significant advantages in terms of meshing quality, computational performance, topological correctness, and feature alignment.


Sheng Yang, Kang Chen, Minghua Liu, Hongbo Fu, and Shi-Min Hu. Saliency-aware real-time volumetric fusion for object reconstruction. Computer Graphics Forum (Proceedings of Pacific Graphics 2017). 36(7): 167-174. October 2017

Abstract: We present a real-time approach for acquiring 3D objects with high fidelity using hand-held consumer-level RGB-D scanning devices. Existing real-time reconstruction methods typically do not take the point of interest into account, and thus might fail to produce clean reconstructions of desired objects due to distracting objects or backgrounds. In addition, any change in the background during scanning, which often occurs in real scenarios, can easily break the whole reconstruction process. To address these issues, we incorporate visual saliency into a traditional real-time volumetric fusion pipeline. Salient regions detected from RGB-D frames suggest user-intended objects, and by understanding user intentions our approach can put more emphasis on important targets while eliminating the disturbance of unimportant objects. Experimental results on real-world scans demonstrate that our system is capable of effectively acquiring geometric information of salient objects in cluttered real-world scenes, even when the background is changing.

[Paper, Video]

Sheng Yang, Jie Xu, Kang Chen, and Hongbo Fu. View suggestion for interactive segmentation of indoor scenes. Computational Visual Media. 3: 131. 2017

Abstract: Point cloud segmentation is a fundamental problem. Due to the complexity of real-world scenes and the limitations of 3D scanners, interactive segmentation is currently the only way to cope with all kinds of point clouds. However, interactively segmenting complex and large-scale scenes is very time-consuming. In this paper, we present a novel interactive system for segmenting point cloud scenes. Our system automatically suggests a series of camera views, in which users can conveniently specify segmentation guidance. In this way, users may focus on specifying segmentation hints instead of manually searching for desirable views of unsegmented objects, thus significantly reducing user effort. To achieve this, we introduce a novel view preference model, which is based on a set of dedicated view attributes, with weights learned from a user study. We also introduce support relations for both graph-cut-based segmentation and finding similar objects. Our experiments show that our segmentation technique helps users quickly segment various types of scenes, outperforming alternative methods.

[Paper, Video]

Shi-Sheng Huang, Hongbo Fu, Lin-Yu Wei, and Shi-Min Hu. Support Substructures: Support-induced part-level structural representation. IEEE Transactions on Visualization and Computer Graphics (TVCG). 22(8): 2024-2036. Aug 2016

Abstract: In this work we explore a support-induced structural organization of object parts. We introduce the concept of support substructures, which are special subsets of object parts with support and stability. A bottom-up approach is proposed to identify such substructures in a support relation graph. We apply the derived high-level substructures to part-based shape reshuffling between models, resulting in nontrivial, functionally plausible model variations that are difficult to achieve with the symmetry-induced substructures of the state of the art. We also show how to automatically or interactively turn a single input model into new functionally plausible shapes by structure rearrangement and synthesis, enabled by support substructures. To the best of our knowledge, no single existing method has been designed for all these applications.

[Paper, Video]

Qiang Fu*, Xiaowu Chen, Xiaoyu Su, Jia Li, and Hongbo Fu. Structure-adaptive Shape Editing for Man-made Objects. Computer Graphics Forum (Proceedings of Eurographics 2016). 35(2): 27-36. May 9-13, 2016.

Abstract: One of the challenging problems for shape editing is to adapt shapes with diversified structures for various editing needs. In this paper we introduce a shape editing approach that automatically adapts the structure of a shape being edited with respect to user inputs. Given a category of shapes, our approach first classifies them into groups based on the constituent parts. The group-sensitive priors, including both inter-group and intra-group priors, are then learned through statistical structure analysis and multivariate regression. By using these priors, the inherent characteristics and typical variations of shape structures can be well captured. Based on such group-sensitive priors, we propose a framework for real-time shape editing, which adapts the structure of shape to continuous user editing operations. Experimental results show that the proposed approach is capable of both structure-preserving and structure-varying shape editing.

[Paper, Video]

Wing Ho Andy Li*, Kening Zhu, and Hongbo Fu. Exploring the design space of bezel-initiated gestures for mobile interaction. International Journal of Mobile Human Computer Interaction. Volume 9 Issue 1, Jan. 2017.

Abstract: The bezel enables useful gestures supplementary to primary surface gestures for mobile interaction. However, existing works mainly focus on researcher-designed gestures, which utilize only a subset of the design space. To explore the design space, we present a modified elicitation study, during which the participants designed bezel-initiated gestures for four sets of tasks. Different from traditional elicitation studies, ours encourages participants to design new gestures. We do not focus on individual tasks or gestures, but perform a detailed analysis of the collected gestures as a whole, and provide findings that could benefit designers of bezel-initiated gestures.

[Paper, Video]

Shi-Sheng Huang, Hongbo Fu, and Shi-Min Hu. Structure guided interior scene synthesis via graph matching. Graphical Models. Volume 85, Pages 46-55, May 2016.

Abstract: We present a method for reshuffle-based 3D interior scene synthesis guided by scene structures. Given several 3D scenes, we represent each scene as a structure graph associated with a relationship set. Considering both object similarity and relation similarity, we then establish a furniture-object-based matching between scene pairs via graph matching. Such a matching allows us to merge the structure graphs into a unified structure, i.e., an Augmented Graph (AG). Guided by the AG, we perform scene synthesis by reshuffling objects through three simple operations, i.e., replacing, growing, and transferring. A synthesis compatibility measure considering the environment of the furniture objects is also introduced to filter out poor-quality results. We show that our method is able to generate high-quality scene variations and outperforms the state of the art.


Qiang Fu*, Xiaowu Chen, Xiaoyu Su, and Hongbo Fu. Natural lines inspired 3D shape re-design. Graphical Models. Volume 85, Pages 1-10, May 2016.

Abstract: We introduce an approach for re-designing 3D shapes inspired by natural lines such as the contours and skeletons extracted from the natural objects in images. Designing an artistically creative and visually pleasing model is not easy for novice users. In this paper, we propose to convert such a design task to a computational procedure. Given a 3D object, we first compare its editable lines with various lines extracted from the image database to explore the candidate reference lines. Then a parametric deformation method is employed to reshape the 3D object guided by the reference lines. We show that our approach enables users to quickly create non-trivial and interesting re-designed 3D objects. We also conduct a user study to validate the usability and effectiveness of our approach.


Wing Ho Andy Li*, Hongbo Fu, and Kening Zhu. BezelCursor: Bezel-initiated cursor for one-handed target acquisition on mobile touch screens. International Journal of Mobile Human Computer Interaction. Volume 8, Issue 1, Jan-March 2016.

Abstract: We present BezelCursor, a novel one-handed thumb interaction technique for target acquisition on mobile touch screens of various sizes. Our technique combines bezel-initiated interaction and pointing gestures to solve the problem of the limited screen accessibility afforded by the thumb. With a fixed, comfortable grip of a mobile touch device, a user may employ our tool to easily and quickly access a target located anywhere on the screen, using a single fluid action. Unlike existing techniques, ours requires no explicit mode switching and can be smoothly used together with commonly adopted interaction styles such as direct touch and dragging. Our user study shows that BezelCursor requires less grip adjustment, and is more accurate or faster than the state-of-the-art techniques when using a fixed secure grip.

Project page

Quoc Huy Phan*, Hongbo Fu, and Antoni Chan. FlexyFont: Learning transferring rules for flexible typeface synthesis. Computer Graphics Forum (Proceedings of Pacific Graphics 2015). 34(7): 245-256. Oct. 2015

Abstract: Maintaining consistent styles across glyphs is an arduous task in typeface design. In this work we introduce FlexyFont, a flexible tool for synthesizing a complete typeface that has a consistent style with a given small set of glyphs. Motivated by the key fact that typeface designers often maintain a library of glyph parts to achieve a consistent typeface, we intend to learn part consistency between glyphs of different characters across typefaces. We take a part-assembling approach by first decomposing the given glyphs into semantic parts and then assembling them according to learned sets of transferring rules to reconstruct the missing glyphs. To maintain style consistency, we represent the style of a font as a vector of pairwise part similarities. By learning a distribution over these feature vectors, we are able to predict the style of a novel typeface given only a few examples. We utilize a popular machine learning method as well as retrieval-based methods to quantitatively assess the performance of our feature vector, with favorable results. We also present an intuitive interface that allows users to interactively create novel typefaces with ease. The synthesized fonts can be directly used in real-world design.

[Paper, Video]

Xiaoyu Su, Xiaowu Chen, Qiang Fu, and Hongbo Fu. Cross-class 3D object synthesis guided by reference examples. Computers & Graphics (Special Issue on CAD/Graphics 2015). 54: 145-153. Feb. 2016. Best Paper Award

Abstract: Re-combining parts of existing 3D object models is an interesting and efficient technique for creating novel shape collections. However, due to the lack of direct part correspondences across different shape families, such data-driven modeling approaches in the literature are limited to the synthesis of in-class shapes only. To address this problem, this paper proposes a novel approach to create 3D shapes via the re-combination of cross-category object parts from an existing database of different model families. In our approach, a reference shape containing multi-functional constituent parts is pre-specified by users, and its design style is then reused to guide the creation process. To this end, the functional substructures are first extracted from the reference shape. After that, we explore a series of category pairs which are potential replacements for the functional substructures of the reference shape to make interesting variations. We demonstrate our ideas using various examples, and present a user study to evaluate the usability and efficiency of our technique.


Changqing Zou*, Shifeng Chen, Hongbo Fu, and Jianzhuang Liu. Progressive 3D reconstruction of planar-faced manifold objects with DRF-based line drawing decomposition. IEEE Transactions on Visualization and Computer Graphics (TVCG). 21(2): 252-263. Feb. 2015.

Abstract: This paper presents an approach for reconstructing polyhedral objects from single-view line drawings. Our approach separates a complex line drawing representing a manifold object into a series of simpler line drawings, based on the degree of reconstruction freedom (DRF). We then progressively reconstruct a complete 3D model from these simpler line drawings. Our experiments show that our decomposition algorithm is able to handle complex drawings which are challenging for the state of the art. The advantages of the presented progressive 3D reconstruction method over the existing reconstruction methods in terms of both robustness and efficiency are also demonstrated.


Changqing Zou*, Xiaojiang Peng, Hao Lv, Shifeng Chen, Hongbo Fu, and Jianzhuang Liu. Sketch-based 3-D modeling for piecewise planar objects in single images. Computers & Graphics (Special Issue of SMI 2014). 46(2015): 130-137. Feb. 2015.

Abstract: 3-D object modeling from single images has many applications in computer graphics and multimedia. Most previous 3-D modeling methods which directly recover 3-D geometry from single images require user interaction throughout the whole modeling process. In this paper, we propose a semi-automatic 3-D modeling approach to recover accurate 3-D geometry from a single image of a piecewise planar object with less user interaction. Our approach concentrates on three aspects: 1) requiring only rough sketch input, 2) accurate modeling for a large class of objects, and 3) automatically recovering the hidden parts of an object to provide a complete 3-D model. Experimental results on various objects show that the proposed approach provides a good solution to these three problems.


Zhe Huang, Jiang Wang, Hongbo Fu, and Rynson Lau. Structured mechanical collage. IEEE Transactions on Visualization and Computer Graphics (TVCG). 20(7): 1076-1082, July 2014

Abstract: We present a method to build 3D structured mechanical collages consisting of numerous elements from a database, given artist-designed proxy models. The construction is guided by graphic design principles, namely unity, variety, and contrast. Our results are visually more pleasing than those of previous works, as confirmed by a user study.

[Paper]; [Video]; [Suppl]; [More results]

Xiaoguang Han*, Hongbo Fu, Hanlin Zheng*, Ligang Liu, and Jue Wang. A video-based interface for hand-driven stop motion animation production. IEEE Computer Graphics and Applications (CGA). 33(6): 70-81. 2013.

Abstract: Stop motion is a well-established animation technique, but its production is often laborious and requires craft skills. We present a new video-based interface which is capable of animating the vast majority of everyday objects in stop motion style in a more flexible and intuitive way. It allows animators to perform and capture motions continuously instead of breaking them into small increments and shooting one still picture per increment. More importantly, it permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The key component of our system is a two-phase keyframe-based capturing and processing workflow, assisted by computer vision techniques. We demonstrate that our system is efficient even for amateur animators to generate high quality stop motion animations of a wide variety of objects.

Project page

Bin Liao, Chunxia Xiao, Liqiang Jin, and Hongbo Fu. Efficient feature-preserving local projection operator for geometry reconstruction. Computer Aided Design (CAD). 45(5): 861-874.

Abstract: This paper proposes an efficient and Feature-preserving Locally Optimal Projection operator (FLOP) for geometry reconstruction. Our operator is bilaterally weighted, taking both spatial and geometric feature information into consideration for feature-preserving approximation. We then present an accelerated FLOP operator based on random sampling of the Kernel Density Estimate (KDE), which produces reconstruction results close to those generated using the complete point-set data, to within a given accuracy. Additionally, we extend our approach to time-varying data reconstruction, called the Spatial-Temporal Locally Optimal Projection operator (STLOP), which efficiently generates temporally coherent, stable, and feature-preserving results. The experimental results show that the proposed algorithms are efficient and robust for feature-preserving geometry reconstruction on both static models and time-varying data sets.


Jingbo Liu, Oscar Kin-Chung Au, Hongbo Fu, and Chiew-Lan Tai. Two-finger gestures for 6DOF manipulation of 3D objects. Computer Graphics Forum (CGF): special issue of Pacific Graphics 2012. 31(7): 2047-2055. (Acceptance rate: 19.6%)

Abstract: Multitouch input devices afford effective solutions for 6DOF (six Degrees of Freedom) manipulation of 3D objects. Mainly focusing on large-size multitouch screens, existing solutions typically require at least three fingers and bimanual interaction for full 6DOF manipulation. However, single-hand, two-finger operations are preferred, especially on portable multitouch devices (e.g., popular smartphones), as they cause less hand occlusion and free the other hand for necessary tasks like holding the device. Our key idea for full 6DOF control using only two contact fingers is to introduce two manipulation modes and two corresponding gestures by examining the moving characteristics of the two fingers, instead of the number of fingers or the directness of individual fingers as done in previous works. We solve the resulting binary classification problem using a learning-based approach. Our pilot experiment shows that with only two contact fingers and typically unimanual interaction, our technique is comparable to or even better than the state-of-the-art techniques.

Project page

Oscar Kin-Chung Au, Chiew-Lan Tai, and Hongbo Fu. Multitouch gestures for constrained transformation of 3D objects. Computer Graphics Forum (CGF): special issue of Eurographics 2012. 31(2): 651-660. (Acceptance rate: 25%)

Abstract: 3D transformation widgets allow constrained manipulations of 3D objects and are commonly used in many 3D applications for fine-grained manipulations. Since traditional transformation widgets have been mainly designed for mouse-based systems, they are not user friendly for multitouch screens. There is little research on how to use the extra input bandwidth of multitouch screens to ease constrained transformation of 3D objects. This paper presents a small set of multitouch gestures which offers a seamless control of manipulation constraints (i.e., axis or plane) and modes (i.e., translation, rotation or scaling). Our technique does not require any complex manipulation widgets but candidate axes, which are for visualization rather than direct manipulation. Such design not only minimizes visual clutter but also tolerates imprecise touch-based inputs. To further expand our axis-based interaction vocabulary, we introduce intuitive touch gestures for relative manipulations, including snapping and borrowing axes of another object. A user study shows that our technique is more effective than a direct adaption of standard transformation widgets to the tactile paradigm.

Project page

Lei Zhang, Hua Huang, and Hongbo Fu. EXCOL: an EXtract-and-COmplete Layering approach to cartoon animation reusing. IEEE Transactions on Visualization and Computer Graphics (TVCG). 18(7): 1156-1169. 2012.

Abstract: We introduce the EXCOL method (EXtract-and-COmplete Layering) — a novel cartoon animation processing technique to convert a traditional animated cartoon video into multiple semantically meaningful layers. Our technique is inspired by vision-based layering techniques but focuses on shape cues in both the extraction and completion steps to reflect the unique characteristics of cartoon animation. For layer extraction, we define a novel similarity measure incorporating both shape and color of automatically segmented regions within individual frames and propagate a small set of user-specified layer labels among similar regions across frames. By clustering regions with the same labels, each frame is appropriately partitioned into different layers, with each layer containing semantically meaningful content. Then a warping-based approach is used to fill missing parts caused by occlusion within the extracted layers to achieve a complete representation. EXCOL provides a flexible way to effectively reuse traditional cartoon animations with only a small amount of user interaction. It is demonstrated that our EXCOL method is effective and robust, and the layered representation benefits a variety of applications in cartoon animation processing.


Youyi Zheng, Hongbo Fu, Oscar Kin-Chung Au, and Chiew-Lan Tai. Bilateral normal filtering for mesh denoising. IEEE Transactions on Visualization and Computer Graphics (TVCG). 17(10): 1521-1530. 2011.

Abstract: Decoupling local geometric features from the spatial location of a mesh is crucial for feature-preserving mesh denoising. This paper focuses on first-order features, i.e., facet normals, and presents a simple yet effective anisotropic mesh denoising framework via normal field denoising. Unlike previous denoising methods based on normal filtering, which process normals defined on the Gauss sphere, our method considers normals as a surface signal defined over the original mesh. This allows the design of a novel bilateral normal filter that depends on both spatial distance and signal distance. Our bilateral filter is a more natural extension of the elegant bilateral filter for image denoising than those used in previous bilateral mesh denoising methods. Besides applying this bilateral normal filter in a local, iterative scheme, as is common in most previous works, we present for the first time a global, non-iterative scheme for anisotropic denoising. We show that the former scheme is faster and more effective for denoising extremely noisy meshes while the latter scheme is more robust to irregular surface sampling. We demonstrate that both our feature-preserving schemes generally produce visually and numerically better denoising results than previous methods, especially at challenging regions with sharp features or irregular sampling.
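As a rough sketch of the local iterative scheme described above (an illustration, not the authors' implementation; the adjacency input and parameter names are hypothetical), a bilateral normal filter weights each neighboring face normal by both spatial distance between face centroids and signal distance between the normals themselves:

```python
import numpy as np

def bilateral_normal_filter(normals, centroids, neighbors, sigma_s, sigma_r, iters=5):
    """Illustrative local iterative scheme: replace each face normal by a
    bilateral-weighted average of its own and its neighbors' normals.
    `neighbors[i]` lists the indices of faces adjacent to face i."""
    n = normals.copy()
    for _ in range(iters):
        out = np.empty_like(n)
        for i, nbrs in enumerate(neighbors):
            idx = np.asarray(list(nbrs) + [i])
            # spatial weight: Gaussian on centroid distance
            ws = np.exp(-np.sum((centroids[idx] - centroids[i]) ** 2, axis=1)
                        / (2.0 * sigma_s ** 2))
            # signal weight: Gaussian on the difference of normals,
            # treating normals as a signal defined over the mesh itself
            wr = np.exp(-np.sum((n[idx] - n[i]) ** 2, axis=1)
                        / (2.0 * sigma_r ** 2))
            avg = ((ws * wr)[:, None] * n[idx]).sum(axis=0)
            out[i] = avg / np.linalg.norm(avg)  # keep unit length
        n = out
    return n
```

In such pipelines, vertex positions are subsequently updated to match the denoised normal field; that second stage is omitted here.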


Youyi Zheng, Hongbo Fu, Daniel Cohen-Or, Oscar Kin-Chung Au, and Chiew-Lan Tai. Component-wise controllers for structure-preserving shape manipulation. Computer Graphics Forum (CGF): special issue of Eurographics 2011. 30(2): 563-572. (Acceptance rate: 17.4%)

Abstract: Recent shape editing techniques, especially for man-made models, have gradually shifted focus from maintaining local, low-level geometric features to preserving structural, high-level characteristics like symmetry and parallelism. Such new editing goals typically require a pre-processing shape analysis step to enable subsequent shape editing. Observing that most editing of shapes involves manipulating their constituent components, we introduce component-wise controllers that are adapted to the component characteristics inferred by shape analysis. The controllers capture the natural degrees of freedom of individual components and thus provide an intuitive user interface for editing. A typical model often results in a moderate number of controllers, allowing easy establishment of semantic relations among them by automatic shape analysis supplemented with user interaction. We propose a component-wise propagation algorithm to automatically preserve the established inter-relations while maintaining the defining characteristics of individual controllers and respecting the user-specified modeling constraints. We extend these ideas to a hierarchical setup, allowing the user to adjust the tool complexity with respect to the desired modeling complexity. We demonstrate the effectiveness of our technique on a wide range of engineering models with structural features, often containing multiple connected pieces.

Project page

Oscar Kin-Chung Au, Chiew-Lan Tai, Daniel Cohen-Or, Youyi Zheng, and Hongbo Fu. Electors voting for fast automatic shape correspondence. Computer Graphics Forum (CGF): special issue of Eurographics 2010. 29(2): 645-654. (Acceptance rate: 20%)

Abstract: This paper challenges the difficult problem of automatic semantic correspondence between two given shapes which are semantically similar but possibly geometrically very different (e.g., a dog and an elephant). We argue that the challenging part is the establishment of a sparse correspondence and show that it can be efficiently solved by considering the underlying skeletons augmented with intrinsic surface information. To avoid a potentially costly direct search for the best combinatorial match between two sets of skeletal feature nodes, we introduce a statistical correspondence algorithm based on a novel voting scheme, which we call electors voting. The electors are a rather large set of correspondences which then vote to synthesize the final correspondence. The electors are selected via a combinatorial search with pruning tests designed to quickly filter out the vast majority of bad correspondences. This voting scheme is both efficient and insensitive to parameter and threshold settings. The effectiveness of the method is validated by precision-recall statistics with respect to manually defined ground truth. We show that high-quality correspondences can be instantaneously established for a wide variety of model pairs, which may have different poses, surface details, and only partial semantic correspondence.

Project page: [Paper]

Wei-Lwun Lu, Kevin P. Murphy, James J. Little, Alla Sheffer, and Hongbo Fu. A hybrid Conditional Random Field for estimating the underlying ground surface from airborne LiDAR data. IEEE Transactions on Geoscience and Remote Sensing (TGARS). 47(8): 2913-2922. 2009.

Abstract: Airborne laser scanners (LiDAR) return point clouds of millions of points imaging large regions. It is very challenging to recover the bare earth, i.e., the surface remaining after the buildings and vegetative cover have been identified and removed; manual correction of the recovered surface is very costly. Our solution combines classification into ground and non-ground with reconstruction of the continuous underlying surface. We define a joint model on the class labels and estimated surface, $p(\mathbf{c}, \mathbf{z} \mid \mathbf{x})$, where $c_i \in \{0,1\}$ is the label of point $i$ (ground or non-ground), $z_i$ is the estimated bare-earth surface at point $i$, and $x_i$ is the observed height of point $i$. We learn the parameters of this CRF using supervised learning. The graph structure is obtained by triangulating the point clouds. Given the model, we compute a MAP estimate of the surface, $\arg\max_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{x})$, using the EM algorithm, treating the labels $\mathbf{c}$ as missing data. Extensive testing shows that the recovered surfaces agree very well with those reconstructed from manually corrected data. Moreover, the resulting classification of points is competitive with the best in the literature.

Tiberiu Popa, Qingnan Zhou, Derek Bradley, Vladislav Kraevoy, Hongbo Fu, Alla Sheffer, and Wolfgang Heidrich. Wrinkling captured garments using space-time data-driven deformation. Computer Graphics Forum (CGF): special issue of Eurographics 2009. 28(2): 427-435. (Acceptance rate: 23%)

Abstract: The presence of characteristic fine folds is important for modeling realistic-looking virtual garments. While recent garment capture techniques are quite successful at capturing the low-frequency garment shape and motion over time, they often fail to capture the numerous high-frequency folds, reducing the realism of the reconstructed space-time models. In our work we propose a method for reintroducing fine folds into the captured models using data-driven dynamic wrinkling. We first estimate the shape and position of folds based on the original video footage used for capture and then wrinkle the surface based on those estimates using space-time deformation. Both steps utilize the unique geometric characteristics of garments in general, and garment folds specifically, to facilitate the modeling of believable folds. We demonstrate the effectiveness of our wrinkling method on a variety of garments that have been captured using several recent techniques.

Project page: [Paper]; [Video]

Chunxia Xiao, Hongbo Fu, and Chiew-Lan Tai. Hierarchical aggregation for efficient shape extraction. Springer The Visual Computer (TVC). 25(3): 267-278, February 2009.

Abstract: This paper presents an efficient framework which supports both automatic and interactive shape extraction from surfaces. Unlike most of the existing hierarchical shape extraction methods, which are based on computationally expensive top-down algorithms, our framework employs a fast bottom-up hierarchical method with multiscale aggregation. We introduce a geometric similarity measure, which operates at multiple scales and guarantees that a hierarchy of high-level features is automatically found through local adaptive aggregation. We also show that the aggregation process allows easy incorporation of user-specified constraints, enabling users to interactively extract features of interest. Both our automatic and interactive shape extraction methods do not require explicit connectivity information, and thus are applicable to unorganized point sets. Additionally, with the hierarchical feature representation, we design a simple and effective method to perform partial shape matching, allowing efficient search of self-similar features across the entire surface. Experiments show that our methods robustly extract visually meaningful features and are significantly faster than related methods.

Paper: [Online First]

Kun Xu, Yuntao Jia, Hongbo Fu, Shi-Min Hu, and Chiew-Lan Tai. Spherical piecewise constant basis functions for all-frequency precomputed radiance transfer. IEEE Transactions on Visualization and Computer Graphics (TVCG). 14(2): 454-467, March/April 2008. (IEEE TVCG Featured Article) [citation]

Abstract: This paper presents a novel basis function, called the spherical piecewise constant basis function (SPCBF), for precomputed radiance transfer. SPCBFs have several desirable properties: rotatability, the ability to represent all-frequency signals, and support for efficient multiple products. By partitioning the illumination sphere into a set of subregions, and associating each subregion with an SPCBF valued 1 inside the region and 0 elsewhere, we precompute the light coefficients using the resulting SPCBFs. At run time, we approximate the BRDF and visibility coefficients with the same set of SPCBFs through fast lookups of a summed-area table (SAT) and a visibility distance table (VDT), respectively. SPCBFs enable new effects such as object rotation in all-frequency rendering of dynamic scenes and on-the-fly BRDF editing under rotating environment lighting. With graphics hardware acceleration, our method achieves real-time frame rates.
Keywords: spherical piecewise constant basis functions, real-time rendering, precomputed radiance transfer

[Paper]; [Video]
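The summed-area-table lookup that makes the run-time approximation fast is a standard trick worth illustrating: any axis-aligned region sum is recovered from at most four table entries. A minimal sketch (the function names are illustrative, not from the paper):

```python
import numpy as np

def build_sat(img):
    """Summed-area table: sat[i, j] = sum of img[:i+1, :j+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(sat, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] via at most four SAT lookups."""
    total = sat[y1, x1]
    if y0 > 0:
        total -= sat[y0 - 1, x1]
    if x0 > 0:
        total -= sat[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += sat[y0 - 1, x0 - 1]
    return total

img = np.arange(16, dtype=float).reshape(4, 4)
sat = build_sat(img)
print(region_sum(sat, 1, 1, 2, 2))  # 5 + 6 + 9 + 10 = 30.0
```

After the one-time O(n) table construction, every query is O(1) regardless of region size, which is what makes the per-frame coefficient approximation affordable.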

Chunxia Xiao, Shu Liu, Hongbo Fu, Chengchun Lin, Chengfang Song, Zhiyong Huang, Fazhi He, and Qunsheng Peng. Video completion and synthesis. Computer Animation and Virtual Worlds (CAVW): Special Issue of Computer Animation & Social Agents (CASA 2008). 19(3-4): 341-353, 2008.

Hongbo Fu, Oscar Kin-Chung Au, and Chiew-Lan Tai. Effective derivation of similarity transformations for implicit Laplacian mesh editing, Computer Graphics Forum (CGF). 26(1): 34-45, March 2007. (a previous version appeared as a technical report) [citation]

Abstract: Laplacian coordinates as a local shape descriptor have been employed in mesh editing. As they are encoded in the global coordinate system, they need to be transformed locally to reflect the changed local features of the deformed surface. We present a novel implicit Laplacian editing framework that is linear and effectively captures local rotation information during editing. Directly representing rotation with respect to vertex positions in 3D space leads to a nonlinear system. Instead, we first compute the affine transformations implicitly defined for all the Laplacian coordinates by solving a large sparse linear system, and then extract the rotation and uniform scaling information from each solved affine transformation. Unlike existing differential-based mesh editing techniques, our method produces visually pleasing deformation results under large-angle rotations or large-scale translations of handles. Additionally, to demonstrate the advantage of our editing framework, we introduce a new intuitive editing technique, called configuration-independent merging, which produces the same merging result regardless of the relative position, orientation, and scale of the input meshes.
Keywords: mesh editing, similarity invariant, Laplacian coordinates, configuration-independent, mesh deformation, mesh merging

Project page
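For readers unfamiliar with the underlying machinery, a minimal illustration of plain Laplacian-coordinate reconstruction (uniform weights, soft positional constraints, and no rotation handling, i.e., exactly the baseline this paper improves on) might look like the following; the toy polygon, weights, and handle choices are ours, not from the paper:

```python
import numpy as np

# Closed polygon; uniform Laplacian: delta_i = v_i - (v_{i-1} + v_{i+1}) / 2.
V = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
n = len(V)
L = np.eye(n)
for i in range(n):
    L[i, (i - 1) % n] -= 0.5
    L[i, (i + 1) % n] -= 0.5
delta = L @ V  # Laplacian coordinates, encoded in the global frame

# Two handle vertices with soft positional constraints of weight w.
w = 100.0
C = np.zeros((2, n))
C[0, 0] = C[1, 2] = w
targets = np.array([[0., 0.], [2., 2.]])

# Least-squares reconstruction: preserve delta while meeting the handles.
A = np.vstack([L, C])
b = np.vstack([delta, w * targets])
V_new, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Because delta is kept fixed in the global frame here, dragging a handle shears the local detail; the paper's contribution is to additionally solve for per-coordinate similarity transformations of delta, implicitly, within the same linear framework.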

Oscar Kin-Chung Au, Chiew-Lan Tai, Ligang Liu, and Hongbo Fu. Dual Laplacian editing for meshes, IEEE Transactions on Visualization and Computer Graphics (TVCG). 12(3): 386-395, May/June 2006. (a previous version appeared as a technical report) [citation]

Abstract: Recently, differential information as local intrinsic feature descriptors has been used for mesh editing. Given certain user input as constraints, a deformed mesh is reconstructed by minimizing the changes in the differential information. Since the differential information is encoded in a global coordinate system, it must somehow be transformed to fit the orientations of details in the deformed surface, otherwise distortion will appear. We observe that visually pleasing deformed meshes should preserve both local parameterization and geometry details. We propose to encode these two types of information in the dual mesh domain due to the simplicity of the neighborhood structure of dual mesh vertices. Both sets of information are nondirectional and nonlinearly dependent on the vertex positions. Thus, we present a novel editing framework that iteratively updates both the primal vertex positions and the dual Laplacian coordinates to progressively reduce distortion in parametrization and geometry. Unlike previous related work, our method can produce visually pleasing deformations with simple user interaction, requiring only the handle positions, not local frames at the handles.
Keywords: mesh editing, local shape representation, click-and-drag interface, shape preserving, dual Laplacian

Project page

Conference and Exhibition

Yilan Chen*, Hongbo Fu, and Kin-Chung Au. A multi-level sketch-based interface for decorative pattern exploration. SIGGRAPH Asia 2016 Technical Briefs. Macao, Dec. 5-8, 2016.

Abstract: Despite the extensive use of decorative patterns in art and design, there is a lack of intuitive ways to find a certain type of pattern. In this paper, we present a multi-level sketch-based interface that incorporates low-level geometrical features and high-level structural features, namely reflection, rotation, and translation symmetries, to support decorative pattern exploration at different levels of detail. Four brush tools are designed for users to specify any combination of such features and compose a hybrid search query. The results of a pilot study show that users are able to perform pattern retrieval tasks using our system easily and effectively.

[Paper, Video, Project Page]

Lei Li*, Zhe Huang*, Changqing Zou*, Chiew-Lan Tai, Rynson Lau, Hao Zhang, Ping Tan, and Hongbo Fu. Model-driven sketch reconstruction with structure-oriented retrieval. SIGGRAPH Asia 2016 Technical Briefs. Macao, Dec. 5-8, 2016.

Abstract: We propose an interactive system that aims at lifting a 2D sketch into a 3D sketch with the help of existing models in shape collections. The key idea is to exploit part structure for shape retrieval and sketch reconstruction. We adopt sketch-based shape retrieval and develop a novel matching algorithm which considers structure in addition to traditional shape features. From a list of retrieved models, users select one to serve as a 3D proxy, providing abstract 3D information. Then our reconstruction method transforms the sketch into 3D geometry by back-projection, followed by an optimization procedure based on the Laplacian mesh deformation framework. Preliminary evaluations show that our retrieval algorithm is more effective than a state-of-the-art method and users can create interesting 3D forms of sketches without precise drawing skills.

[Paper]; [Video]

Pui Chung Wong*, Hongbo Fu, and Kening Zhu. Back-Mirror: back-of-device one-handed interaction on smartphones. SIGGRAPH Asia 2016 Symposium on Mobile Graphics and Interactive Applications. Presentation and Demonstrations. Macao, Dec. 5-8, 2016. (Best Demo Honorable Mention)

Abstract: We present Back-Mirror, a low-cost camera-based approach for widening the interaction space on the back surface of a smartphone by using mirror reflection. Back-Mirror consists of two main parts: a smartphone accessory with a mirror that reflects the back surface to the rear-facing camera of the phone, and a computer-vision algorithm for gesture recognition based on the visual pattern on the back surface. Our approach captures the finger position on the back surface, and tracks finger movement at higher resolution than previous methods. We further designed a set of intuitive gestures that can be recognized by Back-Mirror, including swiping up, down, left, and right, tapping left, middle, and right, and holding gestures. Furthermore, we created back-of-device applications, such as a game, a media player, a photo gallery, and an unlock mechanism, allowing users to experience Back-Mirror gestures in real-life scenarios.

[Paper, Video, Project Page]

Qingkun Su*, Kin-Chung Au, Pengfei Xu, Hongbo Fu, and Chiew-Lan Tai. 2D-Dragger: Unified Touch-based Target Acquisition with Constant Effective Width. Mobile HCI 2016. Florence, September 6-9, 2016.

Abstract: In this work we introduce 2D-Dragger, a unified touch-based target acquisition technique that enables easy access to small targets in dense regions or distant targets on screens of various sizes. The effective width of a target is constant with our tool, allowing a fixed scale of finger movement for capturing a new target. Our tool is thus insensitive to the distribution and size of the selectable targets, and consistently works well for screens of different sizes, from mobile to wall-sized screens. Our user studies show that overall 2D-Dragger performs the best compared to the state-of-the-art techniques for selecting both near and distant targets of various sizes in different densities.

[Paper, Video]

Quoc Huy Phan*, Jingwan Lu, Paul Asente, Antoni B. Chan, and Hongbo Fu. Patternista: Learning element style compatibility and spatial composition for ring-based layout decoration. Expressive 2016. Lisbon, May 7-9, 2016.

Abstract: Creating aesthetically pleasing decorations for daily objects is a task that requires a deep understanding of multiple aspects of object decoration, including color, composition, and element compatibility. A designer needs a unique aesthetic style to create artworks that stand out. Although specific subproblems have been studied before, the overall problem of design recommendation and synthesis is still relatively unexplored. In this paper, we propose a flexible data-driven framework to jointly consider two aspects of this design problem: style compatibility and spatial composition. We introduce a ring-based layout model capable of capturing decorative compositions for objects like plates, vases, and pots. Our layout representation allows the use of the hidden Markov model (HMM) technique to make intelligent design suggestions for each region of a target object in a sequential fashion. We conducted both quantitative and qualitative experiments to evaluate the framework and obtained favorable results.


Quoc Huy Phan*, Hongbo Fu, and Antoni B. Chan. Look closely: Learning exemplar patches for recognizing textiles from product images. ACCV 2014. Singapore, Nov 1-5, 2014.

Abstract: The resolution of product images is becoming higher due to the rapid development of digital cameras and the Internet. Higher-resolution images expose novel feature relationships that did not exist before. For instance, from a large image of a garment, one can observe the overall shape, the wrinkles, and micro-level details such as sewing lines and weaving patterns. The key idea of our work is to combine features obtained at such largely different scales to improve textile recognition performance. Specifically, we develop a robust semi-supervised model that exploits both micro textures and macro deformable shapes to select representative patches from product images. The selected patches are then used as inputs to conventional texture recognition methods. We show that, by learning from human-provided image regions, the method can suggest more discriminative regions that lead to higher categorization rates (+5-7%). We also show that our patch selection method significantly improves the performance of conventional texture recognition methods that usually rely on dense sampling. Our dataset of labeled textile images will be released for further investigation in this emerging field.


Chun Kit Tsui*, Chi Hei Law*, and Hongbo Fu. One-man Orchestra: conducting smartphone orchestra. SIGGRAPH Asia 2014, Emerging Technologies. Shenzhen, December 2014. (Best Demo Award)

Abstract: This work presents a new platform for performing a one-man orchestra. The conductor is the only human involved, and uses traditional bimanual conducting gestures to interactively direct the performance of smartphones instead of human performers in a real-world orchestra. Each smartphone acts as a virtual performer playing a certain musical instrument, such as the piano or violin. Our work not only allows ordinary people to experience music conducting but also provides a training platform so that students can practice music conducting with a unique listening experience.

Project page

Jingbo Liu, Hongbo Fu, and Chiew-Lan Tai. Dynamic sketching: simulating the process of observational drawing. CAe '14: Proceedings of the Workshop on Computational Aesthetics. Vancouver, August 2014.

Abstract: The creation process of a drawing provides a vivid visual progression, allowing the audience to better comprehend the drawing. It also enables numerous stroke-based rendering techniques. In this work we tackle the problem of simulating the process of observational drawing, that is, how people draw lines when sketching a given 3D model. We present a multi-phase drawing framework and the concept of sketching entropy, which provides a unified way to model stroke selection and ordering, both within and across phases. We demonstrate the proposed ideas for the sketching of organic objects and show a visually plausible simulation of their dynamic sketching process.

[Paper]; [Video]

Hongbo Fu, Xiaoguang Han*, and Phan Quoc Huy*. Data-driven suggestions for portrait posing. ACM SIGGRAPH Asia 2013, Technical Briefs, Hong Kong, November, 2013.
Hongbo Fu, Xiaoguang Han*, and Phan Quoc Huy*. Data-driven suggestions for portrait posing. ACM SIGGRAPH Asia 2013, Emerging Technologies, Hong Kong, November, 2013. Best Demo Award. One of the four program highlights among all the accepted works.

Abstract: This work introduces an easy-to-use creativity support tool for portrait posing, which is an important but challenging problem in portrait photography. While it is well known that a collection of sample poses is a source of inspiration, manual browsing is currently the only option to identify a desired pose from a possibly large collection of poses. With our tool, a photographer is able to easily retrieve desired reference poses as guidance or stimulate creativity. We show how our data-driven suggestions can be used to either refine the current pose of a subject or explore new poses. Our pilot study indicates that unskilled photographers find our data-driven suggestions easy to use and useful, though the role of our suggestions in improving aesthetic quality or pose diversity still needs more investigation. Our work takes the first step of using consumer-level depth sensors towards more intelligent cameras for computational photography.

Project page

Wing Ho Andy Li* and Hongbo Fu. BezelCursor: Bezel-initiated cursor for one-handed target acquisition on mobile touch screens. SIGGRAPH Asia 2013, Symposium on Mobile Graphics and Interactive Applications (Demonstrations). Hong Kong, November 2013.

Abstract: We present BezelCursor, a novel one-handed thumb interaction technique for target acquisition on mobile touch screens of various sizes. Our technique combines bezel-initiated interaction and gestural pointing to solve the problem of limited screen accessibility afforded by the thumb. With a fixed, comfortable grip of a mobile touch device, a user may employ our tool to easily and quickly access a target located anywhere on the screen, using a single fluid action. Unlike existing techniques, our tool requires no explicit mode switching to invoke and can be smoothly used together with commonly adopted interaction styles such as direct touch and dragging. A user study shows that the performance of our technique is comparable to or even better than that of the state-of-the-art techniques, which, however, suffer from various problems such as explicit mode switching, finger occlusion, and/or limited accessibility.

Project page

Lu Chen, Hongbo Fu, Wing Ho Andy Li*, and Chiew-Lan Tai. Scalable maps of random dots for middle-scale locative games. IEEE Virtual Reality 2013, Orlando, Florida, USA, March, 2013.

Abstract: In this work we present a new scalable map for middle-scale locative games. Our map is built upon the recent development of fiducial markers, specifically, the random dot markers. We propose a simple solution, i.e., using a grid of compound markers, to address the scalability problem. Our highly scalable approach is able to generate a middle-scale map on which multiple players can stand and position themselves via mobile cameras in real time. We show how a classic computer game can be effectively adapted to our middle-scale gaming platform.

Project page

Wing Ho Andy Li* and Hongbo Fu. Augmented reflection of reality. SIGGRAPH 2012 Emerging Technologies, Los Angeles, USA, August 2012.

Abstract: Unlike existing augmented-reality techniques, which typically augment the real world surrounding a user with virtual objects and visualize those effects using various see-through displays, this system focuses on augmenting the user's full body. A half-silvered mirror combines the user's reflection with synthetic data to provide a mixed world. With a live and direct view of the user and the surrounding environment, the system allows the user to intuitively control virtual objects (for example, virtual drums) via the augmented reflection.

Project page

Bin Bao* and Hongbo Fu. Vectorizing line drawings with near-constant line width. IEEE International Conference on Image Processing (ICIP 2012), Orlando, Florida, USA, September-October 2012.

Abstract: Many line drawing images are composed of lines with near-constant width. Such line width information has seldom been used in the vectorization process. In this work, we show that by enforcing the near-constant line width constraint, we are able to produce visually more pleasing vectorization results. To this end, we develop a tracing-based approach, allowing dynamic validation of the line width constraint. The key here is to derive correct tracing directions, which are determined based on an automatically estimated orientation field, shape smoothness, and the near-constant line width assumption. We have examined our algorithm on a variety of line drawing images with different shape and topology complexity. We show that our solution outperforms state-of-the-art vectorization software systems including WinTopo and Adobe Illustrator, especially at regions where multiple lines meet and are thus difficult to locally distinguish from each other.

Project page

Wei-Lwun Lu, James J. Little, Alla Sheffer, and Hongbo Fu. Deforestation: Extracting 3D bare-earth surface from airborne LiDAR data. The Fifth Canadian Conference on Computer and Robot Vision (CRV 2008), pages 203-210, Windsor, Canada, May 2008.

Abstract: Bare-earth identification selects points from a LiDAR point cloud so that they can be interpolated to form a representation of the ground surface from which structures, vegetation, and other cover have been removed. We triangulate the point cloud and segment the triangles into flat and steep triangles using a Discriminative Random Field (DRF) that uses a data-dependent label smoothness term. Regions are classified into ground and non-ground based on their steepness, and ground points are selected as points on ground triangles. Various post-processing steps further identify flat regions as rooftops and treetops, and eliminate isolated features that affect the surface interpolation. The performance of our algorithm is evaluated by its effectiveness at labeling ground points and, more importantly, at determining the extracted bare-earth surface. Extensive comparison shows the effectiveness of the strategy at selecting ground points, leading to a good fit in the triangulated mesh derived from them.


Hongbo Fu, Yichen Wei, Chiew-Lan Tai, and Long Quan. Sketching hairstyles, EUROGRAPHICS Workshop on Sketch-Based Interfaces and Modeling (SBIM 2007), pages 31-36, UC Riverside, USA, August 2007. [citation]

Abstract: This paper presents an intuitive sketching interface for interactive hairstyle design, made possible by an efficient numerical updating scheme. The user portrays the global shape of a desired hairstyle through a few 3D style curves that are manipulated by interactively sketching freeform strokes. Our approach is based on a vector field representation that solves a sparse linear system with the style curves acting as boundary constraints. The key observation is that the specific sparseness pattern of the linear system enables an efficient incremental numerical updating scheme. This gives rise to a sketching interface that provides interactive visual feedback to the user. Interesting hairstyles can be easily created in minutes.
Keywords: vector field editing, Cholesky modification, hairstyle sketching


Xiaohuang Huang, Hongbo Fu, Oscar Kin-Chung Au, and Chiew-Lan Tai. Optimal boundaries for Poisson mesh merging, ACM Solid and Physical Modeling Symposium 2007 (SPM 2007), pages 35-40, Beijing, China, June 2007. (Acceptance rate: 26.6%) [citation]

Abstract: Existing Poisson mesh editing techniques mainly focus on designing schemes to propagate deformation from a given boundary condition to a region of interest. Although solving the Poisson system in the least-squares sense distributes the distortion errors over the entire region of interest, large deformation in the boundary condition might still lead to severely distorted results. We propose to optimize the boundary condition (the merging boundary) for Poisson mesh merging. The user needs only to casually mark a source region and a target region. Our algorithm automatically searches for an optimal boundary condition within the marked regions such that the change of the found boundary during merging is minimal in terms of similarity transformation. Experimental results demonstrate that our merging tool is easy to use and produces visually better merging results than unoptimized techniques.
Keywords: mesh merging, Poisson mesh editing, optimal boundaries


Xiangye Xiao, Qiong Luo, Dan Hong, and Hongbo Fu. Slicing*-tree based web page transformation for small displays. ACM Fourteenth Conference on Information and Knowledge Management (CIKM 2005), Bremen, Germany, 2005. (Journal version appears in ACM Transactions on the Web) [citation]

Hongbo Fu, Chiew-Lan Tai, and Oscar Kin-Chung Au. Morphing with Laplacian coordinates and spatial-temporal texture, In Proceedings of Pacific Graphics 2005 (PG 2005), pages 100-102, Macao, China, October 2005. (Acceptance rate: 35.5%) [citation]

Abstract: Given 2D or 3D shapes, the objective of morphing is to create a sequence of gradually changed shapes and to keep individual shapes as visually pleasing as possible. In this paper, we present a morphing technique for 2D planar curves (open or closed) by coherently interpolating the source and target Laplacian coordinates. Although the Laplacian coordinates capture the geometric features of a shape, they are not rotation-invariant. By applying as-rigid-as-possible transformations with rotation coherence constraints to the Laplacian coordinates, we make the intermediate morphing shapes highly appealing. Our method successfully avoids local self-intersections. We also propose to interpolate the textures within simple closed curves using a spatial-temporal structure. In existing texture morphing techniques, textures are encoded by either skeleton structures or triangulations. Therefore, the morphing results depend on the quality of these skeleton structures or triangulations. Given two simple closed curves and their interpolated shapes, our method automatically finds a one-to-one mapping between the source and target textures without any skeleton or triangulation and guarantees that neighboring pixels morph coherently.
Keywords: Laplacian coordinates, spatial-temporal texture, shape morphing, as-rigid-as-possible


Oscar Kin-Chung Au, Chiew-Lan Tai, Hongbo Fu, and Ligang Liu. Mesh editing with curvature flow Laplacian, Symposium on Geometry Processing 2005 (SGP 2005), Vienna, Austria, July, 2005 (Poster). [citation]

Introduction: Differential coordinates are essentially vectors encoded in the global coordinate system. Since the local features on a mesh are deformed and rotated during editing, the differential coordinates must somehow be transformed to match the desired new orientations, otherwise distortion like shearing and stretching will occur. This transformation problem is basically a chicken-and-egg problem: the reconstruction of the deformed surface requires properly oriented differential coordinates, while the reorientation of these coordinates depends on the unknown deformed mesh. We present an iterative Laplacian-based editing framework to solve this transformation problem. The only user input required is the handle positions, not their local frames. Thus our system supports simple point-handle editing. Our iterative updating process finds the best orientations of local features, including the orientations at the point handles.

[Paper]; [Poster]

Hongbo Fu, Chiew-Lan Tai, and Hongxin Zhang. Topology-free cut-and-paste editing over meshes, Geometric Modeling and Processing 2004 (GMP 2004), pages 173-182, Beijing, China, April 2004. (Acceptance rate: 23.3%) [citation]

Abstract: Existing cut-and-paste editing methods over meshes are inapplicable to regions with non-zero genus. To overcome this drawback, we propose a novel method in this paper. First, a base surface passing through the boundary vertices of the selected region is constructed using a boundary triangulation technique. Considering the connectivity between neighboring vertices, a new detail encoding technique is then presented based on surface parameterization. Finally, the detail representation is transferred onto the target surface via the base surface. This strategy of creating a base surface as a detail carrier allows us to paste features of non-zero genus onto the target surface. By taking the physical relationship of adjacent vertices into account, our detail encoding method produces more natural and less distorted results. Therefore, our method not only eliminates the dependence on the topology of the selected feature, but also effectively reduces distortion during pasting.
Keywords: topology-free, cut-and-paste, mesh editing



Book & Thesis

Hongbo Fu. Advanced programming in Delphi 6.0, Publishing House of Electronics Industry, March 2002, ISBN 7-900084-62-2 (in Chinese). Buy this book at dearbook.
Brief introduction: This book presents the essence of Delphi programming through a variety of advanced examples. The examples focus on the development of multimedia and Internet applications, for example, OpenGL, Indy components, XML, Web Broker and WebSnap techniques.

Hongbo Fu. Differential methods for intuitive 3D shape modeling, Ph.D. Thesis, 20 July 2007.

Thesis Committee

Thesis (PDF: 5.7M)

Hongbo Fu. Magnetocardiography signal denoising techniques. Undergraduate Thesis, July 2002.


Technical Report

Hongbo Fu. Differential methods for intuitive 3D shape modeling, PhD Thesis Proposal, 21 May 2007.
Abstract: Recently, differential information as local intrinsic feature descriptors has been used for mesh editing. Given certain user input as constraints, a deformed mesh is reconstructed by minimizing the changes in the differential information. Since the differential information is encoded in the global coordinate system, it must somehow be transformed to fit the orientation of details in the deformed surface, otherwise distortion will appear. We observe that visually desired deformed meshes should preserve both local parameterization and geometry details. To find suitable representations for these two types of information, we exploit certain properties of the curvature flow Laplacian operator. Specifically, we consider the coefficients of Laplacian operator as the parametrization information and the magnitudes of the Laplacian coordinates as the geometry information. Both sets of information are non-directional and non-linearly dependent on the vertex positions. Thus, we propose a new editing framework that iteratively updates both the vertex positions and the Laplacian coordinates to reduce distortion in parametrization and geometry. Our method can produce visually pleasing deformation with simple user interaction, requiring only the handle positions, not the local frames at the handles. In addition, since the magnitudes of the Laplacian coordinates approximate the integrated mean curvatures, our framework is useful for modifying mesh geometry via updating the curvature field. We demonstrate this use in spherical parameterization and non-shrinking smoothing.
Hongbo Fu, Chiew-Lan Tai. Mesh editing with affine-invariant Laplacian coordinates, Technical report, HKUST-CS05-01, January 2005.
Abstract: Differential coordinates as an intrinsic surface representation capture geometric details of a surface. However, differential coordinates alone cannot achieve desirable editing results, because they are not affine invariant. In this paper, we present a novel method that makes the Laplacian coordinates completely affine-invariant during editing. For each vertex of a surface to be edited, we compute the Laplacian coordinate and implicitly define a local affine transformation that is dependent on the unknown edited vertices. During editing, both the resulting surface and the implicit local affine transformations are solved simultaneously through a constrained optimization. The underlying mathematics of our method is a set of linear Partial Differential Equations (PDEs) with a generalized boundary condition. The main computation involved comes from factorizing the resulting sparse system of linear equations, which is performed only once. After that, back substitutions are executed to interactively respond to user manipulations. We propose a new editing technique, called pose-independent merging, to demonstrate the advantages of the affine-invariant Laplacian coordinates. In the same framework, large-scale mesh deformation and pose-dependent mesh merging are also presented.
Hongbo Fu. A survey of editing techniques on surface models and point-based models, PhD Qualifying Examination, 19 December 2003.

My co-authors (by alphabetical order)

Hong Kong Oscar Kin-Chung Au (CityU), Long Quan (HKUST), Chiew-Lan Tai (HKUST)
Mainland China Shimin Hu (Tsinghua), Ligang Liu (ZJU), Qunsheng Peng (ZJU), Yichen Wei (MSRA), Chunxia Xiao (WHU), Kun Xu (Tsinghua), Hongxin Zhang (ZJU)
Taiwan Yu-Shuen Wang (NCKU), Tong-Yee Lee (NCKU)
Canada Derek Bradley (UBC), Wolfgang Heidrich (UBC), Wei-Lwun Lu (UBC), Vladislav Kraevoy (UBC), Jim Little (UBC), Kevin Murphy (UBC), Tiberiu Popa (UBC), Alla Sheffer (UBC)
Germany Hans-Peter Seidel (MPII)
Israel Daniel Cohen-Or (Tel Aviv), Gideon Dror (Academic College of Tel-Aviv-Yaffo)
United States Yuntao Jia (UIUC), Olga Sorkine (NYU)
All rights reserved. Copyright©2002-2017 Hongbo Fu.