## SIGGRAPH / SA (12), CHI (3), UIST (2), TVCG (9)

### Journal

Qiang Fu*, Xiaowu Chen, Xiaoyu Su, and Hongbo Fu. Pose-inspired shape synthesis and functional hybrid. IEEE Transactions on Visualization and Computer Graphics (TVCG). 23(12): 2574-2585. December 2017. Abstract: We introduce a shape synthesis approach, especially for functional hybrid creation, that can potentially be used by a human operator under a certain pose. Shape synthesis by reusing parts of existing models has been an active research topic in recent years. However, how to combine models across different categories to design multi-function objects remains challenging, since there is no natural correspondence between models of different categories. We tackle this problem by introducing a human pose to describe object affordance, which establishes a bridge between cross-class objects for composite design. Specifically, our approach first identifies groups of candidate shapes which provide the affordances desired by an input human pose, and then recombines them into well-connected composite models. Users may control the design process by manipulating the input pose, or optionally specifying one or more desired categories. We also extend our approach to a single operator with multiple poses, or to multiple human operators. We show that our approach enables easy creation of nontrivial, interesting synthesized models. [Paper, Video]

Quoc Huy Phan*, Hongbo Fu, and Antoni Chan. Color Orchestra: Ordering color palettes for interpolation and prediction. IEEE Transactions on Visualization and Computer Graphics (TVCG). Accepted for publication. Abstract: A color theme or color palette can deeply influence the quality and the feeling of a photograph or a graphical design. Although color palettes may come from different sources such as online crowd-sourcing, photographs, and graphical designs, in this paper we consider color palettes extracted from fine art collections, which we believe to be an abundant source of stylistic and unique color themes.
We aim to capture the color styles embedded in these collections by means of statistical models and to build practical applications upon these models. As artists often use their personal color themes in their paintings, making these palettes appear frequently in the dataset, we employed density estimation to capture the characteristics of palette data. Via density estimation, we carried out various predictions and interpolations on palettes, which led to promising applications such as photo-style exploration, real-time color suggestion, and enriched photo recolorization. It was, however, challenging to apply density estimation to palette data, as palettes often come as unordered sets of colors, which makes it difficult to use conventional metrics on them. To this end, we developed a divide-and-conquer sorting algorithm to rearrange the colors in the palettes in a coherent order, which allows meaningful interpolation between color palettes. To confirm the performance of our model, we also conducted quantitative experiments on datasets of digitized paintings collected from the Internet and received favorable results.

Sheng Yang, Kang Chen, Minghua Liu, Hongbo Fu, and Shi-Min Hu. Saliency-aware real-time volumetric fusion for object reconstruction. Computer Graphics Forum (Proceedings of Pacific Graphics 2017). 36(7): 167-174. October 2017. Abstract: We present a real-time approach for acquiring 3D objects with high fidelity using hand-held consumer-level RGB-D scanning devices. Existing real-time reconstruction methods typically do not take the point of interest into account, and thus might fail to produce clean reconstructions of desired objects due to distracting objects or backgrounds. In addition, any change in the background during scanning, which often occurs in real scenarios, can easily break the whole reconstruction process. To address these issues, we incorporate visual saliency into a traditional real-time volumetric fusion pipeline.
Salient regions detected from RGB-D frames suggest user-intended objects, and by understanding user intentions our approach can put more emphasis on important targets while eliminating the disturbance of non-important objects. Experimental results on real-world scans demonstrate that our system is capable of effectively acquiring the geometric information of salient objects in cluttered real-world scenes, even if the backgrounds are changing. [Paper, Video]

Sheng Yang, Jie Xu, Kang Chen, and Hongbo Fu. View suggestion for interactive segmentation of indoor scenes. Computational Visual Media. 3: 131. 2017. Abstract: Point cloud segmentation is a fundamental problem. Due to the complexity of real-world scenes and the limitations of 3D scanners, interactive segmentation is currently the only way to cope with all kinds of point clouds. However, interactively segmenting complex and large-scale scenes is very time-consuming. In this paper, we present a novel interactive system for segmenting point cloud scenes. Our system automatically suggests a series of camera views, in which users can conveniently specify segmentation guidance. In this way, users may focus on specifying segmentation hints instead of manually searching for desirable views of unsegmented objects, thus significantly reducing user effort. To achieve this, we introduce a novel view preference model, which is based on a set of dedicated view attributes, with weights learned from a user study. We also introduce support relations for both graph-cut-based segmentation and finding similar objects. Our experiments show that our segmentation technique helps users quickly segment various types of scenes, outperforming alternative methods. [Paper, Video]

Shi-Sheng Huang, Hongbo Fu, Lin-Yu Wei, and Shi-Min Hu. Support Substructures: Support-induced part-level structural representation. IEEE Transactions on Visualization and Computer Graphics (TVCG). 22(8): 2024-2036.
Aug 2016. Abstract: In this work we explore a support-induced structural organization of object parts. We introduce the concept of support substructures, which are special subsets of object parts with support and stability. A bottom-up approach is proposed to identify such substructures in a support relation graph. We apply the derived high-level substructures to part-based shape reshuffling between models, resulting in nontrivial, functionally plausible model variations that are difficult to achieve with the symmetry-induced substructures of the state of the art. We also show how to automatically or interactively turn a single input model into new functionally plausible shapes by structure rearrangement and synthesis, enabled by support substructures. To the best of our knowledge, no single existing method has been designed for all these applications. [Paper, Video]

Qiang Fu*, Xiaowu Chen, Xiaoyu Su, Jia Li, and Hongbo Fu. Structure-adaptive shape editing for man-made objects. Computer Graphics Forum (Proceedings of Eurographics 2016). 35(2): 27-36. May 2016. Abstract: One of the challenging problems in shape editing is to adapt shapes with diversified structures to various editing needs. In this paper we introduce a shape editing approach that automatically adapts the structure of a shape being edited with respect to user inputs. Given a category of shapes, our approach first classifies them into groups based on their constituent parts. The group-sensitive priors, including both inter-group and intra-group priors, are then learned through statistical structure analysis and multivariate regression. By using these priors, the inherent characteristics and typical variations of shape structures can be well captured. Based on such group-sensitive priors, we propose a framework for real-time shape editing, which adapts the structure of the shape to continuous user editing operations.
Experimental results show that the proposed approach is capable of both structure-preserving and structure-varying shape editing. [Paper, Video]

Wing Ho Andy Li*, Kening Zhu, and Hongbo Fu. Exploring the design space of bezel-initiated gestures for mobile interaction. International Journal of Mobile Human Computer Interaction. Volume 9, Issue 1, Jan. 2017. Abstract: The bezel enables useful gestures supplementary to primary surface gestures for mobile interaction. However, existing works mainly focus on researcher-designed gestures, which utilize only a subset of the design space. To explore the design space, we present a modified elicitation study, during which the participants designed bezel-initiated gestures for four sets of tasks. Unlike traditional elicitation studies, ours encourages participants to design new gestures. We do not focus on individual tasks or gestures, but perform a detailed analysis of the collected gestures as a whole, and provide findings which could benefit designers of bezel-initiated gestures. [Paper, Video]

Shi-Sheng Huang, Hongbo Fu, and Shi-Min Hu. Structure guided interior scene synthesis via graph matching. Graphical Models. Volume 85, Pages 46-55, May 2016. Abstract: We present a method for reshuffle-based 3D interior scene synthesis guided by scene structures. Given several 3D scenes, we represent each scene as a structure graph associated with a relationship set. Considering both object similarity and relation similarity, we then establish a furniture-object-based matching between scene pairs via graph matching. Such a matching allows us to merge the structure graphs into a unified structure, i.e., an Augmented Graph (AG). Guided by the AG, we perform scene synthesis by reshuffling objects through three simple operations, i.e., replacing, growing, and transferring. A synthesis compatibility measure considering the environment of the furniture objects is also introduced to filter out poor-quality results.
We show that our method is able to generate high-quality scene variations and outperforms the state of the art. [Paper]

Qiang Fu*, Xiaowu Chen, Xiaoyu Su, and Hongbo Fu. Natural lines inspired 3D shape re-design. Graphical Models. Volume 85, Pages 1-10, May 2016. Abstract: We introduce an approach for re-designing 3D shapes inspired by natural lines, such as the contours and skeletons extracted from natural objects in images. Designing an artistically creative and visually pleasing model is not easy for novice users. In this paper, we propose to convert such a design task into a computational procedure. Given a 3D object, we first compare its editable lines with various lines extracted from the image database to explore candidate reference lines. Then a parametric deformation method is employed to reshape the 3D object guided by the reference lines. We show that our approach enables users to quickly create non-trivial and interesting re-designed 3D objects. We also conduct a user study to validate the usability and effectiveness of our approach. [Paper]

Wing Ho Andy Li*, Hongbo Fu, and Kening Zhu. BezelCursor: Bezel-initiated cursor for one-handed target acquisition on mobile touch screens. International Journal of Mobile Human Computer Interaction. Volume 8, Issue 1, Jan-March 2016. Abstract: We present BezelCursor, a novel one-handed thumb interaction technique for target acquisition on mobile touch screens of various sizes. Our technique combines bezel-initiated interaction and pointing gestures to solve the problem of the limited screen accessibility afforded by the thumb. With a fixed, comfortable grip on a mobile touch device, a user may employ our tool to easily and quickly access a target located anywhere on the screen with a single fluid action. Unlike existing technologies, our technique requires no explicit mode switching to invoke and can be used smoothly together with commonly adopted interaction styles such as direct touch and dragging.
Our user study shows that BezelCursor requires less grip adjustment, and is more accurate or faster than state-of-the-art techniques when using a fixed secure grip. Project page

Quoc Huy Phan*, Hongbo Fu, and Antoni Chan. FlexyFont: Learning transferring rules for flexible typeface synthesis. Computer Graphics Forum (Proceedings of Pacific Graphics 2015). 34(7): 245-256. Oct. 2015. Abstract: Maintaining consistent styles across glyphs is an arduous task in typeface design. In this work we introduce FlexyFont, a flexible tool for synthesizing a complete typeface whose style is consistent with a given small set of glyphs. Motivated by the key fact that typeface designers often maintain a library of glyph parts to achieve a consistent typeface, we intend to learn part consistency between glyphs of different characters across typefaces. We take a part-assembling approach by first decomposing the given glyphs into semantic parts and then assembling them according to learned sets of transferring rules to reconstruct the missing glyphs. To maintain style consistency, we represent the style of a font as a vector of pairwise part similarities. By learning a distribution over these feature vectors, we are able to predict the style of a novel typeface given only a few examples. We utilize a popular machine learning method as well as retrieval-based methods to quantitatively assess the performance of our feature vector, with favorable results. We also present an intuitive interface that allows users to interactively create novel typefaces with ease. The synthesized fonts can be directly used in real-world design. [Paper, Video]

Xiaoyu Su, Xiaowu Chen, Qiang Fu, and Hongbo Fu. Cross-class 3D object synthesis guided by reference examples. Computers & Graphics (Special Issue on CAD/Graphics 2015). 54: 145-153. Feb. 2016.
Best Paper Award. Abstract: Re-combining parts of existing 3D object models is an interesting and efficient technique for creating novel shape collections. However, due to the lack of direct part correspondence across different shape families, such data-driven modeling approaches in the literature are limited to the synthesis of in-class shapes only. To address this problem, this paper proposes a novel approach to create 3D shapes via re-combination of cross-category object parts from an existing database of different model families. In our approach, a reference shape containing multi-functional constituent parts is pre-specified by users, and its design style is then reused to guide the creation process. To this end, functional substructures are first extracted from the reference shape. After that, we explore a series of category pairs which are potential replacements for the functional substructures of the reference shape to make interesting variations. We demonstrate our ideas using various examples, and present a user study to evaluate the usability and efficiency of our technique. [Paper]

Changqing Zou*, Shifeng Chen, Hongbo Fu, and Jianzhuang Liu. Progressive 3D reconstruction of planar-faced manifold objects with DRF-based line drawing decomposition. IEEE Transactions on Visualization and Computer Graphics (TVCG). 21(2): 252-263. Feb. 2015. Abstract: This paper presents an approach for reconstructing polyhedral objects from single-view line drawings. Our approach separates a complex line drawing representing a manifold object into a series of simpler line drawings, based on the degree of reconstruction freedom (DRF). We then progressively reconstruct a complete 3D model from these simpler line drawings. Our experiments show that our decomposition algorithm is able to handle complex drawings which are challenging for the state of the art.
The advantages of the presented progressive 3D reconstruction method over existing reconstruction methods in terms of both robustness and efficiency are also demonstrated. [Paper]

Changqing Zou*, Xiaojiang Peng, Hao Lv, Shifeng Chen, Hongbo Fu, and Jianzhuang Liu. Sketch-based 3-D modeling for piecewise planar objects in single images. Computers & Graphics (Special Issue of SMI 2014). 46: 130-137. Feb. 2015. Abstract: 3-D object modeling from single images has many applications in computer graphics and multimedia. Most previous 3-D modeling methods which directly recover 3-D geometry from single images require user interaction during the whole modeling process. In this paper, we propose a semi-automatic 3-D modeling approach to recover accurate 3-D geometry from a single image of a piecewise planar object with less user interaction. Our approach concentrates on three aspects: 1) requiring only rough sketch input, 2) accurate modeling for a large class of objects, and 3) automatically recovering the hidden parts of an object and providing a complete 3-D model. Experimental results on various objects show that the proposed approach provides a good solution to these three problems. [Paper]

Zhe Huang, Jiang Wang, Hongbo Fu, and Rynson Lau. Structured mechanical collage. IEEE Transactions on Visualization and Computer Graphics (TVCG). 20(7): 1076-1082. July 2014. Abstract: We present a method to build 3D structured mechanical collages consisting of numerous elements from a database, given artist-designed proxy models. The construction is guided by several graphic design principles, namely unity, variety, and contrast. Our results are visually more pleasing than those of previous works, as confirmed by a user study. [Paper]; [Video]; [Suppl]; [More results]

Xiaoguang Han*, Hongbo Fu, Hanlin Zheng*, Ligang Liu, and Jue Wang. A video-based interface for hand-driven stop motion animation production. IEEE Computer Graphics and Applications. 33(6): 70-81. 2013.
Abstract: Stop motion is a well-established animation technique, but its production is often laborious and requires craft skills. We present a new video-based interface which is capable of animating the vast majority of everyday objects in stop motion style in a more flexible and intuitive way. It allows animators to perform and capture motions continuously instead of breaking them into small increments and shooting one still picture per increment. More importantly, it permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The key component of our system is a two-phase keyframe-based capturing and processing workflow, assisted by computer vision techniques. We demonstrate that our system is efficient even for amateur animators, generating high-quality stop motion animations of a wide variety of objects. Project page

Bin Liao, Chunxia Xiao, Liqiang Jin, and Hongbo Fu. Efficient feature-preserving local projection operator for geometry reconstruction. 45(5): 861-874. Abstract: This paper proposes an efficient and Feature-preserving Locally Optimal Projection operator (FLOP) for geometry reconstruction. Our operator is bilaterally weighted, taking both spatial and geometric feature information into consideration for feature-preserving approximation. We then present an accelerated FLOP operator based on random sampling of the Kernel Density Estimate (KDE), which produces reconstruction results close to those generated using the complete point set data, to within a given accuracy. Additionally, we extend our approach to time-varying data reconstruction, via a Spatial-Temporal Locally Optimal Projection operator (STLOP), which efficiently generates temporally coherent, stable, feature-preserving results. The experimental results show that the proposed algorithms are efficient and robust for feature-preserving geometry reconstruction on both static models and time-varying data sets.
[Paper]

Jingbo Liu, Oscar Kin-Chung Au, Hongbo Fu, and Chiew-Lan Tai. Two-finger gestures for 6DOF manipulation of 3D objects. Computer Graphics Forum (CGF): special issue of Pacific Graphics 2012. 31(7): 2047-2055. (Acceptance rate: 19.6%) Abstract: Multitouch input devices afford effective solutions for 6DOF (six degrees of freedom) manipulation of 3D objects. Mainly focusing on large multitouch screens, existing solutions typically require at least three fingers and bimanual interaction for full 6DOF manipulation. However, single-hand, two-finger operations are preferred, especially on portable multitouch devices (e.g., popular smartphones), to cause less hand occlusion and to free the other hand for necessary tasks like holding the device. Our key idea for full 6DOF control using only two contact fingers is to introduce two manipulation modes and two corresponding gestures by examining the moving characteristics of the two fingers, instead of the number of fingers or the directness of individual fingers as done in previous works. We solve the resulting binary classification problem using a learning-based approach. Our pilot experiment shows that with only two contact fingers and typically unimanual interaction, our technique is comparable to or even better than the state-of-the-art techniques. Project page

Oscar Kin-Chung Au, Chiew-Lan Tai, and Hongbo Fu. Multitouch gestures for constrained transformation of 3D objects. Computer Graphics Forum (CGF): special issue of Eurographics 2012. 31(2): 651-660. (Acceptance rate: 25%) Abstract: 3D transformation widgets allow constrained manipulation of 3D objects and are commonly used in many 3D applications for fine-grained manipulation. Since traditional transformation widgets have been designed mainly for mouse-based systems, they are not user friendly on multitouch screens. There is little research on how to use the extra input bandwidth of multitouch screens to ease constrained transformation of 3D objects.
This paper presents a small set of multitouch gestures which offers seamless control of manipulation constraints (i.e., axis or plane) and modes (i.e., translation, rotation, or scaling). Our technique does not require any complex manipulation widgets, only candidate axes, which are for visualization rather than direct manipulation. Such a design not only minimizes visual clutter but also tolerates imprecise touch-based inputs. To further expand our axis-based interaction vocabulary, we introduce intuitive touch gestures for relative manipulations, including snapping and borrowing the axes of another object. A user study shows that our technique is more effective than a direct adaptation of standard transformation widgets to the tactile paradigm. Project page

Lei Zhang, Hua Huang, and Hongbo Fu. EXCOL: an EXtract-and-COmplete Layering approach to cartoon animation reusing. IEEE Transactions on Visualization and Computer Graphics (TVCG). 18(7): 1156-1169. 2012. Abstract: We introduce the EXCOL method (EXtract-and-COmplete Layering), a novel cartoon animation processing technique to convert a traditional animated cartoon video into multiple semantically meaningful layers. Our technique is inspired by vision-based layering techniques but focuses on shape cues in both the extraction and completion steps to reflect the unique characteristics of cartoon animation. For layer extraction, we define a novel similarity measure incorporating both the shape and color of automatically segmented regions within individual frames, and propagate a small set of user-specified layer labels among similar regions across frames. By clustering regions with the same labels, each frame is appropriately partitioned into different layers, with each layer containing semantically meaningful content. A warping-based approach is then used to fill the missing parts caused by occlusion within the extracted layers to achieve a complete representation. EXCOL provides a flexible way to effectively reuse traditional cartoon animations with only a small amount of user interaction.
It is demonstrated that our EXCOL method is effective and robust, and that the layered representation benefits a variety of applications in cartoon animation processing. [Paper]

Youyi Zheng, Hongbo Fu, Oscar Kin-Chung Au, and Chiew-Lan Tai. Bilateral normal filtering for mesh denoising. IEEE Transactions on Visualization and Computer Graphics (TVCG). 17(10): 1521-1530. 2011. Abstract: Decoupling local geometric features from the spatial location of a mesh is crucial for feature-preserving mesh denoising. This paper focuses on first-order features, i.e., facet normals, and presents a simple yet effective anisotropic mesh denoising framework via normal field denoising. Unlike previous denoising methods based on normal filtering, which process normals defined on the Gauss sphere, our method considers normals as a surface signal defined over the original mesh. This allows the design of a novel bilateral normal filter that depends on both spatial distance and signal distance. Our bilateral filter is a more natural extension of the elegant bilateral filter for image denoising than those used in previous bilateral mesh denoising methods. Besides applying this bilateral normal filter in a local, iterative scheme, as is common in most previous works, we present for the first time a global, non-iterative scheme for anisotropic denoising. We show that the former scheme is faster and more effective for denoising extremely noisy meshes, while the latter scheme is more robust to irregular surface sampling. We demonstrate that both our feature-preserving schemes generally produce visually and numerically better denoising results than previous methods, especially in challenging regions with sharp features or irregular sampling. [Paper]

Youyi Zheng, Hongbo Fu, Daniel Cohen-Or, Oscar Kin-Chung Au, and Chiew-Lan Tai. Component-wise controllers for structure-preserving shape manipulation. Computer Graphics Forum (CGF): special issue of Eurographics 2011. 30(2): 563-572.
(Acceptance rate: 17.4%) Abstract: Recent shape editing techniques, especially for man-made models, have gradually shifted focus from maintaining local, low-level geometric features to preserving structural, high-level characteristics like symmetry and parallelism. Such new editing goals typically require a pre-processing shape analysis step to enable subsequent shape editing. Observing that most editing of shapes involves manipulating their constituent components, we introduce component-wise controllers that are adapted to the component characteristics inferred by shape analysis. The controllers capture the natural degrees of freedom of individual components and thus provide an intuitive user interface for editing. A typical model often yields a moderate number of controllers, allowing easy establishment of semantic relations among them by automatic shape analysis supplemented with user interaction. We propose a component-wise propagation algorithm to automatically preserve the established inter-relations while maintaining the defining characteristics of individual controllers and respecting the user-specified modeling constraints. We extend these ideas to a hierarchical setup, allowing the user to adjust the tool complexity with respect to the desired modeling complexity. We demonstrate the effectiveness of our technique on a wide range of engineering models with structural features, often containing multiple connected pieces.

Oscar Kin-Chung Au, Chiew-Lan Tai, Daniel Cohen-Or, Youyi Zheng, and Hongbo Fu. Electors voting for fast automatic shape correspondence. Computer Graphics Forum (CGF): special issue of Eurographics 2010. 29(2): 645-654. (Acceptance rate: 20%) Abstract: This paper challenges the difficult problem of automatic semantic correspondence between two given shapes which are semantically similar but possibly geometrically very different (e.g., a dog and an elephant).
We argue that the challenging part is the establishment of a sparse correspondence, and show that it can be efficiently solved by considering the underlying skeletons augmented with intrinsic surface information. To avoid a potentially costly direct search for the best combinatorial match between two sets of skeletal feature nodes, we introduce a statistical correspondence algorithm based on a novel voting scheme, which we call electors voting. The electors are a rather large set of correspondences which then vote to synthesize the final correspondence. The electors are selected via a combinatorial search with pruning tests designed to quickly filter out the vast majority of bad correspondences. This voting scheme is both efficient and insensitive to parameter and threshold settings. The effectiveness of the method is validated by precision-recall statistics with respect to manually defined ground truth. We show that high-quality correspondences can be instantaneously established for a wide variety of model pairs, which may have different poses, surface details, and only partial semantic correspondence. Project page: [Paper]

Wei-Lwun Lu, Kevin P. Murphy, James J. Little, Alla Sheffer, and Hongbo Fu. A hybrid Conditional Random Field for estimating the underlying ground surface from airborne LiDAR data. IEEE Transactions on Geoscience and Remote Sensing (TGARS). 47(8): 2913-2922. 2009. Abstract: Airborne laser scanners (LiDAR) return point clouds of millions of points imaging large regions. It is very challenging to recover the bare earth, i.e., the surface remaining after the buildings and vegetative cover have been identified and removed; manual correction of the recovered surface is very costly. Our solution combines classification into ground and non-ground with reconstruction of the continuous underlying surface.
We define a joint model on the class labels and estimated surface, $p(\mathbf{c},\mathbf{z}\mid\mathbf{x})$, where $c_i \in \{0,1\}$ is the label of point $i$ (ground or non-ground), $z_i$ is the estimated bare-earth surface at point $i$, and $x_i$ is the observed height of point $i$. We learn the parameters of this CRF using supervised learning. The graph structure is obtained by triangulating the point clouds. Given the model, we compute a MAP estimate of the surface, $\arg\max_{\mathbf{z}} p(\mathbf{z}\mid\mathbf{x})$, using the EM algorithm, treating the labels $\mathbf{c}$ as missing data. Extensive testing shows that the recovered surfaces agree very well with those reconstructed from manually corrected data. Moreover, the resulting classification of points is competitive with the best in the literature.

Tiberiu Popa, Qingnan Zhou, Derek Bradley, Vladislav Kraevoy, Hongbo Fu, Alla Sheffer, and Wolfgang Heidrich. Wrinkling captured garments using space-time data-driven deformation. Computer Graphics Forum (CGF): special issue of Eurographics 2009. 28(2): 427-435. (Acceptance rate: 23%) Abstract: The presence of characteristic fine folds is important for modeling realistic-looking virtual garments. While recent garment capture techniques are quite successful at capturing the low-frequency garment shape and motion over time, they often fail to capture the numerous high-frequency folds, reducing the realism of the reconstructed space-time models. In our work we propose a method for reintroducing fine folds into the captured models using data-driven dynamic wrinkling. We first estimate the shape and position of folds based on the original video footage used for capture, and then wrinkle the surface based on those estimates using space-time deformation. Both steps utilize the unique geometric characteristics of garments in general, and garment folds specifically, to facilitate the modeling of believable folds.
We demonstrate the effectiveness of our wrinkling method on a variety of garments that have been captured using several recent techniques. Project page: [Paper]; [Video]

Chunxia Xiao, Hongbo Fu, and Chiew-Lan Tai. Hierarchical aggregation for efficient shape extraction. The Visual Computer (TVC). 25(3): 267-278. February 2009. Abstract: This paper presents an efficient framework which supports both automatic and interactive shape extraction from surfaces. Unlike most existing hierarchical shape extraction methods, which are based on computationally expensive top-down algorithms, our framework employs a fast bottom-up hierarchical method with multiscale aggregation. We introduce a geometric similarity measure, which operates at multiple scales and guarantees that a hierarchy of high-level features is automatically found through local adaptive aggregation. We also show that the aggregation process allows easy incorporation of user-specified constraints, enabling users to interactively extract features of interest. Neither our automatic nor our interactive shape extraction method requires explicit connectivity information, and both are thus applicable to unorganized point sets. Additionally, with the hierarchical feature representation, we design a simple and effective method to perform partial shape matching, allowing efficient search for self-similar features across the entire surface. Experiments show that our methods robustly extract visually meaningful features and are significantly faster than related methods. Paper: [Online First]

Xiangye Xiao, Qiong Luo, Dan Hong, Hongbo Fu, Xing Xie, and Wei-Ying Ma. Browsing on small displays by transforming web pages into hierarchically structured sub-pages. ACM Transactions on the Web (TWEB). 2009. Article No. 4.

Kun Xu, Yuntao Jia, Hongbo Fu, Shimin Hu, and Chiew-Lan Tai. Spherical piecewise constant basis functions for all-frequency precomputed radiance transfer.
IEEE Transactions on Visualization and Computer Graphics (TVCG). 14(2): 454-467, March/April 2008. (IEEE TVCG Featured Article) [citation] Abstract: This paper presents a novel basis function, called the spherical piecewise constant basis function (SPCBF), for precomputed radiance transfer. SPCBFs have several desirable properties: rotatability, the ability to represent all-frequency signals, and support for efficient multiple products. By partitioning the illumination sphere into a set of subregions, and associating each subregion with an SPCBF valued 1 inside the region and 0 elsewhere, we precompute the light coefficients using the resulting SPCBFs. At run time, we approximate the BRDF and visibility coefficients with the same set of SPCBFs through fast lookups of a summed-area table (SAT) and a visibility distance table (VDT), respectively. SPCBFs enable new effects such as object rotation in all-frequency rendering of dynamic scenes and on-the-fly BRDF editing under rotating environment lighting. With graphics hardware acceleration, our method achieves real-time frame rates. Keywords: spherical piecewise constant basis functions, real-time rendering, precomputed radiance transfer [Paper]; [Video] Chunxia Xiao, Shu Liu, Hongbo Fu, Chengchun Lin, Chengfang Song, Zhiyong Huang, Fazhi He, and Qunsheng Peng. Video completion and synthesis. Journal of Computer Animation and Virtual Worlds (CAVW): Special Issue of Computer Animation & Social Agents (CASA 2008). 19(3-4): 341-353, 2008. Hongbo Fu, Oscar Kin-Chung Au, and Chiew-Lan Tai. Effective derivation of similarity transformations for implicit Laplacian mesh editing. Computer Graphics Forum (CGF). 26(1): 34-45, March 2007. (a previous version appeared as a technical report) [citation] Abstract: Laplacian coordinates as a local shape descriptor have been employed in mesh editing. As they are encoded in the global coordinate system, they need to be transformed locally to reflect the changed local features of the deformed surface. 
We present a novel implicit Laplacian editing framework which is linear and effectively captures local rotation information during editing. Directly representing rotation with respect to vertex positions in 3D space leads to a nonlinear system. Instead, we first compute the affine transformations implicitly defined for all the Laplacian coordinates by solving a large sparse linear system, and then extract the rotation and uniform scaling information from each solved affine transformation. Unlike existing differential-based mesh editing techniques, our method produces visually pleasing deformation results under large-angle rotations or large-scale translations of handles. Additionally, to demonstrate the advantage of our editing framework, we introduce a new intuitive editing technique, called configuration-independent merging, which produces the same merging result independent of the relative position, orientation, and scale of the input meshes. Keywords: mesh editing, similarity invariant, Laplacian coordinates, configuration-independent, mesh deformation, mesh merging Project page
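The extraction step above — recovering rotation and uniform scaling from each solved affine transformation — is commonly done with an SVD-based polar decomposition. The sketch below is a minimal illustration of that standard idea under our own naming, not necessarily the paper's exact formulation; NumPy is assumed:

```python
import numpy as np

def closest_similarity(A):
    """Project an affine matrix A onto the nearest similarity transform
    (rotation + uniform scale) via the SVD-based polar decomposition."""
    U, S, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    s = S.mean()                      # uniform scale: mean singular value
    return s * R

# toy example: a 30-degree rotation scaled by 2, perturbed by a small shear
theta = np.deg2rad(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
A = 2.0 * rot + np.array([[0.0, 0.1], [0.0, 0.0]])
T = closest_similarity(A)
scale = float(np.linalg.norm(T[:, 0]))  # column norm of s*R equals s
```

The recovered `scale` stays close to 2 despite the shear perturbation, and the result has exactly orthogonal columns, which is what "similarity transformation" requires.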
 Oscar Kin-Chung Au, Chiew-Lan Tai, Ligang Liu, and Hongbo Fu. Dual Laplacian editing for meshes. IEEE Transactions on Visualization and Computer Graphics (TVCG). 12(3): 386-395, May/June 2006. (a previous version appeared as a technical report) [citation] Abstract: Recently, differential information as local intrinsic feature descriptors has been used for mesh editing. Given certain user input as constraints, a deformed mesh is reconstructed by minimizing the changes in the differential information. Since the differential information is encoded in a global coordinate system, it must somehow be transformed to fit the orientations of details in the deformed surface; otherwise distortion will appear. We observe that visually pleasing deformed meshes should preserve both local parameterization and geometry details. We propose to encode these two types of information in the dual mesh domain due to the simplicity of the neighborhood structure of dual mesh vertices. Both sets of information are nondirectional and nonlinearly dependent on the vertex positions. Thus, we present a novel editing framework that iteratively updates both the primal vertex positions and the dual Laplacian coordinates to progressively reduce distortion in parameterization and geometry. Unlike previous related work, our method can produce visually pleasing deformations with simple user interaction, requiring only the handle positions, not local frames at the handles. Keywords: mesh editing, local shape representation, click-and-drag interface, shape preserving, dual Laplacian Project page
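Both Laplacian editing frameworks above reconstruct the deformed mesh by minimizing the change in the differential coordinates subject to handle constraints. In generic notation (ours, not taken from either paper), with Laplacian matrix $L$, target differential coordinates $\boldsymbol{\delta}$, handle selection matrix $C$, handle positions $\mathbf{c}$, and a weight $\lambda$, one solve amounts to a linear least-squares problem:

```latex
\mathbf{x}^{*} \;=\; \arg\min_{\mathbf{x}} \;
\lVert L\mathbf{x} - \boldsymbol{\delta} \rVert^{2}
\;+\; \lambda \, \lVert C\mathbf{x} - \mathbf{c} \rVert^{2},
\qquad
\bigl( L^{\top}L + \lambda\, C^{\top}C \bigr)\, \mathbf{x}^{*}
\;=\; L^{\top}\boldsymbol{\delta} + \lambda\, C^{\top}\mathbf{c}.
```

The normal-equation matrix is sparse and stays fixed while handles move, so it can be factorized once (e.g., by Cholesky) and reused; iterative schemes such as the dual Laplacian framework re-solve this system with an updated $\boldsymbol{\delta}$ at each iteration.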

## Conference and Exhibition

 Yilan Chen*, Hongbo Fu, and Kin-Chung Au. A multi-level sketch-based interface for decorative pattern exploration. SIGGRAPH Asia 2016 Technical Briefs. Macao, Dec. 5-8, 2016. Abstract: Despite the extensive usage of decorative patterns in art and design, there is a lack of intuitive ways to find a certain type of pattern. In this paper, we present a multi-level sketch-based interface that incorporates low-level geometrical features and high-level structural features, namely reflection, rotation, and translation symmetries, to support decorative pattern exploration at different levels of detail. Four brush tools are designed for users to specify any combination of such features and compose a hybrid search query. The results of a pilot study show that users are able to perform pattern retrieval tasks using our system easily and effectively. Lei Li*, Zhe Huang*, Changqing Zou*, Chiew-Lan Tai, Rynson Lau, Hao Zhang, Ping Tan, and Hongbo Fu. Model-driven sketch reconstruction with structure-oriented retrieval. SIGGRAPH Asia 2016 Technical Briefs. Macao, Dec. 5-8, 2016. Abstract: We propose an interactive system that aims at lifting a 2D sketch into a 3D sketch with the help of existing models in shape collections. The key idea is to exploit part structure for shape retrieval and sketch reconstruction. We adopt sketch-based shape retrieval and develop a novel matching algorithm which considers structure in addition to traditional shape features. From a list of retrieved models, users select one to serve as a 3D proxy, providing abstract 3D information. Then our reconstruction method transforms the sketch into 3D geometry by back-projection, followed by an optimization procedure based on the Laplacian mesh deformation framework. Preliminary evaluations show that our retrieval algorithm is more effective than a state-of-the-art method and that users can create interesting 3D forms from sketches without precise drawing skills. 
[Paper]; [Video] Pui Chung Wong*, Hongbo Fu, and Kening Zhu. Back-Mirror: back-of-device one-handed interaction on smartphones. SIGGRAPH Asia 2016 Symposium on Mobile Graphics and Interactive Applications. Presentation and Demonstrations. Macao, Dec. 5-8, 2016. (Best Demo Honorable Mention) Abstract: We present Back-Mirror, a low-cost camera-based approach for widening the interaction space on the back surface of a smartphone by using mirror reflection. Back-Mirror consists of two main parts: a smartphone accessory with a mirror that reflects the back surface to the rear-facing camera of the phone, and a computer-vision algorithm for gesture recognition based on the visual pattern on the back surface. Our approach captures the finger position on the back surface and tracks finger movement at a higher resolution than previous methods. We further designed a set of intuitive gestures that can be recognized by Back-Mirror, including swiping up, down, left, and right, tapping left, middle, and right, and holding gestures. Furthermore, we created back-of-device applications, such as a game, a media player, a photo gallery, and an unlock mechanism, allowing users to experience Back-Mirror gestures in real-life scenarios. Qingkun Su*, Kin-Chung Au, Pengfei Xu, Hongbo Fu, and Chiew-Lan Tai. 2D-Dragger: unified touch-based target acquisition with constant effective width. Mobile HCI 2016. Florence, September 6-9, 2016. Abstract: In this work we introduce 2D-Dragger, a unified touch-based target acquisition technique that enables easy access to small targets in dense regions or distant targets on screens of various sizes. The effective width of a target is constant with our tool, allowing a fixed scale of finger movement for capturing a new target. Our tool is thus insensitive to the distribution and size of the selectable targets, and consistently works well for screens of different sizes, from mobile to wall-sized screens. 
Our user studies show that overall 2D-Dragger performs the best compared to the state-of-the-art techniques for selecting both near and distant targets of various sizes in different densities. [Paper] Quoc Huy Phan*, Jingwan Lu, Paul Asente, Antoni B. Chan, and Hongbo Fu. Patternista: Learning element style compatibility and spatial composition for ring-based layout decoration. Expressive 2016. Lisbon, May 7-9, 2016. Abstract: Creating aesthetically pleasing decorations for daily objects is a task that requires a deep understanding of multiple aspects of object decoration, including color, composition, and element compatibility. A designer needs a unique aesthetic style to create artworks that stand out. Although specific subproblems have been studied before, the overall problem of design recommendation and synthesis is still relatively unexplored. In this paper, we propose a flexible data-driven framework to jointly consider two aspects of this design problem: style compatibility and spatial composition. We introduce a ring-based layout model capable of capturing decorative compositions for objects like plates, vases, and pots. Our layout representation allows the use of the hidden Markov model (HMM) technique to make intelligent design suggestions for each region of a target object in a sequential fashion. We conducted both quantitative and qualitative experiments to evaluate the framework and obtained favorable results. [Paper] Quoc Huy Phan*, Hongbo Fu, and Antoni B. Chan. Look closely: Learning exemplar patches for recognizing textiles from product images. ACCV 2014. Singapore, Nov 1-5, 2014. Abstract: The resolution of product images is becoming higher due to the rapid development of digital cameras and the Internet. Higher-resolution images expose novel feature relationships that did not exist before. 
For instance, from a large image of a garment, one can observe the overall shape, the wrinkles, and micro-level details such as sewing lines and weaving patterns. The key idea of our work is to combine features obtained at such largely different scales to improve textile recognition performance. Specifically, we develop a robust semi-supervised model that exploits both micro textures and macro deformable shapes to select representative patches from product images. The selected patches are then used as inputs to conventional texture recognition methods to perform texture recognition. We show that, by learning from human-provided image regions, the method can suggest more discriminative regions that lead to higher categorization rates (+5-7%). We also show that our patch selection method significantly improves the performance of conventional texture recognition methods that usually rely on dense sampling. Our dataset of labeled textile images will be released for further investigation in this emerging field. [Paper] Chun Kit Tsui*, Chi Hei Law*, and Hongbo Fu. One-man Orchestra: conducting smartphone orchestra. SIGGRAPH Asia 2014, Emerging Technologies. Shenzhen, December 2014. (Best Demo Award) Abstract: This work presents a new platform for performing one-man orchestra. The conductor is the only human involved, who uses traditional bimanual conducting gestures to interactively direct the performance of smartphones instead of human performers in a real-world orchestra. Each smartphone acts as a virtual performer who plays a certain musical instrument, such as the piano or violin. Our work not only allows ordinary people to experience music conducting but also provides a training platform so that students can practice music conducting with a unique listening experience. Project page Jingbo Liu, Hongbo Fu, and Chiew-Lan Tai. Dynamic sketching: simulating the process of observational drawing. CAe '14: Proceedings of the Workshop on Computational Aesthetics. 
Vancouver, August 2014. Abstract: The creation process of a drawing provides a vivid visual progression, allowing the audience to better comprehend the drawing. It also enables numerous stroke-based rendering techniques. In this work we tackle the problem of simulating the process of observational drawing, that is, how people draw lines when sketching a given 3D model. We present a multi-phase drawing framework and the concept of sketching entropy, which provides a unified way to model stroke selection and ordering, both within and across phases. We demonstrate the proposed ideas for the sketching of organic objects and show a visually plausible simulation of their dynamic sketching process. [Paper]; [Video] Hongbo Fu, Xiaoguang Han*, and Phan Quoc Huy*. Data-driven suggestions for portrait posing. ACM SIGGRAPH Asia 2013, Technical Briefs, Hong Kong, November, 2013. Hongbo Fu, Xiaoguang Han*, and Phan Quoc Huy*. Data-driven suggestions for portrait posing. ACM SIGGRAPH Asia 2013, Emerging Technologies, Hong Kong, November, 2013. Best Demo Award. One of the four program highlights among all the accepted works. Abstract: This work introduces an easy-to-use creativity support tool for portrait posing, which is an important but challenging problem in portrait photography. While it is well known that a collection of sample poses is a source of inspiration, manual browsing is currently the only option to identify a desired pose from a possibly large collection of poses. With our tool, a photographer is able to easily retrieve desired reference poses as guidance or stimulate creativity. We show how our data-driven suggestions can be used to either refine the current pose of a subject or explore new poses. Our pilot study indicates that unskilled photographers find our data-driven suggestions easy to use and useful, though the role of our suggestions in improving aesthetic quality or pose diversity still needs more investigation. 
Our work takes the first step of using consumer-level depth sensors towards more intelligent cameras for computational photography. Wing Ho Andy Li* and Hongbo Fu. BezelCursor: Bezel-initiated cursor for one-handed target acquisition on mobile touch screens. SIGGRAPH Asia 2013, Symposium on Mobile Graphics and Interactive Applications (Demonstrations). Hong Kong, November, 2013. Abstract: We present BezelCursor, a novel one-handed thumb interaction technique for target acquisition on mobile touch screens of various sizes. Our technique combines bezel-initiated interaction and gestural pointing to solve the problem of limited screen accessibility afforded by the thumb. With a fixed, comfortable grip of a mobile touch device, a user may employ our tool to easily and quickly access a target located anywhere on the screen, using a single fluid action. Unlike existing technologies, our technique requires no explicit mode switching to invoke and can be smoothly used together with commonly adopted interaction styles such as direct touch and dragging. A user study shows that the performance of our technique is comparable to or even better than that of the state-of-the-art techniques, which, however, suffer from various problems such as explicit mode switching, finger occlusion and/or limited accessibility. Project page Lu Chen, Hongbo Fu, Wing Ho Andy Li*, and Chiew-Lan Tai. Scalable maps of random dots for middle-scale locative games. IEEE Virtual Reality 2013, Orlando, Florida, USA, March, 2013. Abstract: In this work we present a new scalable map for middle-scale locative games. Our map is built upon the recent development of fiducial markers, specifically, the random dot markers. We propose a simple solution, i.e., using a grid of compound markers, to address the scalability problem. Our highly scalable approach is able to generate a middle-scale map on which multiple players can stand and position themselves via mobile cameras in real time. 
We show how a classic computer game can be effectively adapted to our middle-scale gaming platform. Wing Ho Andy Li* and Hongbo Fu. Augmented reflection of reality. SIGGRAPH 2012 Emerging Technologies, Los Angeles, USA, August, 2012. Abstract: Unlike existing augmented-reality techniques, which typically augment the real world surrounding a user with virtual objects and visualize those effects using various see-through displays, this system focuses on augmenting the user's full body. A half-silvered mirror combines the user's reflection with synthetic data to provide a mixed world. With a live and direct view of the user and the surrounding environment, the system allows the user to intuitively control virtual objects (for example, virtual drums) via the augmented reflection. Bin Bao* and Hongbo Fu. Vectorizing line drawings with near-constant line width. IEEE International Conference on Image Processing (ICIP 2012), Orlando, Florida, USA, September-October, 2012. Abstract: Many line drawing images are composed of lines with near-constant width. Such line width information has seldom been used in the vectorization process. In this work, we show that by enforcing the near-constant line width constraint, we are able to produce visually more pleasing vectorization results. To this end, we develop a tracing-based approach, allowing dynamic validation of the line width constraint. The key here is to derive correct tracing directions, which are determined based on an automatically estimated orientation field, shape smoothness, and the near-constant line width assumption. We have examined our algorithm on a variety of line drawing images with different shape and topology complexity. We show that our solution outperforms state-of-the-art vectorization software systems, including WinTopo and Adobe Illustrator, especially at regions where multiple lines meet and are thus difficult to locally distinguish from each other. Wei-Lwun Lu, James J. Little, Alla Sheffer, and Hongbo Fu. 
Deforestation: Extracting 3D bare-earth surface from airborne LiDAR data. The Fifth Canadian Conference on Computer and Robot Vision (CRV 2008), pages 203-210, Windsor, Canada, May 2008. Abstract: Bare-earth identification selects points from a LiDAR point cloud so that they can be interpolated to form a representation of the ground surface from which structures, vegetation, and other cover have been removed. We triangulate the point cloud and segment the triangles into flat and steep triangles using a Discriminative Random Field (DRF) that uses a data-dependent label smoothness term. Regions are classified into ground and non-ground based on steepness in the regions, and ground points are selected as points on ground triangles. Various post-processing steps are used to further identify flat regions as rooftops and treetops, and to eliminate isolated features that affect the surface interpolation. The performance of our algorithm is evaluated in its effectiveness at labeling ground points and, more importantly, at determining the extracted bare-earth surface. Extensive comparison shows the effectiveness of the strategy at selecting ground points, leading to a good fit in the triangulated mesh derived from the ground points. [Paper] Hongbo Fu, Yichen Wei, Chiew-Lan Tai, and Long Quan. Sketching hairstyles. EUROGRAPHICS Workshop on Sketch-Based Interfaces and Modeling (SBIM 2007), pages 31-36, UC Riverside, USA, 2007. [citation] Abstract: This paper presents an intuitive sketching interface for interactive hairstyle design, made possible by an efficient numerical updating scheme. The user portrays the global shape of a desired hairstyle through a few 3D style curves that are manipulated by interactively sketching freeform strokes. Our approach is based on a vector field representation that solves a sparse linear system with the style curves acting as boundary constraints. 
The key observation is that the specific sparseness pattern of the linear system enables an efficient incremental numerical updating scheme. This gives rise to a sketching interface that provides interactive visual feedback to the user. Interesting hairstyles can be easily created in minutes. Keywords: vector field editing, Cholesky modification, hairstyle sketching [Paper] Xiaohuang Huang, Hongbo Fu, Oscar Kin-Chung Au, and Chiew-Lan Tai. Optimal boundaries for Poisson mesh merging, ACM Solid and Physical Modeling Symposium 2007 (SPM 2007), pages 35-40, Beijing, China, June 2007. (Acceptance rate: 26.6%) [citation] Abstract: Existing Poisson mesh editing techniques mainly focus on designing schemes to propagate deformation from a given boundary condition to a region of interest. Although solving the Poisson system in the least-squares sense distributes the distortion errors over the entire region of interest, large deformation in the boundary condition might still lead to severely distorted results. We propose to optimize the boundary condition (the merging boundary) for Poisson mesh merging. The user needs only to casually mark a source region and a target region. Our algorithm automatically searches for an optimal boundary condition within the marked regions such that the change of the found boundary during merging is minimal in terms of similarity transformation. Experimental results demonstrate that our merging tool is easy to use and produces visually better merging results than unoptimized techniques. Keywords: mesh merging, Poisson mesh editing, optimal boundaries [Paper] Xiangye Xiao, Qiong Luo, Dan Hong, and Hongbo Fu. Slicing*-tree based web page transformation for small displays. ACM Fourteenth Conference on Information and Knowledge Management (CIKM 2005), Bremen, Germany, 2005. (Journal version appears in ACM Transactions on the Web) [citation] Hongbo Fu, Chiew-Lan Tai, and Oscar Kin-Chung Au. 
Morphing with Laplacian coordinates and spatial-temporal texture, In Proceedings of Pacific Graphics 2005 (PG 2005), pages 100-102, Macao, China, October 2005. (Acceptance rate: 35.5%) [citation] Abstract: Given 2D or 3D shapes, the objective of morphing is to create a sequence of gradually changed shapes and to keep individual shapes as visually pleasing as possible. In this paper, we present a morphing technique for 2D planar curves (open or closed) by coherently interpolating the source and target Laplacian coordinates. Although the Laplacian coordinates capture the geometric features of a shape, they are not rotation-invariant. By applying as-rigid-as-possible transformations with rotation coherence constraints to the Laplacian coordinates, we make the intermediate morphing shapes highly appealing. Our method successfully avoids local self-intersections. We also propose to interpolate the textures within simple closed curves using a spatial-temporal structure. In existing texture morphing techniques, textures are encoded by either skeleton structures or triangulations. Therefore, the morphing results depend on the quality of these skeleton structures or triangulations. Given two simple closed curves and their interpolated shapes, our method automatically finds a one-to-one mapping between the source and target textures without any skeleton or triangulation and guarantees that neighboring pixels morph coherently. Keywords: Laplacian coordinates, spatial-temporal texture, shape morphing, as-rigid-as-possible [Paper]
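As a rough illustration of the Laplacian-coordinate machinery the morphing method builds on, the sketch below computes uniform Laplacian coordinates of two closed 2D curves, blends them linearly, and reconstructs an intermediate curve by least squares with a soft positional anchor. This naive linear blend deliberately omits the paper's as-rigid-as-possible transformations and rotation-coherence constraints; all function names and weights here are ours:

```python
import numpy as np

def laplacian(n):
    """Uniform Laplacian matrix of a closed polyline with n vertices."""
    L = np.eye(n)
    for i in range(n):
        L[i, (i - 1) % n] = -0.5
        L[i, (i + 1) % n] = -0.5
    return L

def morph(src, dst, t, w=10.0):
    """Blend the Laplacian coordinates of two closed curves at time t
    and reconstruct the intermediate curve by linear least squares,
    softly anchoring vertex 0 to pin down the free translation."""
    n = len(src)
    L = laplacian(n)
    delta = (1 - t) * (L @ src) + t * (L @ dst)   # blended coordinates
    anchor = (1 - t) * src[0] + t * dst[0]
    A = np.vstack([L, w * np.eye(n)[:1]])         # Laplacian rows + anchor row
    b = np.vstack([delta, w * anchor[None, :]])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# toy example: morph an octagon into a rotated copy of itself
ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
src = np.c_[np.cos(ang), np.sin(ang)]
dst = np.c_[np.cos(ang + 0.5), np.sin(ang + 0.5)]
mid = morph(src, dst, 0.5)
```

At `t = 0` the reconstruction reproduces the source curve exactly, since its own Laplacian coordinates and anchor are consistent; intermediate `t` values give the blended shapes.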
 Oscar Kin-Chung Au, Chiew-Lan Tai, Hongbo Fu, and Ligang Liu. Mesh editing with curvature flow Laplacian. Symposium on Geometry Processing 2005 (SGP 2005), Vienna, Austria, July 2005 (Poster). [citation] Introduction: Differential coordinates are essentially vectors encoded in the global coordinate system. Since the local features on a mesh are deformed and rotated during editing, the differential coordinates must somehow be transformed to match the desired new orientations; otherwise distortion such as shearing and stretching will occur. This transformation problem is basically a chicken-and-egg problem: the reconstruction of the deformed surface requires properly oriented differential coordinates, while the reorientation of these coordinates depends on the unknown deformed mesh. We present an iterative Laplacian-based editing framework to solve this transformation problem. The only user inputs required are the positions of the handles, not their local frames. Thus our system supports simple point-handle editing. Our iterative updating process finds the best orientations of local features, including the orientations at the point handles. [Paper]; [Poster]
 Hongbo Fu, Chiew-Lan Tai, and Hongxin Zhang. Topology-free cut-and-paste editing over meshes. Geometric Modeling and Processing 2004 (GMP 2004), pages 173-182, Beijing, China, April 2004. (Acceptance rate: 23.3%) [citation] Abstract: Existing cut-and-paste editing methods over meshes are inapplicable to regions with non-zero genus. To overcome this drawback, we propose a novel method in this paper. First, a base surface passing through the boundary vertices of the selected region is constructed using a boundary triangulation technique. Considering the connectivity between neighboring vertices, a new detail encoding technique is then presented based on surface parameterization. Finally, the detail representation is transferred onto the target surface via the base surface. This strategy of creating a base surface as a detail carrier allows us to paste features of non-zero genus onto the target surface. By taking the physical relationship of adjacent vertices into account, our detail encoding method produces more natural and less distorted results. Therefore, our method not only eliminates the dependence on the topology of the selected feature, but also effectively reduces distortion during pasting. Keywords: topology-free, cut-and-paste, mesh editing [Paper]

## Book & Thesis

Hongbo Fu. Advanced programming in Delphi 6.0, Publishing House of Electronics Industry, March 2002, ISBN 7-900084-62-2 (in Chinese).
Brief introduction: This book presents the essence of Delphi programming through a variety of advanced examples. The examples focus on the development of multimedia and Internet applications, covering, for example, OpenGL, Indy components, XML, Web Broker, and WebSnap techniques.

Hongbo Fu. Differential methods for intuitive 3D shape modeling, Ph.D. Thesis, 20 July 2007.
[Thesis Committee]; [Thesis (PDF: 5.7M)]
Hongbo Fu. Magnetocardiography signal denoising techniques. Undergraduate Thesis, July 2002.

## Technical Report

 Wei-Lwun Lu, Kevin P. Murphy, James J. Little, Alla Sheffer, and Hongbo Fu. Coupled CRFs for estimating the underlying ground surface from airborne LiDAR data, UBC CS TR-2008-05, May 2008. Hongbo Fu. Differential methods for intuitive 3D shape modeling, PhD Thesis Proposal, 21 May 2007. Oscar Kin-Chung Au, Chiew-Lan Tai, Hongbo Fu, and Ligang Liu. Mesh editing with curvature flow Laplacian operator, Technical report, HKUST-CS05-10, July 2005. [citation] Abstract: Recently, differential information as local intrinsic feature descriptors has been used for mesh editing. Given certain user input as constraints, a deformed mesh is reconstructed by minimizing the changes in the differential information. Since the differential information is encoded in the global coordinate system, it must somehow be transformed to fit the orientation of details in the deformed surface; otherwise distortion will appear. We observe that visually desirable deformed meshes should preserve both local parameterization and geometry details. To find suitable representations for these two types of information, we exploit certain properties of the curvature flow Laplacian operator. Specifically, we consider the coefficients of the Laplacian operator as the parameterization information and the magnitudes of the Laplacian coordinates as the geometry information. Both sets of information are non-directional and non-linearly dependent on the vertex positions. Thus, we propose a new editing framework that iteratively updates both the vertex positions and the Laplacian coordinates to reduce distortion in parameterization and geometry. Our method can produce visually pleasing deformation with simple user interaction, requiring only the handle positions, not the local frames at the handles. In addition, since the magnitudes of the Laplacian coordinates approximate the integrated mean curvatures, our framework is useful for modifying mesh geometry via updating the curvature field. 
We demonstrate this use in spherical parameterization and non-shrinking smoothing.
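The iterative update described in the curvature-flow Laplacian report can be summarized schematically (notation ours, a reading of the abstract rather than the report's exact algorithm): let $m_i$ be the stored magnitude of the $i$-th Laplacian coordinate, approximating the integrated mean curvature, and $\mathbf{n}_i(\cdot)$ the vertex normal of the current iterate. Each iteration solves for positions from the current coordinates and then re-orients the coordinates from the solved mesh while keeping the stored magnitudes:

```latex
\mathbf{x}^{(k+1)} \;=\; \arg\min_{\mathbf{x}} \;
\lVert L\,\mathbf{x} - \boldsymbol{\delta}^{(k)} \rVert^{2}
\quad \text{subject to handle constraints},
\qquad
\boldsymbol{\delta}_i^{(k+1)} \;=\; m_i \,\mathbf{n}_i\!\bigl(\mathbf{x}^{(k+1)}\bigr),
```

iterating until the distortion in both parameterization (the Laplacian coefficients) and geometry (the coordinate magnitudes) stops decreasing.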
 Hongbo Fu and Chiew-Lan Tai. Mesh editing with affine-invariant Laplacian coordinates, Technical report, HKUST-CS05-01, January 2005. Abstract: Differential coordinates as an intrinsic surface representation capture the geometric details of a surface. However, differential coordinates alone cannot achieve desirable editing results, because they are not affine-invariant. In this paper, we present a novel method that makes the Laplacian coordinates completely affine-invariant during editing. For each vertex of a surface to be edited, we compute the Laplacian coordinate and implicitly define a local affine transformation that is dependent on the unknown edited vertices. During editing, both the resulting surface and the implicit local affine transformations are solved simultaneously through a constrained optimization. The underlying mathematics of our method is a set of linear partial differential equations (PDEs) with a generalized boundary condition. The main computation involved comes from factorizing the resulting sparse system of linear equations, which is performed only once. After that, back substitutions are executed to interactively respond to user manipulations. We propose a new editing technique, called pose-independent merging, to demonstrate the advantages of the affine-invariant Laplacian coordinates. In the same framework, large-scale mesh deformation and pose-dependent mesh merging are also presented. Hongbo Fu. A survey of editing techniques on surface models and point-based models, PhD Qualifying Examination, 19 December 2003.

My co-authors (in alphabetical order)

 Hong Kong: Oscar Kin-Chung Au (CityU), Long Quan (HKUST), Chiew-Lan Tai (HKUST). Mainland China: Shimin Hu (Tsinghua), Ligang Liu (ZJU), Qunsheng Peng (ZJU), Yichen Wei (MSRA). Taiwan: Tong-Yee Lee (NCKU), Yu-Shuen Wang (NCKU). Canada: Vladislav Kraevoy (UBC), Kevin Murphy (UBC). Germany: Hans-Peter Seidel (MPII). Israel: Daniel Cohen-Or (Tel Aviv). United States: Yuntao Jia (UIUC), Olga Sorkine (NYU).