## SIGGRAPH (11), SIGCHI (1), UIST (1)

## Journal

Quoc Huy Phan*, Hongbo Fu, and Antoni Chan. Color Orchestra: Ordering Color Palettes for Interpolation and Prediction. IEEE Transactions on Visualization and Computer Graphics (TVCG). Accepted for publication. Abstract: A color theme or color palette can deeply influence the quality and the feeling of a photograph or a graphical design. Although color palettes may come from different sources such as online crowd-sourcing, photographs, and graphical designs, in this paper we consider color palettes extracted from fine art collections, which we believe to be an abundant source of stylistic and unique color themes. We aim to capture the color styles embedded in these collections by means of statistical models and to build practical applications upon these models. As artists often use their personal color themes in their paintings, making these palettes appear frequently in the dataset, we employed density estimation to capture the characteristics of palette data. Via density estimation, we carried out various predictions and interpolations on palettes, which led to promising applications such as photo-style exploration, real-time color suggestion, and enriched photo recolorization. It was, however, challenging to apply density estimation to palette data, as palettes often come as unordered sets of colors, which makes it difficult to use conventional metrics on them. To this end, we developed a divide-and-conquer sorting algorithm to rearrange the colors in the palettes in a coherent order, which allows meaningful interpolation between color palettes. To confirm the performance of our model, we also conducted quantitative experiments on datasets of digitized paintings collected from the Internet and received favorable results.

Sheng Yang, Jie Xu, Kang Chen, and Hongbo Fu. View suggestion for interactive segmentation of indoor scenes. Computational Visual Media. 3: 131. 2017. Abstract: Point cloud segmentation is a fundamental problem.
Due to the complexity of real-world scenes and the limitations of 3D scanners, interactive segmentation is currently the only way to cope with all kinds of point clouds. However, interactively segmenting complex and large-scale scenes is very time-consuming. In this paper, we present a novel interactive system for segmenting point cloud scenes. Our system automatically suggests a series of camera views, in which users can conveniently specify segmentation guidance. In this way, users may focus on specifying segmentation hints instead of manually searching for desirable views of unsegmented objects, thus significantly reducing user effort. To achieve this, we introduce a novel view preference model, which is based on a set of dedicated view attributes, with weights learned from a user study. We also introduce support relations for both graph-cut-based segmentation and finding similar objects. Our experiments show that our segmentation technique helps users quickly segment various types of scenes, outperforming alternative methods. [Paper, Video]

Shi-Sheng Huang, Hongbo Fu, Lin-Yu Wei, and Shi-Min Hu. Support Substructures: Support-induced Part-level Structural Representation. IEEE Transactions on Visualization and Computer Graphics (TVCG). 22(8): 2024-2036. Aug 2016. Abstract: In this work we explore a support-induced structural organization of object parts. We introduce the concept of support substructures, which are special subsets of object parts with support and stability. A bottom-up approach is proposed to identify such substructures in a support relation graph. We apply the derived high-level substructures to part-based shape reshuffling between models, resulting in nontrivial, functionally plausible model variations that are difficult to achieve with the symmetry-induced substructures of the state of the art.
We also show how to automatically or interactively turn a single input model into new functionally plausible shapes by structure rearrangement and synthesis, enabled by support substructures. To the best of our knowledge, no single existing method has been designed for all these applications. [Paper, Video]

Qiang Fu, Xiaowu Chen, Xiaoyu Su, Jia Li, and Hongbo Fu. Structure-adaptive Shape Editing for Man-made Objects. Computer Graphics Forum (Proceedings of Eurographics 2016). 35(2): 27-36. May 9-13, 2016. Abstract: One of the challenging problems for shape editing is to adapt shapes with diversified structures to various editing needs. In this paper we introduce a shape editing approach that automatically adapts the structure of a shape being edited with respect to user inputs. Given a category of shapes, our approach first classifies them into groups based on their constituent parts. The group-sensitive priors, including both inter-group and intra-group priors, are then learned through statistical structure analysis and multivariate regression. By using these priors, the inherent characteristics and typical variations of shape structures can be well captured. Based on such group-sensitive priors, we propose a framework for real-time shape editing, which adapts the structure of the shape to continuous user editing operations. Experimental results show that the proposed approach is capable of both structure-preserving and structure-varying shape editing. [Paper, Video]

Wing Ho Andy Li*, Kening Zhu, and Hongbo Fu. Exploring the design space of bezel-initiated gestures for mobile interaction. International Journal of Mobile Human Computer Interaction. Volume 9, Issue 1, Jan. 2017. Abstract: The bezel enables useful gestures supplementary to primary surface gestures for mobile interaction. However, existing works mainly focus on researcher-designed gestures, which utilize only a subset of the design space.
To explore the design space, we present a modified elicitation study, during which the participants designed bezel-initiated gestures for four sets of tasks. Unlike traditional elicitation studies, ours encourages participants to design new gestures. We do not focus on individual tasks or gestures, but perform a detailed analysis of the collected gestures as a whole, and provide findings that could benefit designers of bezel-initiated gestures. [Paper, Video]

Shi-Sheng Huang, Hongbo Fu, and Shi-Min Hu. Structure guided interior scene synthesis via graph matching. Graphical Models. Volume 85, Pages 46-55, May 2016. Abstract: We present a method for reshuffle-based 3D interior scene synthesis guided by scene structures. Given several 3D scenes, we form each 3D scene as a structure graph associated with a relationship set. Considering both object similarity and relation similarity, we then establish a furniture-object-based matching between scene pairs via graph matching. Such a matching allows us to merge the structure graphs into a unified structure, i.e., an Augmented Graph (AG). Guided by the AG, we perform scene synthesis by reshuffling objects through three simple operations, i.e., replacing, growing, and transfer. A synthesis compatibility measure considering the environment of the furniture objects is also introduced to filter out poor-quality results. We show that our method is able to generate high-quality scene variations and outperforms the state of the art. [Paper]

Qiang Fu, Xiaowu Chen, Xiaoyu Su, and Hongbo Fu. Natural lines inspired 3D shape re-design. Graphical Models. Volume 85, Pages 1-10, May 2016. Abstract: We introduce an approach for re-designing 3D shapes inspired by natural lines, such as the contours and skeletons extracted from natural objects in images. Designing an artistically creative and visually pleasing model is not easy for novice users.
In this paper, we propose to convert such a design task into a computational procedure. Given a 3D object, we first compare its editable lines with various lines extracted from an image database to explore candidate reference lines. Then a parametric deformation method is employed to reshape the 3D object guided by the reference lines. We show that our approach enables users to quickly create non-trivial and interesting re-designed 3D objects. We also conduct a user study to validate the usability and effectiveness of our approach. [Paper]

Wing Ho Andy Li*, Hongbo Fu, and Kening Zhu. BezelCursor: Bezel-initiated cursor for one-handed target acquisition on mobile touch screens. International Journal of Mobile Human Computer Interaction. Volume 8, Issue 1, Jan-March 2016. Abstract: We present BezelCursor, a novel one-handed thumb interaction technique for target acquisition on mobile touch screens of various sizes. Our technique combines bezel-initiated interaction and pointing gestures to solve the problem of limited screen accessibility afforded by the thumb. With a fixed, comfortable grip on a mobile touch device, a user may employ our tool to easily and quickly access a target located anywhere on the screen, using a single fluid action. Unlike existing technologies, our technique requires no explicit mode switching to invoke and can be smoothly used together with commonly adopted interaction styles such as direct touch and dragging. Our user study shows that BezelCursor requires less grip adjustment, and is more accurate or faster than state-of-the-art techniques when using a fixed secure grip. Project page

Quoc Huy Phan*, Hongbo Fu, and Antoni Chan. FlexyFont: Learning transferring rules for flexible typeface synthesis. Computer Graphics Forum (Proceedings of Pacific Graphics 2015). 34(7): 245-256. Oct. 2015. Abstract: Maintaining consistent styles across glyphs is an arduous task in typeface design.
In this work we introduce FlexyFont, a flexible tool for synthesizing a complete typeface that has a consistent style with a given small set of glyphs. Motivated by the key fact that typeface designers often maintain a library of glyph parts to achieve a consistent typeface, we intend to learn part consistency between glyphs of different characters across typefaces. We take a part-assembling approach by first decomposing the given glyphs into semantic parts and then assembling them according to learned sets of transferring rules to reconstruct the missing glyphs. To maintain style consistency, we represent the style of a font as a vector of pairwise part similarities. By learning a distribution over these feature vectors, we are able to predict the style of a novel typeface given only a few examples. We utilize a popular machine learning method as well as retrieval-based methods to quantitatively assess the performance of our feature vector, with favorable results. We also present an intuitive interface that allows users to interactively create novel typefaces with ease. The synthesized fonts can be directly used in real-world design. [Paper, Video]

Xiaoyu Su, Xiaowu Chen, Qiang Fu, and Hongbo Fu. Cross-class 3D object synthesis guided by reference examples. Computers & Graphics (Special Issue on CAD/Graphics 2015). 54: 145-153. Feb. 2016. Best Paper Award. Abstract: Re-combining parts of existing 3D object models is an interesting and efficient technique to create novel shape collections. However, due to the lack of direct part correspondence across different shape families, such data-driven modeling approaches in the literature are limited to the synthesis of in-class shapes only. To address this problem, this paper proposes a novel approach to create 3D shapes via re-combination of cross-category object parts from an existing database of different model families.
In our approach, a reference shape containing multi-functional constituent parts is pre-specified by users, and its design style is then reused to guide the creation process. To this end, the functional substructures are first extracted for the reference shape. After that, we explore a series of category pairs which are potential replacements for the functional substructures of the reference shape, to make interesting variations. We demonstrate our ideas using various examples, and present a user study to evaluate the usability and efficiency of our technique. [Paper]

Changqing Zou*, Shifeng Chen, Hongbo Fu, and Jianzhuang Liu. Progressive 3D reconstruction of planar-faced manifold objects with DRF-based line drawing decomposition. IEEE Transactions on Visualization and Computer Graphics (TVCG). 21(2): 252-263. Feb. 2015. Abstract: This paper presents an approach for reconstructing polyhedral objects from single-view line drawings. Our approach separates a complex line drawing representing a manifold object into a series of simpler line drawings, based on the degree of reconstruction freedom (DRF). We then progressively reconstruct a complete 3D model from these simpler line drawings. Our experiments show that our decomposition algorithm is able to handle complex drawings which are challenging for the state of the art. The advantages of the presented progressive 3D reconstruction method over existing reconstruction methods, in terms of both robustness and efficiency, are also demonstrated. [Paper]

Changqing Zou*, Xiaojiang Peng, Hao Lv, Shifeng Chen, Hongbo Fu, and Jianzhuang Liu. Sketch-based 3-D modeling for piecewise planar objects in single images. Computers & Graphics (Special Issue of SMI 2014). 46(2015): 130-137. Feb. 2015. Abstract: 3-D object modeling from single images has many applications in computer graphics and multimedia.
Most previous 3-D modeling methods which directly recover 3-D geometry from single images require user interaction throughout the whole modeling process. In this paper, we propose a semi-automatic 3-D modeling approach to recover accurate 3-D geometry from a single image of a piecewise planar object with less user interaction. Our approach concentrates on three aspects: 1) requiring only rough sketch input, 2) accurate modeling for a large class of objects, and 3) automatically recovering the hidden parts of an object and providing a complete 3-D model. Experimental results on various objects show that the proposed approach provides a good solution to these three problems. [Paper]

Zhe Huang, Jiang Wang, Hongbo Fu, and Rynson Lau. Structured mechanical collage. IEEE Transactions on Visualization and Computer Graphics (TVCG). 20(7): 1076-1082. July 2014. Abstract: We present a method to build 3D structured mechanical collages consisting of numerous elements from a database, given artist-designed proxy models. The construction is guided by graphic design principles, namely unity, variety, and contrast. Our results are visually more pleasing than those of previous works, as confirmed by a user study. [Paper]; [Video]; [Suppl]; [More results]

Xiaoguang Han*, Hongbo Fu, Hanlin Zheng*, Ligang Liu, and Jue Wang. A video-based interface for hand-driven stop motion animation production. 33(6): 70-81. 2013. Abstract: Stop motion is a well-established animation technique, but its production is often laborious and requires craft skills. We present a new video-based interface which is capable of animating the vast majority of everyday objects in stop motion style in a more flexible and intuitive way. It allows animators to perform and capture motions continuously instead of breaking them into small increments and shooting one still picture per increment.
More importantly, it permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The key component of our system is a two-phase keyframe-based capturing and processing workflow, assisted by computer vision techniques. We demonstrate that our system is efficient even for amateur animators to generate high-quality stop motion animations of a wide variety of objects. Project page

Bin Liao, Chunxia Xiao, Liqiang Jin, and Hongbo Fu. Efficient feature-preserving local projection operator for geometry reconstruction. 45(5): 861-874. Abstract: This paper proposes an efficient and Feature-preserving Locally Optimal Projection operator (FLOP) for geometry reconstruction. Our operator is bilaterally weighted, taking both spatial and geometric feature information into consideration for feature-preserving approximation. We then present an accelerated FLOP operator based on random sampling of the Kernel Density Estimate (KDE), which produces reconstruction results close to those generated using the complete point set data, to within a given accuracy. Additionally, we extend our approach to time-varying data reconstruction, called the Spatial-Temporal Locally Optimal Projection operator (STLOP), which efficiently generates temporally coherent and stable feature-preserving results. The experimental results show that the proposed algorithms are efficient and robust for feature-preserving geometry reconstruction on both static models and time-varying data sets. [Paper]

Jingbo Liu, Oscar Kin-Chung Au, Hongbo Fu, and Chiew-Lan Tai. Two-finger gestures for 6DOF manipulation of 3D objects. Computer Graphics Forum (CGF): special issue of Pacific Graphics 2012. 31(7): 2047-2055. (Acceptance rate: 19.6%) Abstract: Multitouch input devices afford effective solutions for 6DOF (six degrees of freedom) manipulation of 3D objects.
Mainly focusing on large multitouch screens, existing solutions typically require at least three fingers and bimanual interaction for full 6DOF manipulation. However, single-hand, two-finger operations are preferred, especially on portable multitouch devices (e.g., smartphones), as they cause less hand occlusion and free the other hand for necessary tasks like holding the device. Our key idea for full 6DOF control using only two contact fingers is to introduce two manipulation modes and two corresponding gestures, distinguished by the moving characteristics of the two fingers rather than the number of fingers or the directness of individual fingers, as done in previous works. We solve the resulting binary classification problem using a learning-based approach. Our pilot experiment shows that with only two contact fingers and typically unimanual interaction, our technique is comparable to or even better than the state-of-the-art techniques. Project page

Oscar Kin-Chung Au, Chiew-Lan Tai, and Hongbo Fu. Multitouch gestures for constrained transformation of 3D objects. Computer Graphics Forum (CGF): special issue of Eurographics 2012. 31(2): 651-660. (Acceptance rate: 25%) Abstract: 3D transformation widgets allow constrained manipulations of 3D objects and are commonly used in many 3D applications for fine-grained manipulation. Since traditional transformation widgets have been designed mainly for mouse-based systems, they are not user-friendly on multitouch screens. There is little research on how to use the extra input bandwidth of multitouch screens to ease constrained transformation of 3D objects. This paper presents a small set of multitouch gestures which offers seamless control of manipulation constraints (i.e., axis or plane) and modes (i.e., translation, rotation, or scaling). Our technique does not require any complex manipulation widgets, only candidate axes, which are for visualization rather than direct manipulation.
Such a design not only minimizes visual clutter but also tolerates imprecise touch-based input. To further expand our axis-based interaction vocabulary, we introduce intuitive touch gestures for relative manipulations, including snapping and borrowing the axes of another object. A user study shows that our technique is more effective than a direct adaptation of standard transformation widgets to the tactile paradigm. Project page

Lei Zhang, Hua Huang, and Hongbo Fu. EXCOL: an EXtract-and-COmplete Layering approach to cartoon animation reusing. 18(7): 1156-1169. 2012. Abstract: We introduce the EXCOL method (EXtract-and-COmplete Layering), a novel cartoon animation processing technique to convert a traditional animated cartoon video into multiple semantically meaningful layers. Our technique is inspired by vision-based layering techniques but focuses on shape cues in both the extraction and completion steps to reflect the unique characteristics of cartoon animation. For layer extraction, we define a novel similarity measure incorporating both shape and color of automatically segmented regions within individual frames, and propagate a small set of user-specified layer labels among similar regions across frames. By clustering regions with the same labels, each frame is appropriately partitioned into different layers, with each layer containing semantically meaningful content. Then a warping-based approach is used to fill missing parts caused by occlusion within the extracted layers to achieve a complete representation. EXCOL provides a flexible way to effectively reuse traditional cartoon animations with only a small amount of user interaction. It is demonstrated that our EXCOL method is effective and robust, and the layered representation benefits a variety of applications in cartoon animation processing. [Paper]

Youyi Zheng, Hongbo Fu, Oscar Kin-Chung Au, and Chiew-Lan Tai. Bilateral normal filtering for mesh denoising. 17(10): 1521-1530. 2011.
Abstract: Decoupling local geometric features from the spatial location of a mesh is crucial for feature-preserving mesh denoising. This paper focuses on first-order features, i.e., facet normals, and presents a simple yet effective anisotropic mesh denoising framework via normal field denoising. Unlike previous denoising methods based on normal filtering, which process normals defined on the Gauss sphere, our method considers normals as a surface signal defined over the original mesh. This allows the design of a novel bilateral normal filter that depends on both spatial distance and signal distance. Our bilateral filter is a more natural extension of the elegant bilateral filter for image denoising than those used in previous bilateral mesh denoising methods. Besides applying this bilateral normal filter in a local, iterative scheme, as is common in most previous works, we present for the first time a global, non-iterative scheme for anisotropic denoising. We show that the former scheme is faster and more effective for denoising extremely noisy meshes, while the latter scheme is more robust to irregular surface sampling. We demonstrate that both our feature-preserving schemes generally produce visually and numerically better denoising results than previous methods, especially in challenging regions with sharp features or irregular sampling. [Paper]

Youyi Zheng, Hongbo Fu, Daniel Cohen-Or, Oscar Kin-Chung Au, and Chiew-Lan Tai. Component-wise controllers for structure-preserving shape manipulation. Computer Graphics Forum (CGF): special issue of Eurographics 2011. 30(2): 563-572. (Acceptance rate: 17.4%) Abstract: Recent shape editing techniques, especially for man-made models, have gradually shifted focus from maintaining local, low-level geometric features to preserving structural, high-level characteristics like symmetry and parallelism. Such new editing goals typically require a pre-processing shape analysis step to enable subsequent shape editing.
Observing that most editing of shapes involves manipulating their constituent components, we introduce component-wise controllers that are adapted to the component characteristics inferred by shape analysis. The controllers capture the natural degrees of freedom of individual components and thus provide an intuitive user interface for editing. A typical model often yields a moderate number of controllers, allowing easy establishment of semantic relations among them by automatic shape analysis supplemented with user interaction. We propose a component-wise propagation algorithm to automatically preserve the established inter-relations while maintaining the defining characteristics of individual controllers and respecting the user-specified modeling constraints. We extend these ideas to a hierarchical setup, allowing the user to adjust the tool complexity with respect to the desired modeling complexity. We demonstrate the effectiveness of our technique on a wide range of engineering models with structural features, often containing multiple connected pieces.

Oscar Kin-Chung Au, Chiew-Lan Tai, Daniel Cohen-Or, Youyi Zheng, and Hongbo Fu. Electors voting for fast automatic shape correspondence. Computer Graphics Forum (CGF): special issue of Eurographics 2010. 29(2): 645-654. (Acceptance rate: 20%) Abstract: This paper challenges the difficult problem of automatic semantic correspondence between two given shapes which are semantically similar but possibly geometrically very different (e.g., a dog and an elephant). We argue that the challenging part is the establishment of a sparse correspondence, and show that it can be efficiently solved by considering the underlying skeletons augmented with intrinsic surface information. To avoid a potentially costly direct search for the best combinatorial match between two sets of skeletal feature nodes, we introduce a statistical correspondence algorithm based on a novel voting scheme, which we call electors voting.
The electors are a rather large set of correspondences which then vote to synthesize the final correspondence. The electors are selected via a combinatorial search with pruning tests designed to quickly filter out the vast majority of bad correspondences. This voting scheme is both efficient and insensitive to parameter and threshold settings. The effectiveness of the method is validated by precision-recall statistics with respect to manually defined ground truth. We show that high-quality correspondences can be instantaneously established for a wide variety of model pairs, which may have different poses, surface details, and only partial semantic correspondence. Project page: [Paper]

Wei-Lwun Lu, Kevin P. Murphy, James J. Little, Alla Sheffer, and Hongbo Fu. A hybrid Conditional Random Field for estimating the underlying ground surface from airborne LiDAR data. IEEE Transactions on Geoscience and Remote Sensing (TGARS). 47(8): 2913-2922. 2009. Abstract: Airborne laser scanners (LiDAR) return point clouds of millions of points imaging large regions. It is very challenging to recover the bare earth, i.e., the surface remaining after the buildings and vegetative cover have been identified and removed; manual correction of the recovered surface is very costly. Our solution combines classification into ground and non-ground with reconstruction of the continuous underlying surface. We define a joint model on the class labels and estimated surface, $p(\mathbf{c},\mathbf{z}\mid\mathbf{x})$, where $c_i \in \{0,1\}$ is the label of point $i$ (ground or non-ground), $z_i$ is the estimated bare-earth surface at point $i$, and $x_i$ is the observed height of point $i$. We learn the parameters of this CRF using supervised learning. The graph structure is obtained by triangulating the point clouds. Given the model, we compute a MAP estimate of the surface, $\arg\max_{\mathbf{z}} p(\mathbf{z}\mid\mathbf{x})$, using the EM algorithm, treating the labels $\mathbf{c}$ as missing data.
Extensive testing shows that the recovered surfaces agree very well with those reconstructed from manually corrected data. Moreover, the resulting classification of points is competitive with the best in the literature.

Tiberiu Popa, Qingnan Zhou, Derek Bradley, Vladislav Kraevoy, Hongbo Fu, Alla Sheffer, and Wolfgang Heidrich. Wrinkling captured garments using space-time data-driven deformation. Computer Graphics Forum (CGF): special issue of Eurographics 2009. 28(2): 427-435. (Acceptance rate: 23%) Abstract: The presence of characteristic fine folds is important for modeling realistic-looking virtual garments. While recent garment capture techniques are quite successful at capturing the low-frequency garment shape and motion over time, they often fail to capture the numerous high-frequency folds, reducing the realism of the reconstructed space-time models. In our work we propose a method for reintroducing fine folds into the captured models using data-driven dynamic wrinkling. We first estimate the shape and position of folds based on the original video footage used for capture, and then wrinkle the surface based on those estimates using space-time deformation. Both steps utilize the unique geometric characteristics of garments in general, and garment folds specifically, to facilitate the modeling of believable folds. We demonstrate the effectiveness of our wrinkling method on a variety of garments that have been captured using several recent techniques. Project page: [Paper]; [Video]

Chunxia Xiao, Hongbo Fu, and Chiew-Lan Tai. Hierarchical aggregation for efficient shape extraction. The Visual Computer (TVC). 25(3): 267-278, February 2009. Abstract: This paper presents an efficient framework which supports both automatic and interactive shape extraction from surfaces.
Unlike most existing hierarchical shape extraction methods, which are based on computationally expensive top-down algorithms, our framework employs a fast bottom-up hierarchical method with multiscale aggregation. We introduce a geometric similarity measure, which operates at multiple scales and guarantees that a hierarchy of high-level features is automatically found through local adaptive aggregation. We also show that the aggregation process allows easy incorporation of user-specified constraints, enabling users to interactively extract features of interest. Both our automatic and interactive shape extraction methods do not require explicit connectivity information, and are thus applicable to unorganized point sets. Additionally, with the hierarchical feature representation, we design a simple and effective method to perform partial shape matching, allowing efficient search for self-similar features across the entire surface. Experiments show that our methods robustly extract visually meaningful features and are significantly faster than related methods. Paper: [Online First]

Xiangye Xiao, Qiong Luo, Dan Hong, Hongbo Fu, Xing Xie, and Wei-Ying Ma. Browsing on small displays by transforming web pages into hierarchically structured sub-pages. ACM Transactions on the Web (TWEB). 2009. Article No. 4.

Kun Xu, Yuntao Jia, Hongbo Fu, Shimin Hu, and Chiew-Lan Tai. Spherical piecewise constant basis functions for all-frequency precomputed radiance transfer. IEEE Transactions on Visualization and Computer Graphics (TVCG). 14(2): 454-467, March/April 2008. (IEEE TVCG Featured Article) [citation] Abstract: This paper presents a novel basis function, called the spherical piecewise constant basis function (SPCBF), for precomputed radiance transfer. SPCBFs have several desirable properties: rotatability, the ability to represent all-frequency signals, and support for efficient multiple products.
By partitioning the illumination sphere into a set of subregions, and associating each subregion with an SPCBF valued 1 inside the region and 0 elsewhere, we precompute the light coefficients using the resulting SPCBFs. At run time, we approximate BRDF and visibility coefficients with the same set of SPCBFs through fast lookup of a summed-area table (SAT) and a visibility distance table (VDT), respectively. SPCBFs enable new effects such as object rotation in all-frequency rendering of dynamic scenes and on-the-fly BRDF editing under rotating environment lighting. With graphics hardware acceleration, our method achieves real-time frame rates. Keywords: spherical piecewise constant basis functions, real-time rendering, precomputed radiance transfer. [Paper]; [Video]

Chunxia Xiao, Shu Liu, Hongbo Fu, Chengchun Lin, Chengfang Song, Zhiyong Huang, Fazhi He, and Qunsheng Peng. Video completion and synthesis. Journal of Computer Animation and Virtual Worlds (CAVW): Special Issue of Computer Animation & Social Agents (CASA 2008). 19(3-4): 341-353, 2008.

Hongbo Fu, Oscar Kin-Chung Au, and Chiew-Lan Tai. Effective derivation of similarity transformations for implicit Laplacian mesh editing. Computer Graphics Forum (CGF). 26(1): 34-45, March 2007. (a previous version appeared as a technical report) [citation] Abstract: Laplacian coordinates as a local shape descriptor have been employed in mesh editing. As they are encoded in the global coordinate system, they need to be transformed locally to reflect the changed local features of the deformed surface. We present a novel implicit Laplacian editing framework which is linear and effectively captures local rotation information during editing. Directly representing rotation with respect to vertex positions in 3D space leads to a nonlinear system.
Instead, we first compute the affine transformations implicitly defined for all the Laplacian coordinates by solving a large sparse linear system, and then extract the rotation and uniform scaling information from each solved affine transformation. Unlike existing differential-based mesh editing techniques, our method produces visually pleasing deformation results under large-angle rotations or large-scale translations of handles. Additionally, to demonstrate the advantage of our editing framework, we introduce a new intuitive editing technique, called configuration-independent merging, which produces the same merging result independent of the relative position, orientation, and scale of the input meshes. Keywords: mesh editing, similarity invariant, Laplacian coordinates, configuration-independent, mesh deformation, mesh merging Project page
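The extraction step described above — recovering rotation and uniform scaling from a solved affine transformation — can be illustrated with a polar decomposition via the SVD. This is only a minimal sketch of the general technique, not the paper's exact formulation; `similarity_from_affine` is a hypothetical helper name.

```python
import numpy as np

def similarity_from_affine(A):
    # Polar decomposition A = R * S via SVD: keep the rotation R
    # and a single uniform scale (the mean singular value),
    # discarding any shear / anisotropic stretch.
    U, sigma, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    s = sigma.mean()              # uniform scaling factor
    return R, s
```

Applied to an affine matrix that is exactly a scaled rotation, this recovers the rotation and scale; for a general affine matrix it gives the closest similarity in the least-squares sense.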
 Oscar Kin-Chung Au, Chiew-Lan Tai, Ligang Liu, and Hongbo Fu. Dual Laplacian editing for meshes, IEEE Transactions on Visualization and Computer Graphics (TVCG). 12(3): 386-395, May/June 2006. (a previous version appeared as a technical report) [citation] Abstract: Recently, differential information as local intrinsic feature descriptors has been used for mesh editing. Given certain user input as constraints, a deformed mesh is reconstructed by minimizing the changes in the differential information. Since the differential information is encoded in a global coordinate system, it must somehow be transformed to fit the orientations of details in the deformed surface, otherwise distortion will appear. We observe that visually pleasing deformed meshes should preserve both local parameterization and geometry details. We propose to encode these two types of information in the dual mesh domain due to the simplicity of the neighborhood structure of dual mesh vertices. Both sets of information are nondirectional and nonlinearly dependent on the vertex positions. Thus, we present a novel editing framework that iteratively updates both the primal vertex positions and the dual Laplacian coordinates to progressively reduce distortion in parameterization and geometry. Unlike previous related work, our method can produce visually pleasing deformations with simple user interaction, requiring only the handle positions, not local frames at the handles. Keywords: mesh editing, local shape representation, click-and-drag interface, shape preserving, dual Laplacian Project page
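The common setup underlying these Laplacian-editing papers — reconstruct vertex positions by minimizing the change in differential coordinates subject to handle constraints — can be sketched as a soft-constrained least-squares solve. This is a minimal illustration with a uniform graph Laplacian; the paper itself works in the dual mesh domain with iterative updates, and `laplacian_edit` is a hypothetical helper name.

```python
import numpy as np

def laplacian_edit(V, edges, handles):
    # V: (n, 3) rest positions; edges: list of (i, j) pairs;
    # handles: {vertex index: target position}.
    # Build the uniform graph Laplacian L, record the rest-pose
    # differential coordinates delta = L V, then solve the
    # soft-constrained least-squares system for new positions.
    n = len(V)
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    delta = L @ V
    w = 100.0                                  # soft handle weight
    rows, rhs = [L], [delta]
    for i, p in handles.items():
        c = np.zeros(n)
        c[i] = w
        rows.append(c[None, :])
        rhs.append(w * np.asarray(p, dtype=float)[None, :])
    A = np.vstack(rows)
    b = np.vstack(rhs)
    Vnew, *_ = np.linalg.lstsq(A, b, rcond=None)
    return Vnew
```

Because the uniform Laplacian is not transformed here, large rotations of the handles would distort the result — which is exactly the transformation problem these papers address.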

## Conference and Exhibition

 Oscar Kin-Chung Au, Chiew-Lan Tai, Hongbo Fu, and Ligang Liu. Mesh editing with curvature flow Laplacian, Symposium on Geometry Processing 2005 (SGP 2005), Vienna, Austria, July 2005 (Poster). [citation] Introduction: Differential coordinates are essentially vectors encoded in the global coordinate system. Since the local features on a mesh are deformed and rotated during editing, the differential coordinates must somehow be transformed to match the desired new orientations, otherwise distortion like shearing and stretching will occur. This transformation problem is basically a chicken-and-egg problem: the reconstruction of the deformed surface requires properly oriented differential coordinates, while the reorientation of these coordinates depends on the unknown deformed mesh. We present an iterative Laplacian-based editing framework to solve this transformation problem. The only user input required is the positions of the handles, not their local frames. Thus our system supports simple point-handle editing. Our iterative updating process finds the best orientations of local features, including the orientations at the point handles. [Paper]; [Poster]
 Hongbo Fu, Chiew-Lan Tai, and Hongxin Zhang. Topology-free cut-and-paste editing over meshes, Geometric Modeling and Processing 2004 (GMP 2004), pages 173–182, Beijing, China, April 2004. (Acceptance rate: 23.3%) [citation] Abstract: Existing cut-and-paste editing methods over meshes are inapplicable to regions with non-zero genus. To overcome this drawback, we propose a novel method in this paper. First, a base surface passing through the boundary vertices of the selected region is constructed using the boundary triangulation technique. Considering the connectivity between neighboring vertices, a new detail encoding technique is then presented based on surface parameterization. Finally, the detail representation is transferred onto the target surface via the base surface. This strategy of creating a base surface as a detail carrier allows us to paste features of non-zero genus onto the target surface. By taking the physical relationship of adjacent vertices into account, our detail encoding method produces more natural and less distorted results. Therefore, our method not only eliminates the dependence on the topology of the selected feature, but also effectively reduces distortion during pasting. Keywords: topology-free, cut-and-paste, mesh editing [Paper]

## Book & Thesis

Hongbo Fu. Advanced programming in Delphi 6.0, Publishing House of Electronics Industry, March 2002, ISBN 7-900084-62-2 (in Chinese). Buy this book at dearbook.
Brief introduction: This book presents the essence of Delphi programming through a variety of advanced examples. The examples focus on the development of multimedia and Internet applications, for example, OpenGL, Indy components, XML, Web Broker and WebSnap techniques.

Hongbo Fu. Differential methods for intuitive 3D shape modeling, Ph.D. Thesis, 20 July 2007.
 Thesis Committee; Thesis (PDF: 5.7M)
Hongbo Fu. Magnetocardiography signal denoising techniques. Undergraduate Thesis, July 2002.

## Technical Report

 Wei-Lwun Lu, Kevin P. Murphy, James J. Little, Alla Sheffer, and Hongbo Fu. Coupled CRFs for estimating the underlying ground surface from airborne LiDAR data, UBC CS TR-2008-05, May 2008.
Hongbo Fu. Differential methods for intuitive 3D shape modeling, PhD Thesis Proposal, 21 May 2007.
Oscar Kin-Chung Au, Chiew-Lan Tai, Hongbo Fu, and Ligang Liu. Mesh editing with curvature flow Laplacian operator, Technical report, HKUST-CS05-10, July 2005. [citation] Abstract: Recently, differential information as local intrinsic feature descriptors has been used for mesh editing. Given certain user input as constraints, a deformed mesh is reconstructed by minimizing the changes in the differential information. Since the differential information is encoded in the global coordinate system, it must somehow be transformed to fit the orientation of details in the deformed surface, otherwise distortion will appear. We observe that visually desirable deformed meshes should preserve both local parameterization and geometry details. To find suitable representations for these two types of information, we exploit certain properties of the curvature flow Laplacian operator. Specifically, we consider the coefficients of the Laplacian operator as the parameterization information and the magnitudes of the Laplacian coordinates as the geometry information. Both sets of information are nondirectional and nonlinearly dependent on the vertex positions. Thus, we propose a new editing framework that iteratively updates both the vertex positions and the Laplacian coordinates to reduce distortion in parameterization and geometry. Our method can produce visually pleasing deformation with simple user interaction, requiring only the handle positions, not the local frames at the handles. In addition, since the magnitudes of the Laplacian coordinates approximate the integrated mean curvatures, our framework is useful for modifying mesh geometry via updating the curvature field.
We demonstrate this use in spherical parameterization and non-shrinking smoothing.
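The curvature flow (cotangent) Laplacian operator that this report builds on can be assembled from a triangle mesh as follows. This is a minimal dense sketch for illustration — `cotan_laplacian` is a hypothetical helper name, and the report's iterative editing framework is not reproduced here.

```python
import numpy as np

def cotan_laplacian(V, F):
    # Curvature flow (cotangent) Laplacian: for each face, the edge
    # (i, j) opposite vertex k receives a weight cot(angle at k) / 2.
    # V: (n, 3) vertex positions; F: list of triangles (i, j, k).
    n = len(V)
    W = np.zeros((n, n))
    for f in F:
        for r in range(3):
            i, j, k = f[r], f[(r + 1) % 3], f[(r + 2) % 3]
            u, v = V[i] - V[k], V[j] - V[k]
            # cot(theta) = (u . v) / |u x v|, theta = angle at k
            cot = u.dot(v) / np.linalg.norm(np.cross(u, v))
            W[i, j] += 0.5 * cot
            W[j, i] += 0.5 * cot
    # Diagonal holds the (negated) row sums, so each row sums to zero.
    return np.diag(W.sum(axis=1)) - W
```

By construction the operator is symmetric with zero row sums, so applying it to the vertex positions yields vectors whose magnitudes approximate the integrated mean curvatures, as the abstract notes.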
 Hongbo Fu and Chiew-Lan Tai. Mesh editing with affine-invariant Laplacian coordinates, Technical report, HKUST-CS05-01, January 2005. Abstract: Differential coordinates as an intrinsic surface representation capture the geometric details of a surface. However, differential coordinates alone cannot achieve desirable editing results, because they are not affine invariant. In this paper, we present a novel method that makes the Laplacian coordinates completely affine-invariant during editing. For each vertex of a surface to be edited, we compute the Laplacian coordinate and implicitly define a local affine transformation that is dependent on the unknown edited vertices. During editing, both the resulting surface and the implicit local affine transformations are solved simultaneously through a constrained optimization. The underlying mathematics of our method is a set of linear Partial Differential Equations (PDEs) with a generalized boundary condition. The main computation involved comes from factorizing the resulting sparse system of linear equations, which is performed only once. After that, back substitutions are executed to interactively respond to user manipulations. We propose a new editing technique, called pose-independent merging, to demonstrate the advantages of the affine-invariant Laplacian coordinates. In the same framework, large-scale mesh deformation and pose-dependent mesh merging are also presented.
Hongbo Fu. A survey of editing techniques on surface models and point-based models, PhD Qualifying Examination, 19 December 2003.

## My Co-authors (in alphabetical order)

Hong Kong: Oscar Kin-Chung Au (CityU), Long Quan (HKUST), Chiew-Lan Tai (HKUST)
Mainland China: Shimin Hu (Tsinghua), Ligang Liu (ZJU), Qunsheng Peng (ZJU), Yichen Wei (MSRA)
Taiwan: Tong-Yee Lee (NCKU), Yu-Shuen Wang (NCKU)
Canada: Vladislav Kraevoy (UBC), Kevin Murphy (UBC)
Germany: Hans-Peter Seidel (MPII)
Israel: Daniel Cohen-Or (Tel Aviv)
United States: Yuntao Jia (UIUC), Olga Sorkine (NYU)