Invited Speakers of CGIP 2023




Prof. Hongmin Gao
Hohai University, China

Hongmin Gao received the B.S. degree in communication engineering from Hohai University, Nanjing, China, in 2006, and the Ph.D. degree in computer application technology from Hohai University in 2014. He is currently a Professor and doctoral supervisor with the College of Computer and Information, Hohai University, and the deputy director of the Jiangsu Marine Monitoring Equipment and Data Processing Engineering Center. In 2022 he was selected as a young and middle-aged academic leader under the "333 Project" of Jiangsu Province. Over the last five years he has led two research projects supported by the National Natural Science Foundation of China (NSFC), one project supported by the Jiangsu Natural Science Foundation, and one project on the transformation of scientific and technological achievements in Jiangsu Province, and he has participated in the National Key R&D Program of China. Part of his research results have been applied in water-resource flood disaster monitoring and in an operational decision-support system for water conservancy, which won the second prize of the Jiangxi Province Science and Technology Advancement Award in 2018. His research interests include deep learning, information fusion, and image processing in remote sensing.

Speech Title: "Deep Learning Approaches for Hyperspectral Image Classification"

Abstract: Advances in computing technology have fostered the development of new and powerful deep learning (DL) techniques, which have demonstrated promising results in a wide range of applications. In particular, DL methods have been successfully used to classify remotely sensed data collected by Earth Observation (EO) instruments. Hyperspectral imaging (HSI) is a hot topic in remote sensing data analysis due to the vast amount of information contained in this kind of imagery, which allows for a better characterization and exploitation of the Earth's surface by combining rich spectral and spatial information. This talk focuses on hyperspectral image classification based on deep learning approaches. It covers the key parts of our previous work, mainly mixed depth-wise convolution, dual-branch attention, and a hybrid convolutional and transformer network. Experiments show that, compared with traditional methods, these deep learning models can extract more useful information from remote sensing images and thus improve classification accuracy.
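To make the named ingredients concrete, the sketch below shows a minimal PyTorch model that combines a mixed depth-wise convolution block with a small transformer encoder for patch-wise hyperspectral classification. It is an illustration only, not the speaker's actual networks; all layer sizes, kernel sizes, band counts, and class counts are assumptions.

```python
import torch
import torch.nn as nn

class MixedDepthwiseConv(nn.Module):
    """Split the channels into groups and apply depth-wise convolutions
    with different kernel sizes to each group, then concatenate."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)  # absorb the remainder
        self.splits = splits
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2, groups=c)
            for c, k in zip(splits, kernel_sizes)
        )

    def forward(self, x):
        chunks = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)

class HybridHSIClassifier(nn.Module):
    """Toy hybrid CNN + transformer classifier for hyperspectral patches."""
    def __init__(self, bands=200, num_classes=16, dim=64):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(bands, dim, 1), nn.BatchNorm2d(dim), nn.GELU())
        self.mixconv = MixedDepthwiseConv(dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                           # x: (B, bands, H, W) patch
        feats = self.mixconv(self.stem(x))          # joint spatial-spectral features
        tokens = feats.flatten(2).transpose(1, 2)   # (B, H*W, dim) pixel tokens
        tokens = self.encoder(tokens)               # long-range interactions
        return self.head(tokens.mean(dim=1))        # patch-level class logits

# Example: classify two 9x9 patches cut from a 200-band hyperspectral cube
logits = HybridHSIClassifier()(torch.randn(2, 200, 9, 9))   # shape (2, 16)
```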

Prof. Issei Fujishiro
Keio University, Japan

Issei Fujishiro is currently a professor of information and computer science at Keio University. Before joining Keio in 2009, he held faculty positions at the University of Tokyo, the University of Tsukuba, Ochanomizu University, and Tohoku University. He received his Doctor of Science from the University of Tokyo in 1988. He has a 37-year career in the field of visual computing, with a particular focus on modeling paradigms and shape representations, applied visualization design and lifecycle management, and smart multi-modal ambient media. He has served as an associate editor for several international journals, including IEEE Transactions on Visualization and Computer Graphics, Elsevier Computers & Graphics, and the Journal of Visual Informatics. He has chaired 37 international conferences, including CG International 2017, IEEE VIS 2018/2019 (SciVis), and Cyberworlds 2019. He was the President of the Institute of Image Electronics Engineers of Japan (IIEEJ) and of the Visualization Society of Japan, and a Vice President of the Society for Art and Science. He is presently appointed as a member of the Science Council of Japan. He is a fellow of the Japan Federation of Engineering Societies and the Information Processing Society of Japan, an honorary member of IIEEJ, and a 2021 inductee into the IEEE Visualization Academy.

Speech Title: "Psychologically Based Stereoscopic Viewing"

Abstract: Inspired by the trick artworks of the young Japanese artist Hideyuki Nagai, we developed a simple, naked-eye stereoscopic viewing system for personal use. The system is equipped only with two general-purpose display monitors arranged orthogonally together with a web camera, and it induces motion parallax by tracking the viewer’s eyes to update the state of anamorphosis, a form of monocular illusion. The stereoscopic effect perceived by the viewer may be degraded by binocular disparity, especially when the system is used with small display monitors. It has been shown empirically that the sense of presence can be improved by placing a so-called Cyclopean eye at a specific position on the line between the two eyes to obtain pinpointed anamorphosis. The system, however, suffers from a problem wherein displayed objects may be either partially beyond the rendering area or completely missing, depending on the object’s position relative to the user’s viewpoint, resulting in a reduced stereoscopic effect. We attempt to address this problem by employing a so-called frame-break method that has recently been used in filmmaking and advertising. I will also report on a trial porting of the system to a commercially available laptop PC with a foldable display.
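As a rough illustration of the underlying geometry (not the authors' implementation), the sketch below computes a Cyclopean viewpoint on the line between the two tracked eyes and projects a virtual 3D point onto the two orthogonal display planes by ray-plane intersection, which is the basic anamorphosis construction. All coordinates, the interpolation parameter, and the plane setup are assumptions for the example.

```python
import numpy as np

def cyclopean_eye(left_eye, right_eye, alpha=0.5):
    """Virtual viewpoint on the line between the two tracked eye positions.
    alpha = 0.5 is the midpoint; other values model the 'specific position'
    mentioned in the abstract (parameter assumed for illustration)."""
    return (1.0 - alpha) * np.asarray(left_eye, float) + alpha * np.asarray(right_eye, float)

def anamorphic_projection(point, eye, plane_point, plane_normal):
    """Intersect the ray from the eye through a virtual 3D point with a
    display plane: the intersection is where the point must be drawn so
    that it looks correct from the tracked viewpoint (anamorphosis)."""
    d = np.asarray(point, float) - eye
    n = np.asarray(plane_normal, float)
    t = np.dot(np.asarray(plane_point, float) - eye, n) / np.dot(d, n)
    return eye + t * d

# Example: horizontal monitor lying in the plane y = 0 and a vertical
# monitor in the plane z = 0 (the orthogonal two-monitor arrangement).
eye = cyclopean_eye(left_eye=[-0.03, 0.35, 0.5], right_eye=[0.03, 0.35, 0.5])
virtual_pt = [0.0, 0.1, 0.1]                       # a point of the 3D object
on_desk = anamorphic_projection(virtual_pt, eye, [0, 0, 0], [0, 1, 0])
on_wall = anamorphic_projection(virtual_pt, eye, [0, 0, 0], [0, 0, 1])
```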

Assoc. Prof. Yiyu Cai
Nanyang Technological University, Singapore

Assoc. Prof. Yiyu Cai did his PhD training in Engineering, MSc training in Computer Graphics & Computer-Aided Geometric Design, and BSc training in Mathematics. He is currently a tenured faculty member with the School of Mechanical & Aerospace Engineering at NTU and holds a joint appointment with NTU's Institute for Media Innovation. Prior to that, he was an R&D specialist with the Kent Ridge Digital Labs, a senior software engineer with the Center for Information-enhanced Medicine (CIeMed), a joint venture between the Johns Hopkins Medical School and the Institute of Systems Science, and a lecturer with Zhejiang University. He has been doing interdisciplinary research related to Interactive Digital Media (IDM). His research interests include 3D-based design, simulation, serious games, and virtual reality. He is also active in IDM application research for engineering, the bio and medical sciences, education, and the arts. Together with his students and collaborators, he has edited 3 books and 4 journal special issues, and published over 160 papers in peer-reviewed international journals and conferences.

Speech Title: "Automatic Reconstruction of Mechanical and Electric Plumbing"

Abstract: Mechanical, electrical, and plumbing (MEP) systems play a crucial role in modern buildings. Once constructed, an MEP system requires regular maintenance to avoid failures, which can have significant operational, economic, and even environmental impacts. Currently, modelling existing MEP systems is mostly a manual and tedious process. This talk will present a novel solution that reconstructs MEP systems from LiDAR-scanned point cloud data. Our solution requires no additional data other than the unstructured point cloud with XYZ fields, no data preprocessing, and no prior knowledge of the pipe directions or dimensions. A novel deep learning network, PipeNet, is designed to detect pipes regardless of the size of the input data and the scale of the target scene, and to predict the pipe centerline points together with other pipe parameters. The pipe model is then reconstructed through line fitting, refinement, and graph-based connectivity analysis constrained by domain knowledge to maximize the coherence of the piping system model. The final output is converted to the Industry Foundation Classes (IFC) format, which is neutrally accepted in the Building Information Modelling (BIM) industry. The solution is validated on both synthetic and actual scan data, and the results demonstrate its robustness, fast speed, and high recognition rate and precision.
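To give a flavour of the geometric post-processing stage described above (line fitting followed by graph-based connectivity analysis), here is a minimal NumPy sketch. It is not PipeNet or the authors' pipeline; the segment representation and the distance and angle thresholds are assumptions for illustration.

```python
import numpy as np

def fit_pipe_axis(centerline_pts):
    """Least-squares line fit to the centerline points predicted for one
    pipe instance (via PCA): returns a point on the axis and its unit
    direction."""
    pts = np.asarray(centerline_pts, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]                   # first principal direction

def connect_segments(segments, max_gap=0.10, max_angle_deg=15.0):
    """Toy graph-based connectivity: link two fitted pipe segments when an
    endpoint of one is close to an endpoint of the other and their axes
    are nearly parallel. Each segment is (center, unit_direction, length);
    the thresholds are assumed, not taken from the talk."""
    edges = []
    for i, (ci, di, li) in enumerate(segments):
        ends_i = [ci - 0.5 * li * di, ci + 0.5 * li * di]
        for j in range(i + 1, len(segments)):
            cj, dj, lj = segments[j]
            ends_j = [cj - 0.5 * lj * dj, cj + 0.5 * lj * dj]
            gap = min(np.linalg.norm(a - b) for a in ends_i for b in ends_j)
            angle = np.degrees(np.arccos(np.clip(abs(np.dot(di, dj)), 0.0, 1.0)))
            if gap < max_gap and angle < max_angle_deg:
                edges.append((i, j))         # candidate joint in the piping graph
    return edges

# Example: two nearly collinear straight segments that should be joined
axis = fit_pipe_axis(np.random.randn(100, 3) * [0.01, 0.01, 1.0])
seg_a = (np.array([0.0, 0.0, 0.0]), axis[1], 2.0)
seg_b = (np.array([0.0, 0.0, 2.05]), axis[1], 2.0)
print(connect_segments([seg_a, seg_b]))      # -> [(0, 1)]
```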