Keynote Speakers


Prof. James Kwok, IEEE Fellow, Hong Kong University of Science and Technology, Hong Kong
 

Short bio: Prof. Kwok is a Professor in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology. He has served, or is serving, as an Associate Editor for IEEE Transactions on Neural Networks and Learning Systems, Neurocomputing, Artificial Intelligence Journal, and International Journal of Data Science and Analytics, and as an Action Editor of Machine Learning. He also serves as a Senior Area Chair of major machine learning / AI conferences, including NeurIPS, ICML, and ICLR. He received the Most Influential Scholar Award Honorable Mention for "outstanding and vibrant contributions to the field of AAAI/IJCAI between 2009 and 2019". He is an IEEE Fellow and the IJCAI-2025 Program Chair.

Speech Title: Unlock Your Potential: Achieving Multiple Goals with Ease
Abstract: Multi-objective optimization (MOO) aims to optimize multiple conflicting objectives simultaneously and is becoming increasingly important in deep learning. However, traditional MOO methods face significant challenges due to the non-convexity and high dimensionality of modern deep neural networks, making effective MOO in deep learning a complex endeavor.
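For readers unfamiliar with the formalism, a minimal statement of the problem the abstract refers to, in standard notation (not taken from the talk):

    \min_{\theta \in \mathbb{R}^d} \; F(\theta) \;=\; \bigl( f_1(\theta), \ldots, f_m(\theta) \bigr)

where the f_i are the m conflicting objectives. A solution \theta^* is Pareto-optimal if no \theta satisfies f_i(\theta) \le f_i(\theta^*) for all i with strict inequality for at least one i; the set of all such solutions is the Pareto set discussed below.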

In this talk, we address these challenges in MOO for several deep learning applications. First, in multi-task learning, we propose an efficient approach that learns the Pareto manifold by integrating a main network with several low-rank matrices. This method significantly reduces the number of parameters and helps extract shared features. We also introduce preference-aware model merging, which uses MOO to combine multiple models into a single one, treating the performance of the merged model on each base model's task as an objective. During the merging process, our parameter-efficient structure generates a Pareto set of merged models, each representing a Pareto-optimal solution tailored to specific preferences. Finally, we demonstrate that pruning large language models (LLMs) can be framed as a MOO problem, allowing for the efficient generation of a Pareto set of pruned models that illustrate various capability trade-offs.
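As a rough illustration of the "main network plus low-rank matrices" idea, the sketch below parameterizes a linear layer with a shared weight and one low-rank correction per objective, mixed by a user-supplied preference vector. All names and shapes here are hypothetical, not the speaker's actual architecture.

    import torch
    import torch.nn as nn

    class PreferenceLowRankLinear(nn.Module):
        # Hypothetical sketch: a shared ("main network") weight plus one
        # low-rank correction A_k @ B_k per objective, mixed by a
        # preference vector over the objectives.
        def __init__(self, d_in, d_out, n_objectives, rank=4):
            super().__init__()
            self.shared = nn.Linear(d_in, d_out)
            self.A = nn.Parameter(0.01 * torch.randn(n_objectives, d_out, rank))
            self.B = nn.Parameter(0.01 * torch.randn(n_objectives, rank, d_in))

        def forward(self, x, pref):
            # pref: (n_objectives,) non-negative weights summing to 1;
            # each preference traces out a different point on the Pareto manifold.
            delta = torch.einsum("k,kor,kri->oi", pref, self.A, self.B)
            return self.shared(x) + x @ delta.T

Training such a structure would sample preference vectors and minimize the correspondingly weighted task losses, so that one compact model covers a continuum of trade-offs.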

 


Prof. Yen-Wei Chen, Ritsumeikan University, Japan
 

Short bio: Yen-Wei Chen received the B.E. degree in 1985 from Kobe University, Kobe, Japan, and the M.E. and D.E. degrees in 1987 and 1990, respectively, from Osaka University, Osaka, Japan. He was a research fellow at the Institute for Laser Technology, Osaka, from 1991 to 1994. From October 1994 to March 2004, he was an associate professor and then a professor in the Department of Electrical and Electronic Engineering, University of the Ryukyus, Okinawa, Japan. He is currently a professor in the College of Information Science and Engineering, Ritsumeikan University, Japan. He is the founder and first director of the Center of Advanced ICT for Medicine and Healthcare, Ritsumeikan University. Since April 2024, he has been a Foreign Fellow of the Engineering Academy of Japan.
His research interests include medical image analysis, computer vision, and computational intelligence. He has published more than 300 research papers in leading journals and conferences, including IEEE Transactions on Image Processing, IEEE Transactions on Medical Imaging, CVPR, ICCV, and MICCAI. He has received many distinguished awards, including the ICPR 2012 Best Scientific Paper Award and the 2014 JAMIT Best Paper Award. He has led numerous national and industrial research projects. In recent years, Professor Chen has consistently been ranked among the world's top 2% of scientists, both for the most recent year and over his entire career, according to the Stanford/Elsevier rankings.

Speech Title: Towards Accurate AI-Based Segmentation of Biomedical Images
Abstract: Recently, deep learning (DL) has played an important role in various academic and industrial domains, especially in computer vision and image recognition. Although DL has been successfully applied to biomedical image analysis, achieving state-of-the-art performance, few DL applications have been successfully implemented in real clinical settings. The primary reason is that the specific knowledge and prior information about human anatomy possessed by doctors are not utilized or incorporated into DL applications. In this keynote address, I will present our recent advances in knowledge-guided deep learning for enhanced biomedical image analysis, covering two research topics: (1) our proposed deep atlas prior, which incorporates biomedical knowledge into DL models; and (2) language-guided biomedical image segmentation, which incorporates the specific knowledge of doctors as an additional language modality in DL models.
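As a loose illustration of how an anatomical atlas can act as a prior in a segmentation loss, the sketch below weights the per-pixel cross-entropy by how improbable the atlas considers each location; the speaker's actual deep atlas prior formulation may differ.

    import torch.nn.functional as F

    def atlas_weighted_loss(logits, target, atlas_prob, alpha=1.0):
        # atlas_prob: (B, H, W) probabilistic atlas, i.e. how likely each
        # pixel belongs to the organ according to anatomical knowledge.
        ce = F.cross_entropy(logits, target, reduction="none")  # (B, H, W)
        weight = 1.0 + alpha * (1.0 - atlas_prob)  # penalize atlas-unlikely pixels more
        return (weight * ce).mean()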

 


Prof. Ari Aharari, Sojo University, Kumamoto, Japan
 

Short bio: He received his M.E. and Ph.D. degrees in Industrial Science and Technology Engineering and Robotics from Niigata University and the Kyushu Institute of Technology, Japan, in 2004 and 2007, respectively. In 2004, he joined GMD-JAPAN as a Research Assistant. He was a Research Scientist and Coordinator at the FAIS Robotics Development Support Office from 2004 to 2007. He was a Postdoctoral Research Fellow of the Japan Society for the Promotion of Science (JSPS) at Waseda University, Japan, from 2007 to 2008. He served as a Senior Researcher at Fukuoka IST, involved in the Japan Cluster Project, from 2008 to 2010. In 2010, he became an Assistant Professor in the Faculty of Informatics at the Nagasaki Institute of Applied Science. From 2012 he was an Associate Professor in the Department of Computer and Information Science, Sojo University, Japan, where he is currently a Professor.
His research interests include IoT, robotics, IT agriculture, image processing, and data analysis (big data), together with their applications. He is a member of the IEEE (Robotics and Automation Society), RSJ (Robotics Society of Japan), IEICE (Institute of Electronics, Information and Communication Engineers), and IIEEJ (Institute of Image Electronics Engineers of Japan).

Speech Title: Harmonizing Nature, Industry, and Safety: AI and IoT Approaches toward a Resilient and Sustainable Society
Abstract: Realizing a sustainable society requires a holistic approach that integrates environmental conservation, industrial efficiency, and social resilience. As Artificial Intelligence (AI) and IoT technologies evolve, their ability to bridge the physical and digital worlds becomes crucial for solving complex global challenges. In this keynote speech, I will discuss how AI-driven technologies can contribute to the Sustainable Development Goals (SDGs) through three distinct yet interconnected case studies: environmental rehabilitation, smart manufacturing, and disaster mitigation.

The first part of the talk focuses on "IoT-Based Monitoring in Mangrove Ecosystems," a collaborative project between my laboratory and our partner university in Phuket, Thailand. Mangroves are vital for marine biodiversity, coastal protection, and carbon sequestration, but face rapid decline. Successful rehabilitation relies heavily on site-specific knowledge, particularly hydrology, as mangroves are sensitive to tidal shifts, salinity, temperature, and storms. We developed a mangrove-specific IoT framework and sensor prototype, verified through field testing in Phuket. This system collects on-site environmental data and transmits it to a cloud server, allowing stakeholders to assess conditions against mangrove health standards and make informed, timely decisions for the survival of young mangroves.
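In spirit, the sensing loop is simple; the hypothetical sketch below (placeholder endpoint, illustrative thresholds, stubbed sensor drivers) shows the read-check-upload cycle such a node might run.

    import time
    import requests

    CLOUD_URL = "https://example.org/api/mangrove/readings"  # placeholder endpoint
    HEALTH_RANGES = {"salinity_ppt": (5.0, 35.0), "water_temp_c": (20.0, 32.0)}  # illustrative

    def read_sensors():
        # stand-in for the real sensor drivers on the field prototype
        return {"salinity_ppt": 18.2, "water_temp_c": 28.5, "tide_level_cm": 41.0}

    def check_and_upload():
        reading = read_sensors()
        alerts = [name for name, (lo, hi) in HEALTH_RANGES.items()
                  if not lo <= reading[name] <= hi]
        payload = {"timestamp": time.time(), "reading": reading, "alerts": alerts}
        requests.post(CLOUD_URL, json=payload, timeout=10)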

The second part introduces the latest initiatives at the Smart Society Innovation Laboratory, focusing on social and industrial implementation. I will present our work on smart factories, specifically AI-based quality control and IoT platform design, which aims to minimize waste and optimize energy consumption in manufacturing. I will then turn to disaster prevention and mitigation, introducing our vehicle-based evacuee support system initiative. These projects demonstrate how AI can enhance safety and resilience in the face of natural disasters.

Through these diverse examples, this presentation aims to clarify the role of AI technologies not just as tools for efficiency, but as essential infrastructure for a truly sustainable and resilient society.

 

Invited Speakers


Assoc. Prof. Dr. Afizan Azman, Taylor’s University, Malaysia
 

Short bio: Afizan Bin Azman received a Ph.D. degree from Loughborough University, U.K. He is an Associate Professor at the Department of Computer Science and Engineering at Taylor’s University, Subang Jaya, Selangor, Malaysia, and the Director of the Impact Laboratory, Digital Innovation and Smart Society. His research primarily focuses on image processing, machine learning, and data analytics, with a strong emphasis on advancing academic research and practical applications in digital innovation and developing a smart society.

Speech Title: TALKBIM: Automated Recognition of Malaysian Sign Language Using Computer Vision for Real-Time Assistive Communication in the Malaysian Deaf Community
Abstract: Sign language serves as a crucial communication medium for the deaf and hard-of-hearing communities. In Malaysia, Malaysian Sign Language (MSL) is widely adopted; however, limited technological solutions exist to bridge communication gaps between MSL users and the general public. This research aims to develop a robust MSL detection system leveraging both traditional machine learning (ML) and modern deep learning (DL) approaches. Initially, multiple comprehensive datasets of MSL gestures were curated manually with the help of experts, encompassing static hand signs and dynamic movements. Several ML algorithms, such as support vector machines (SVM), random forests (RF), k-nearest neighbours (KNN), decision trees (DT), and ensemble-learning-based algorithms, were trained and evaluated alongside DL architectures such as convolutional neural networks (CNNs) and multi-head models. A rigorous comparative analysis based on accuracy, precision, recall, computational efficiency, and model generalizability was conducted to determine the most effective detection method; a model was required to exceed 95% accuracy to be selected for the next step. Subsequently, the best-performing model was optimized and integrated into a user-friendly Android application capable of real-time MSL recognition and translation into text or speech output. The outcomes of this study are expected to contribute to the advancement of assistive technologies in Malaysia, promoting inclusivity and improving everyday communication for MSL users.
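A minimal sketch of the kind of comparative screening the abstract describes, using scikit-learn with the 95% accuracy bar; the feature matrix X, label vector y, and loader name are placeholders standing in for the curated gesture datasets.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_gesture_features()  # hypothetical loader: feature vectors and sign labels

    candidates = {
        "SVM": SVC(kernel="rbf"),
        "RF": RandomForestClassifier(n_estimators=200),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "DT": DecisionTreeClassifier(),
    }
    for name, model in candidates.items():
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
        status = "kept" if acc > 0.95 else "rejected"  # 95% selection threshold
        print(f"{name}: mean accuracy {acc:.3f} -> {status}")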

 


Assoc. Prof. Le Nguyen Quoc Khanh, Taipei Medical University (TMU), Taiwan
 

Short bio: I am currently an Associate Professor with the In-Service Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University (TMU), Taiwan. I am also a joint Associate Professor with the International Master/PhD Program in Medicine (IGPM-TMU), International Ph.D. Program for Cell Therapy and Regeneration Medicine (IPCTRM-TMU) and TMU Research Center for Artificial Intelligence in Medicine.
Prior to joining TMU, I was a Research Fellow at the Medical Humanities Research Cluster, School of Humanities, Nanyang Technological University (NTU), Singapore. I received my MS and PhD degrees from the Department of Computer Science and Engineering, Graduate Program in Biomedical Informatics, Yuan Ze University (YZU), Taiwan.
 

Speech Title: Multi-Modal Learning for Early Diagnosis and Prognostic Modeling in Glioblastoma
Abstract: Glioblastoma (GBM) is an extremely aggressive brain tumor where early diagnosis and accurate survival prediction remain major challenges due to strong radiological, histopathological, and molecular heterogeneity. This invited talk presents an AI-driven multi-modal framework that unifies early tumor grading with downstream prognostic modeling. First, a Vision Transformer (ViT) is applied to FLAIR MRI scans to classify glioma WHO grades 2–4, achieving F1-scores above 0.89 and providing interpretable attention maps that highlight tumor-relevant regions. To overcome the limitations of single-modality prediction, we integrate whole-slide histopathology with RNA-seq transcriptomics using an attention-based deep learning architecture. The multi-modal model significantly improves survival prediction, outperforms traditional Cox and random forest models, and robustly separates high- and low-risk patient groups across CPTAC-GBM and TCGA-GBM cohorts (log-rank p < 0.0001). This talk demonstrates how multimodal signal processing and deep learning can advance precision neuro-oncology and clinical decision support.
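A bare-bones sketch of attention-based late fusion of two modality embeddings into a single risk score, to make the architecture family concrete; the model presented in the talk is considerably more elaborate, and all dimensions here are hypothetical.

    import torch
    import torch.nn as nn

    class AttentionFusion(nn.Module):
        # Fuses a whole-slide-image embedding and an RNA-seq embedding
        # with learned attention weights, then predicts a survival risk.
        def __init__(self, d=256):
            super().__init__()
            self.score = nn.Linear(d, 1)   # attention score per modality
            self.risk = nn.Linear(d, 1)    # Cox-style risk head

        def forward(self, h_wsi, h_rna):
            h = torch.stack([h_wsi, h_rna], dim=1)       # (batch, 2, d)
            attn = torch.softmax(self.score(h), dim=1)   # (batch, 2, 1)
            fused = (attn * h).sum(dim=1)                # (batch, d)
            return self.risk(fused).squeeze(-1)          # higher = higher risk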

 


Dr. Gunasekar Thangarasu, IMU University, Malaysia
 

Short bio: Dr. Gunasekar Thangarasu is an academic and technology researcher specializing in Artificial Intelligence, Big Data Analytics, and Digital Health Innovation. He is currently contributing to the field of Digital Health and Health Informatics at the International Medical University (IMU), Malaysia, where he leads initiatives in AI-driven healthcare education, curriculum innovation, and applied research. He earned his PhD in Information Technology from Universiti Teknologi PETRONAS, with research focused on intelligent diagnostic systems powered by AI and data science. With over 15 years of experience in higher education and research leadership across Malaysia and India, he has previously directed academic programmes, collaborated with global institutions, and driven industry-aligned digital upskilling initiatives.
He has published 100+ scholarly works, including journal papers, conference articles, and book chapters, and actively serves as a reviewer and committee member for IEEE conferences and Scopus-indexed journals. His expertise spans machine learning, healthcare data analytics, cloud computing, blockchain, and digital transformation. He has been invited as a keynote speaker and expert panelist at global conferences in the United Kingdom, India, Malaysia, and the Asia-Pacific region, contributing thought leadership in emerging technologies, future healthcare intelligence, and innovation-driven digital ecosystems. His vision centers on advancing AI-empowered healthcare systems, human-centered innovation, and sustainable digital health talent development with a strong focus on translational research and real-world healthcare impact.

Speech Title: Ensemble AI Framework for Prostate Cancer Diagnosis and Segmentation
Abstract: Prostate cancer continues to be one of the most significant health challenges worldwide, with early and accurate diagnosis playing a critical role in improving treatment outcomes. This talk presents an Ensemble AI Framework for Prostate Cancer Diagnosis and Segmentation, designed to enhance precision, robustness, and clinical reliability using MRI. The framework begins with an optimized feature-selection process using the Wild Horse Optimizer, ensuring that only the most informative prostate MRI features are used for analysis. These features are then processed through an ensemble of advanced deep learning models (CNN, ResNet, and GAN architectures) that collaboratively classify MRI scans as cancerous or non-cancerous. A performance-based voting mechanism consolidates the model outputs to deliver a more accurate and stable diagnostic decision. For segmentation, the framework integrates a novel Dual Swin Transformer UNet, leveraging attention-driven multi-scale feature extraction to precisely delineate cancer-affected regions. This dual approach, combining classification and segmentation, provides a comprehensive AI-assisted pipeline supporting clinical decision-making. Evaluated across multiple performance metrics, the system demonstrates high accuracy, sensitivity, precision, and recall, showcasing its strong potential to advance early detection and personalized treatment planning in prostate cancer care.
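As one plausible reading of the "performance-based voting" step, the sketch below weights each model's class probabilities by its validation accuracy before taking the argmax; the framework's actual consolidation rule may differ.

    import numpy as np

    def performance_weighted_vote(probs, val_accuracies):
        # probs: list of (n_samples, n_classes) probability arrays, one per model
        # val_accuracies: each model's held-out accuracy, used as its vote weight
        w = np.asarray(val_accuracies, dtype=float)
        w = w / w.sum()                                    # normalize weights
        combined = sum(wi * p for wi, p in zip(w, probs))  # weighted average
        return combined.argmax(axis=1)                     # final decision per sample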



Prof. Mu-Yen Chen, National Cheng Kung University, Taiwan

(Top 2% highly cited researcher, Stanford University rankings, in the field of Artificial Intelligence; 234 publications on Google Scholar, 8,353 citations, h-index 42)

Short bio: Dr. Chen is a Distinguished Professor of Engineering Science at National Cheng Kung University, Taiwan. He received his PhD in Information Management from National Chiao Tung University, Taiwan. His current research interests include artificial intelligence, soft computing, data mining, deep learning, context awareness, machine learning, and social network mining, with more than 200 publications in these areas. He has co-edited several special issues in international journals (e.g., IEEE Transactions on Engineering Management, IEEE Access, ACM Transactions on Management Information Systems, ACM Transactions on Sensor Networks, Computers in Human Behavior, Applied Soft Computing, Soft Computing, Information Fusion, Journal of Real-Time Image Processing, Sustainable Cities and Society, Neurocomputing, Supercomputing, Enterprise Information Systems, Journal of Medical and Biological Engineering, and Computational Economics). He has served as an Associate Editor of international journals (e.g., IEEE Transactions on Engineering Management, IEEE Access, Applied Soft Computing, Granular Computing, Human-centric Computing and Information Sciences, Journal of Information Processing Systems, and International Journal of Social and Humanistic Computing), and he is an editorial board member of several SCI journals.