Thursday 08:30 – 12:00

08:30 – 08:40 Welcome & Introductory Remarks – Co-organizers (10 min)

08:40 – 09:20 Keynote Speaker #1 – Dr. Abhijit Sarkar, Virginia Tech Transportation Institute (40 min)

The Emerging Role of Multispectral Sensing: From Remote Health Monitoring to Road Safety

Abstract: Multispectral data fusion and artificial intelligence are unlocking transformative capabilities across different domains. This talk will focus on two specific applications: remote health monitoring and automotive safety. We will begin by exploring the emerging role of multispectral sensing in enhancing the accuracy, robustness, and fairness of remote photoplethysmography (rPPG), an unobtrusive method for monitoring cardiovascular and respiratory signals. The talk will offer a deep dive into the current state of rPPG research, highlighting key challenges such as sensitivity to skin tone, limitations in recovering heart rate variability, and operation under ambient lighting. In parallel, we will briefly discuss how multispectral sensing and sensor fusion strategies are playing a key role in modern transportation systems. Specifically, we will discuss current challenges of LiDAR and camera systems in autonomous driving applications and their implications for road safety. Together, these discussions will highlight exciting opportunities for spectral data fusion in shaping the future of intelligent sensing in both health and mobility.
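For attendees new to rPPG, the sketch below illustrates the basic idea behind the methods the keynote surveys: the cardiac pulse subtly modulates skin color, so band-pass filtering the mean channel intensity of a face region and locating the dominant spectral peak yields a heart-rate estimate. This is a common single-channel baseline, not the speaker's method; the synthetic signal, frame rate, and filter band are illustrative assumptions.

```python
# Minimal illustrative rPPG baseline: band-pass the per-frame mean "green"
# intensity of a face region and read heart rate off the spectral peak.
# The synthetic signal below stands in for real face-ROI averages.
import numpy as np
from scipy.signal import butter, filtfilt

fps = 30.0                                      # assumed camera frame rate
t = np.arange(0, 30, 1 / fps)                   # 30 s of "video"
true_hr_hz = 1.2                                # 72 bpm synthetic pulse
green_mean = 0.5 + 0.01 * np.sin(2 * np.pi * true_hr_hz * t) \
             + 0.005 * np.random.randn(t.size)  # pulse + noise

# Band-pass to the physiological range (0.7-4 Hz, roughly 42-240 bpm).
b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
pulse = filtfilt(b, a, green_mean - green_mean.mean())

# Heart rate from the dominant frequency of the filtered signal.
freqs = np.fft.rfftfreq(pulse.size, d=1 / fps)
spectrum = np.abs(np.fft.rfft(pulse))
band = (freqs >= 0.7) & (freqs <= 4.0)
hr_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {hr_bpm:.1f} bpm")
```

Real systems replace the single synthetic channel with multiple spectral bands and fusion strategies, which is precisely where the robustness and fairness gains discussed in the talk come from.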

09:20 – 09:40 Paper #3070 (Accepted) – Retrieval of Blood Volume Pulse Waveforms Using Multispectral Face Video Data (VT) (20 min)

09:40 – 09:55 Break (15 min)

09:55 – 10:15 Paper #1 (Invited) – Concealed Object Detection Using False-Color Near-Infrared Imaging (UGA) (20 min)

10:15 – 10:35 Paper #2 (Invited) – Hyperspectral Skin Analysis via Transformers (Mason) (20 min)

10:35 – 10:50 Break (15 min)

10:50 – 11:30 Keynote Speaker #2 – Dr. Alexandre Lussier, Resonon (40 min)

Avoiding “Garbage In / Garbage Out” in Hyperspectral Imaging

Abstract: Hyperspectral imaging produces vast amounts of information, but meaningful insights can only be achieved when the underlying data are of high quality. No machine learning, deep learning, or AI model, no matter how sophisticated, can compensate for flawed or poorly collected data. For accurate and reliable analysis, datasets must be independent of both the capturing instrument and the lighting conditions during acquisition. In this talk, we will clarify the distinctions between raw, irradiance, and reflectance data, and discuss the practical steps and best practices necessary to ensure consistent, reproducible, and trustworthy results.
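As background for the raw-versus-reflectance distinction the keynote addresses, the sketch below shows the standard two-point (dark and white reference) calibration that converts raw counts into reflectance, removing the instrument response and illumination from the data. It is an illustrative assumption of the common workflow, not material from the talk; the array shapes and synthetic values are hypothetical.

```python
# Minimal sketch of two-point reflectance calibration for a hyperspectral cube:
# reflectance = (raw - dark) / (white - dark), computed per band.
import numpy as np

def to_reflectance(raw, dark, white, eps=1e-6):
    """Convert raw sensor counts to reflectance in [0, 1] per band.

    raw   : (rows, cols, bands) raw scene cube
    dark  : (bands,) dark-current reference (shutter closed)
    white : (bands,) white-panel reference (near-100% reflectance target)
    """
    return np.clip((raw - dark) / (white - dark + eps), 0.0, 1.0)

# Synthetic example: 4x4 scene, 5 bands.
rng = np.random.default_rng(0)
dark = rng.uniform(90, 110, 5)               # per-band sensor offset
white = dark + rng.uniform(800, 1200, 5)     # illuminant x instrument response
true_refl = rng.uniform(0.2, 0.8, (4, 4, 5))
raw = dark + true_refl * (white - dark)      # what the camera would record

refl = to_reflectance(raw, dark, white)
print(np.allclose(refl, true_refl, atol=1e-3))  # True: instrument/lighting removed
```

The point mirrors the abstract: once data are expressed as reflectance, they no longer depend on the particular camera or lighting at acquisition time, which is what makes downstream models reproducible.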

11:30 – 12:00 Panel Discussion – Interactive Q&A with participants and speakers (30 min)