Opening Keynote: Paul Lukowicz
"Acoustic Sensing Beyond Scene Recognition"
Acoustic sensing is often motivated by the richness of sounds in our environment, which, much as blind people rely on them, can be used to detect a very broad range of situations. While such Acoustic Scene Analysis is without doubt an attractive approach, sound-based sensing offers many other, less obvious use cases such as body sound sensing, object property analysis, and relative object localization. The talk will explore such use cases, also addressing power consumption and privacy concerns related to sound-based sensing.
Paul Lukowicz is Full Professor of AI at the Technical University of Kaiserslautern in Germany, where he heads the Embedded Intelligence group at DFKI. From 2006 to 2011 he was Full Professor (W3) of Computer Science at the University of Passau. Before that, he was a senior researcher (“Oberassistent”) at the Electronics Laboratory of the Department of Information Technology and Electrical Engineering of ETH Zurich. Paul Lukowicz holds an MSc (Dipl. Inf.) and a Ph.D. (Dr. rer. nat.) in Computer Science, as well as an MSc in Physics (Dipl. Phys.). His research focuses on context-aware ubiquitous and wearable systems, including sensing, pattern recognition, system architectures, models of large-scale self-organized systems, and applications. Paul Lukowicz coordinates the FP7-FET SOCIONICAL project, is Associate Editor in Chief of IEEE Pervasive Computing Magazine, and has served as TPC Chair of a number of international events in the area.
Panel Keynote: Cheng Zhang
"Comprehending Human Behaviors with Everyday Wearables using AI-powered Sound"
Despite the rapid advancement of AI, computers’ ability to comprehend human behaviors remains limited. The primary obstacle lies in the absence of suitable sensing technologies capable of capturing and interpreting high-quality behavioral data in everyday settings. In this talk, I will share my research on the development of everyday wearables using AI-powered active acoustic sensing that are minimally obtrusive, privacy-aware, and low-power, yet capable of capturing and comprehending various body movements and poses that humans employ in their everyday activities. First, I will show how these active acoustic sensing technologies can empower various everyday wearable form factors, including wristbands, necklaces, earphones, headphones, and glasses, to track essential body postures, such as facial expressions, gaze, finger poses, limb poses, as well as gestures on teeth and tongue. Then, I will demonstrate how, when paired with state-of-the-art AI, these everyday wearables can revolutionize how computers comprehend human behaviors. Specifically, I will focus on applications related to activity recognition, accessibility, and health sensing. Finally, I will discuss the prospects and challenges associated with the integration of AI-powered sound and wearables to support users in the future of everyday computing.
Cheng Zhang is an Associate Professor (with Tenure) in Information Science and Computer Science (Field Member) at Cornell University, where he directs the Smart Computer Interfaces for Future Interaction (SciFi) Lab. He received his Ph.D. in Computer Science from the Ubicomp Lab at the Georgia Institute of Technology, where he was advised by Dr. Gregory Abowd (CS) and Dr. Omer Inan (ECE).
His research focuses on integrating human-centered AI with advanced sensing technologies to empower everyday wearables in comprehending human behavior in real-world environments, with the goal of better supporting users. This work involves: (1) Creating low-power, minimally intrusive, and privacy-conscious everyday wearables that significantly lower the barrier to collecting high-quality human behavior data in natural settings; (2) Designing AI-powered sensing algorithms and systems that prioritize a human-centered understanding of behavior in everyday contexts; and (3) Advancing high-impact applications in areas like accessibility and health to better support users in their daily lives.