CBA lab at Ubicomp / ISWC 2025
The CBA lab will attend and participate in Ubicomp / ISWC 2025, held from October 12–16, 2025 in Espoo, Finland. This year, we are organizing a workshop and presenting four publications across IMWUT and the workshop program. Come find us at the conference and let’s chat about all things Ubicomp and how we can work together.
Overview
- 4 papers
- 1 Keynote (TP at UbiSense)
- 1 Workshop (GenAI4HS)
- PACM IMWUT editorial board (TP)
- Serving on Doctoral Colloquium (TP)
Workshop
We are co-organizing the GenAI4HS: Generative AI for Human Sensing workshop.
🔗 https://genai4hs.github.io/
The half-day workshop (09:00–13:15) takes place on Sunday, 12 October 2025.
This workshop focuses on advancing the integration of Generative AI and foundational models in human sensing applications. It brings together researchers to discuss challenges and recent trends in applying these models to tasks like activity recognition, health monitoring, and behavior analysis. The program includes paper presentations and round-table discussions, emphasizing topics such as multimodal representation learning, LLM integration for ubiquitous computing, data simulation (e.g., text-to-motion), and the practical deployment of generative approaches in real-world settings.
Papers
📄 Layout-Agnostic Human Activity Recognition in Smart Homes through Textual Descriptions Of Sensor Triggers (TDOST)
Authors: Megha Thukral∗, Sourish Gunesh Dhekane∗, Shruthi K. Hiremath, Harish Haresamudram, Thomas Plötz (Georgia Institute of Technology)
Published: PACM IMWUT (Ubicomp 2025)
📄 Read PDF
Summary:
This work presents a novel, layout-agnostic approach to human activity recognition (HAR) in smart homes by leveraging natural language descriptions of sensor events instead of raw sensor data. The method, called Textual Descriptions Of Sensor Triggers (TDOST), transforms sensor activations into descriptive text that provides contextual cues for activity inference. By using textual embeddings, the proposed models generalize across different smart home layouts without retraining or adaptation, leveraging the generalizable representation space of pre-trained text encoders. Evaluations on the Orange4Home and CASAS datasets show strong performance in unseen environments, with detailed analysis highlighting the contribution of each component to overall recognition accuracy.
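To make the idea concrete, here is a minimal sketch of the TDOST recipe as described above, not the authors' implementation: sensor triggers are verbalized into text, embedded with a pre-trained sentence encoder (sentence-transformers is assumed here), and classified with an off-the-shelf classifier. The verbalize helper, the toy events, and the labels are illustrative placeholders.

```python
# Minimal sketch of the TDOST idea (illustrative, not the paper's code).
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

def verbalize(sensor_id: str, location: str, state: str) -> str:
    """Turn a raw sensor trigger into a layout-agnostic textual description."""
    return f"The {sensor_id} sensor located in the {location} reported state {state}."

# Toy sensor-trigger sequences with activity labels (purely illustrative).
train_events = [
    [("motion", "kitchen", "ON"), ("door", "fridge", "OPEN")],
    [("motion", "bathroom", "ON"), ("flush", "toilet", "ON")],
]
train_labels = ["cooking", "toileting"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any pre-trained text encoder

def embed_sequence(events):
    # Encode each verbalized trigger and average into one sequence embedding.
    texts = [verbalize(*e) for e in events]
    return encoder.encode(texts).mean(axis=0)

X_train = [embed_sequence(seq) for seq in train_events]
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

# Because the inputs are text embeddings, the same classifier can be applied to
# triggers from a different home layout without retraining.
test_seq = [("motion", "cooking area", "ON"), ("door", "refrigerator", "OPEN")]
print(clf.predict([embed_sequence(test_seq)]))
```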
📄 Past, Present, and Future of Sensor‑based Human Activity Recognition Using Wearables: A Surveying Tutorial on a Still Challenging Task
Authors: Harish Haresamudram (School of Electrical and Computer Engineering, Georgia Institute of Technology), Chi Ian Tang (Nokia Bell Labs, UK), Sungho Suh (Department of Artificial Intelligence, Korea University, Republic of Korea), Paul Lukowicz (RPTU Kaiserslautern-Landau and DFKI, Germany), Thomas Plötz (School of Interactive Computing, Georgia Institute of Technology)
Published: PACM IMWUT (Ubicomp 2025)
📄 Read PDF
Summary:
This paper discusses the evolution of wearable sensor-based Human Activity Recognition (HAR), highlighting the transition from handcrafted features to complex models, while noting that progress on benchmarks has plateaued despite advances in data and computation. It argues that the field is poised for a paradigm shift through the integration of world knowledge from foundational models. Alongside this retrospective and forward-looking analysis, the authors offer a practical tutorial to guide practitioners in building real-world HAR systems, serving as a comprehensive resource for both newcomers and experts aiming to advance the field.
📄 Thou Shalt Not Prompt: Zero-Shot Human Activity Recognition in Smart Homes via Language Modeling of Sensor Data & Activities
Authors: Sourish Gunesh Dhekane, Thomas Plötz (Georgia Institute of Technology)
Published: GenAI4HS Workshop: In Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp Companion ’25)
📄 Read PDF
Summary:
This paper introduces a novel zero-shot approach to human activity recognition (HAR) in smart homes that avoids reliance on prompting large language models (LLMs). Instead, it models both sensor data and activity labels using natural language, generating textual summaries and descriptions which are embedded via a pre-trained sentence encoder. Activities are inferred by comparing the similarity between these embeddings, enabling generalizable recognition across varied smart home layouts without requiring labeled or unlabeled training data. The method demonstrates competitive performance on six benchmark datasets, surpassing or matching state-of-the-art LLM-based methods, and also supports few-shot learning extensions. This work highlights a privacy-preserving and robust alternative to LLM-dependent HAR systems.
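As a rough illustration of this prompt-free matching step (a sketch following the summary above, not the paper's exact pipeline), the snippet below embeds a hypothetical sensor-window summary and candidate activity descriptions with a pre-trained sentence encoder and predicts the most similar label. The texts and the encoder choice are assumptions made for the example.

```python
# Minimal sketch of prompt-free zero-shot matching via sentence embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Textual summary generated from a window of sensor events (hypothetical example).
sensor_summary = ("Motion was detected in the kitchen, the fridge door opened, "
                  "and the stove sensor turned on.")

# Natural-language descriptions of the candidate activity labels (illustrative).
activity_descriptions = {
    "cooking": "A resident prepares a meal using appliances in the kitchen.",
    "sleeping": "A resident lies in bed in the bedroom with no movement elsewhere.",
    "toileting": "A resident uses the bathroom and the toilet flushes.",
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = encoder.encode(sensor_summary)
scores = {label: cosine(query, encoder.encode(desc))
          for label, desc in activity_descriptions.items()}
print(max(scores, key=scores.get))  # zero-shot prediction, no prompting or labels needed
```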
📄 VISAR: Visualization and Interpretation of Sensor-based Activity Recognition for Smart Homes
Authors: Alexander Karpekov, Sonia Chernova, Thomas Plötz (Georgia Institute of Technology)
Published: XAI Workshop: In Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp Companion ’25)
📄 Read PDF
Summary:
This paper presents VISAR, a model-agnostic visualization and debugging system for smart-home human activity recognition (HAR) that addresses data scarcity and interpretability without altering underlying pipelines. VISAR renders sensor activations as spatial–temporal replays on a home floor plan and overlays predictions from supervised, unsupervised, transfer-learning, or anomaly-detection models. By aligning events with rooms, paths, and timelines, the tool enables expert-in-the-loop validation, rapid fine-grained annotation, and precise error tracing (e.g., mislocated sensors or mislabeled routines). Two case studies demonstrate utility: (1) efficient labeling of unsupervised clusters with high agreement, revealing nuanced sub-activities, and (2) longitudinal anomaly triage that distinguishes clinically relevant behavior shifts from contextual artifacts. VISAR advances transparent, trustworthy, and deployment-ready HAR by turning black-box outputs into actionable, domain-informed insights.
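For readers curious what such a spatial–temporal replay might look like, here is a minimal, hypothetical sketch (not the VISAR tool itself) that scatters sensor activations at assumed floor-plan coordinates and colors them by activation time with matplotlib; all sensor IDs, positions, and times are made up for illustration.

```python
# Illustrative sketch of replaying sensor activations on a floor plan.
import matplotlib.pyplot as plt

# (sensor id, x, y, activation time in seconds) for one activity episode (made up).
events = [
    ("M01", 1.0, 4.0, 0),    # bedroom motion
    ("M07", 3.5, 3.0, 12),   # hallway motion
    ("M12", 6.0, 1.5, 25),   # kitchen motion
    ("D02", 6.8, 1.0, 31),   # fridge door
]

fig, ax = plt.subplots(figsize=(6, 4))
xs, ys, ts = zip(*[(x, y, t) for _, x, y, t in events])
sc = ax.scatter(xs, ys, c=ts, cmap="viridis", s=120)   # color encodes activation time
for sid, x, y, _ in events:
    ax.annotate(sid, (x, y), textcoords="offset points", xytext=(5, 5))
ax.plot(xs, ys, linestyle="--", alpha=0.5)             # movement path between sensors
fig.colorbar(sc, ax=ax, label="seconds since episode start")
ax.set_title("Sensor activations replayed on the floor plan (illustrative)")
plt.show()
```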
We look forward to engaging with the community at Ubicomp / ISWC 2025—see you there!