The CBA lab will attend and participate in this year’s Ubicomp / ISWC conference, which will take place October 8 through 13 in Cancun, Mexico. Come find us at the conference and let’s chat about all things Ubicomp and how we can work together.

Overview

  • 6 papers
  • 1 best paper award nomination
  • 1 tutorial
  • Participation in one symposium (with TP as panelist) – GenAI4PC
  • PACM IMWUT editorial board (TP)
  • Steering committee ISWC (HH, TP)

Papers

Leng et al., ISWC 2023a

  • Paper Title: Generating Virtual On-body Accelerometer Data from Virtual Textual Descriptions for Human Activity Recognition
  • Award: Nominated for Best Paper Award (judges will select the final awardee based on the quality of the final papers and their presentation at the conference)
  • Authors: Zikang Leng, Hyeokhyen Kwon, Thomas Ploetz
  • Type of paper: ISWC Note
  • Summary: Training a human activity recognition (HAR) model to recognize activities from sensor data requires large amounts of labeled sensor data, which are costly to obtain. In this work, we introduce a system that automatically generates synthetic labeled sensor data. We first use ChatGPT to generate textual descriptions of the various ways people can perform a given activity. These descriptions are then converted into 3D animations of human movement using a motion synthesis model. Lastly, the 3D animations are converted into virtual sensor data using signal processing techniques (a rough sketch of this last step follows below). We benchmarked our system on three common HAR datasets and found that using the sensor data generated by our system substantially improved HAR model performance.
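
For illustration, here is a minimal sketch of how a 3D joint trajectory can be turned into virtual accelerometer readings via numerical differentiation. It assumes joint positions sampled at a fixed frame rate and ignores details a full pipeline has to handle (sensor orientation tracking, calibration, noise); the function and parameter names are ours, not the paper's.

```python
import numpy as np

def virtual_accelerometer(joint_positions, fs=30.0, g=np.array([0.0, 0.0, -9.81])):
    """Approximate on-body accelerometer readings from a 3D joint trajectory.

    joint_positions: array of shape (T, 3), positions of one virtual sensor
        location (e.g. the wrist joint of a synthesized 3D animation), in meters.
    fs: frame rate of the animation in Hz.
    Returns an array of shape (T, 3): linear acceleration plus gravity.
    """
    dt = 1.0 / fs
    # Second derivative of position via central finite differences.
    velocity = np.gradient(joint_positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    # A real IMU also measures gravity; add it in the world frame here.
    return acceleration + g

# Example: a 2-second circular wrist motion at 30 fps.
t = np.linspace(0, 2, 60)
wrist = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), np.zeros_like(t)], axis=1)
print(virtual_accelerometer(wrist).shape)  # (60, 3)
```

In a full pipeline the gravity vector would also be rotated into the sensor's local coordinate frame, which requires tracking the orientation of the body segment over time.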

Leng et al., ISWC 2023b

  • Paper Title: On the Utility of Virtual On-body Acceleration Data for Fine-grained Human Activity Recognition
  • Authors: Zikang Leng, Yash Jain, Hyeokhyen Kwon, Thomas Ploetz
  • Type of paper: ISWC Note
  • Summary: Previous studies have shown that synthetic sensor data extracted from videos using IMUTube is beneficial for training complex and robust human activity recognition (HAR) models. However, IMUTube has so far only been tested on activities with large body movements, such as gym exercises, whereas many daily activities involve only subtle movements. In this work, we introduce a metric called the motion subtlety index (MSI), which measures how subtle an activity's movements are by tracking local pixel movements around the (virtual) sensor locations in a video (the idea is illustrated in the sketch below). Using MSI, we compiled a list of activities with subtle movements and evaluated IMUTube on them to explore which activities it handles well. We found a clear relationship between the subtlety of an activity and IMUTube's effectiveness: it works well on less subtle activities and poorly on very subtle ones.
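
The precise definition of MSI is given in the note; the sketch below only illustrates the underlying idea of averaging dense optical-flow magnitude in a small patch around a tracked on-body sensor location (e.g., the wrist). The patch size, flow parameters, and function names are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def motion_subtlety(frames, sensor_xy, patch=20):
    """Rough illustration of a motion-subtlety-style score.

    frames: list of grayscale video frames (H x W uint8 arrays).
    sensor_xy: list of (x, y) pixel locations of the virtual sensor
        (e.g. the wrist) in each frame, aligned with `frames`.
    Returns the mean optical-flow magnitude in a small patch around the
    sensor location; lower values indicate more subtle movement.
    """
    magnitudes = []
    for i in range(1, len(frames)):
        # Dense optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(
            frames[i - 1], frames[i], None,
            0.5, 3, 15, 3, 5, 1.2, 0)
        x, y = sensor_xy[i]
        region = flow[max(0, y - patch):y + patch, max(0, x - patch):x + patch]
        magnitudes.append(np.linalg.norm(region, axis=-1).mean())
    return float(np.mean(magnitudes))
```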

Leng et al., Ubicomp 2023a

  • Title: On the Benefit of Generative Foundation Models for Human Activity Recognition
  • Authors: Zikang Leng, Hyeokhyen Kwon, Thomas Ploetz
  • Type of paper: GenAI4PC Position Paper
  • Summary: In human activity recognition (HAR), the limited availability of annotated data presents a significant challenge. Drawing inspiration from the latest advancements in generative AI, including Large Language Models (LLMs) and motion synthesis models, we believe that generative AI can address this data scarcity by autonomously generating synthetic sensor data from text descriptions. Beyond this, we spotlight several promising research pathways that could benefit from generative AI, including the generation of benchmark datasets, the exploration of hierarchical structures within HAR, and applications in health sensing.

Ahmad et al., ISWC 2023

  • Paper Title: Challenges in Using Skin Conductance Responses for Assessments of Information Worker Productivity
  • Authors: Anam Ahmad, Thomas Ploetz
  • Type of paper: ISWC Note
  • Summary: One often hears that tackling work with “fresh eyes” after a self-initiated break can help overcome roadblocks and boost personal productivity. But when exactly should you take this break? Some productivity techniques, like the Pomodoro Technique, set a fixed interval of 25 minutes; in this work we assess the feasibility of using a person’s Electrodermal Activity (EDA) to predict opportune moments for breaks. Since EDA is a good biomarker for affective states, it is used to measure engagement in learning environments and in parent-child dyadic social interactions. To evaluate EDA, we conducted a user study in which participants worked on a word generation task with tight constraints, and we controlled for the opportunity to self-interrupt and switch to another such task for a short break. While we found no correlations between EDA and these breaks, we did find evidence that measuring EDA from the toes is significantly better than the commonly used wrist location, and we identified gaps in processing techniques (a generic sketch of how skin conductance responses are typically extracted follows below). In this paper, we share and reflect on our rationale, working towards a consensus on EDA techniques for practical settings.
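
As background (and not the processing pipeline used in the paper), skin conductance responses are commonly extracted by separating a slow tonic baseline from the faster phasic component and counting peaks in the latter. The sketch below shows one such generic approach; the window lengths and thresholds are illustrative.

```python
import numpy as np
from scipy.signal import medfilt, find_peaks

def count_scrs(eda, fs=4.0, min_amplitude=0.01):
    """Count skin conductance responses (SCRs) in a raw EDA signal.

    eda: 1-D array of skin conductance in microsiemens.
    fs:  sampling rate in Hz (e.g. 4 Hz for a typical wrist device).
    A slow tonic baseline is estimated with an ~8 s median filter; the
    remainder (the phasic component) is searched for peaks that exceed
    `min_amplitude` microsiemens and are at least 1 s apart.
    """
    win = int(8 * fs) | 1                  # odd window length for the median filter
    tonic = medfilt(eda, kernel_size=win)  # slow baseline (tonic level)
    phasic = eda - tonic                   # fast responses ride on top of it
    peaks, _ = find_peaks(phasic, height=min_amplitude, distance=int(fs))
    return len(peaks)
```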

Dhekane et al., ISWC 2023

  • Paper Title: How Much Unlabeled Data is Really Needed for Effective Self-Supervised Human Activity Recognition?
  • Authors: Sourish Gunesh Dhekane, Harish Haresamudram, Megha Thukral, Thomas Ploetz
  • Type of paper: ISWC Note
  • Summary: In recent years, the market for wrist-worn smart devices, like the Apple Watch and Fitbit, has grown exponentially, as these devices perform a plethora of day-to-day tasks very efficiently and provide an additional layer of functionality in the fitness and healthcare domains. These functionalities often depend on accurate recognition of human activities, which is enabled by machine learning algorithms. Such algorithms require large amounts of annotated data, sometimes even from the user, in order to be trained. This need for annotations is largely eliminated by self-supervised learning, yet it still assumes access to large unannotated datasets. In this work, we analyzed standard self-supervised algorithms and stress-tested them based on their unannotated data requirements (the sketch below illustrates the shape of such a stress test). Our analysis indicated that as little as ~15 minutes of data was sufficient for self-supervised pre-training to reach performance comparable to using much larger amounts of data. Thus, we empirically established that self-supervised algorithms require far less data than commonly assumed to perform effective human activity recognition in a wrist-worn smart device setting.
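
The experimental details are in the note; the sketch below merely illustrates the shape of such a data-budget stress test. The `pretrain`, `finetune`, and `evaluate` callables are hypothetical placeholders for one concrete self-supervised method and its downstream evaluation, not code from the paper.

```python
import numpy as np

def stress_test(unlabeled_windows, labeled_train, labeled_test,
                pretrain, finetune, evaluate,
                window_sec=2, budgets_min=(1, 5, 15, 60, 240)):
    """Sketch of a data-budget stress test for self-supervised HAR.

    unlabeled_windows: array of shape (N, T, C) -- N unlabeled sensor windows.
    budgets_min: amounts of unlabeled data (in minutes) to pre-train on.
    pretrain / finetune / evaluate: hypothetical callables implementing one
        self-supervised method (pretext-task pre-training, supervised
        fine-tuning on the labeled split, and test-set evaluation).
    Returns a dict mapping each data budget to the downstream score.
    """
    rng = np.random.default_rng(0)
    results = {}
    for minutes in budgets_min:
        # Number of non-overlapping windows that fit into this budget.
        n_windows = min(int(minutes * 60 / window_sec), len(unlabeled_windows))
        idx = rng.choice(len(unlabeled_windows), size=n_windows, replace=False)
        encoder = pretrain(unlabeled_windows[idx])        # self-supervised stage
        model = finetune(encoder, labeled_train)          # supervised stage
        results[minutes] = evaluate(model, labeled_test)  # e.g. macro F1
    return results
```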

Shao et al., Ubicomp 2023 (PACM IMWUT)

  • Paper Title: ConvBoost: Boosting ConvNets for Sensor-based Activity Recognition
  • Authors: Shuai Shao, Yu Guan, Bing Zhai, Paolo Missier, Thomas Ploetz
  • Type of paper: PACM IMWUT
  • Summary: Human activity recognition (HAR) is one of the core research themes in ubiquitous and wearable computing. With the shift to deep learning (DL) based analysis approaches, it has become possible to extract high-level features and perform classification in an end-to-end manner. Despite their promising overall capabilities, DL-based HAR may suffer from overfitting due to the notoriously small, often inadequate, amounts of labeled sample data that are available for typical HAR applications. In response to such challenges, we propose ConvBoost – a novel, three-layer, structured model architecture and boosting framework for convolutional network based HAR. Our framework generates additional training data from three different perspectives for improved HAR, aiming to alleviate the shortage of labeled training data in the field. Specifically, with the introduction of three conceptual layers – Sampling Layer, Data Augmentation Layer, and Resilient Layer – we develop three “boosters” – R-Frame, Mix-up, and C-Drop – to enrich the per-epoch training data by dense-sampling, synthesizing, and simulating, respectively (a generic illustration of the mixup idea follows below). These new conceptual layers and boosters, which are universally applicable to any kind of convolutional network, have been designed based on the characteristics of the sensor data and the concept of frame-wise HAR. In our experimental evaluation on three standard benchmarks (Opportunity, PAMAP2, GOTOV) we demonstrate the effectiveness of our ConvBoost framework for HAR applications based on variants of convolutional networks: vanilla CNN, ConvLSTM, and Attention Models. We achieved substantial performance gains for all of them, which suggests that the proposed approach is generic and can serve as a practical solution for boosting the performance of existing ConvNet-based HAR models. ConvBoost is an open-source project; the code can be found at https://github.com/sshao2013/ConvBoost
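
The actual boosters live in the linked repository; the snippet below is only a generic mixup sketch for batches of sensor windows, to give a flavor of the kind of per-epoch data synthesis performed in the Data Augmentation Layer. The function name and the Beta parameter are illustrative, not ConvBoost's API.

```python
import numpy as np

def mixup_batch(windows, labels, alpha=0.2, rng=None):
    """Generic mixup for a batch of sensor windows (illustrative only).

    windows: array of shape (B, T, C) -- B windows of T samples, C channels.
    labels:  one-hot array of shape (B, K).
    Each window is blended with a randomly chosen partner using a weight
    drawn from a Beta(alpha, alpha) distribution; labels are blended with
    the same weight, yielding soft targets for training.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha, size=(len(windows), 1, 1))
    perm = rng.permutation(len(windows))
    mixed_x = lam * windows + (1 - lam) * windows[perm]
    mixed_y = lam[:, :, 0] * labels + (1 - lam[:, :, 0]) * labels[perm]
    return mixed_x, mixed_y
```

Because the blended windows change every time the function is called, applying such an augmentation per epoch effectively enlarges the training set without collecting or labeling any additional sensor data.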

Haresamudram et al., Ubicomp 2023

  • Tutorial Title: Solving the Sensor-Based Activity Recognition Problem (SOAR): Self-Supervised, Multi-Modal Recognition of Activities from Wearable Sensors
  • Authors: Harish Haresamudram (Georgia Institute of Technology), Sungho Suh (RPTU Kaiserslautern-Landau and DFKI), Chi Ian Tang (Cambridge University), Paul Lukowicz (RPTU Kaiserslautern-Landau and DFKI), Thomas Ploetz (Georgia Institute of Technology)
  • Type of paper: Tutorial
  • Summary: Feature extraction lies at the core of wearable sensor-based Human Activity Recognition (HAR): the automated inference of what activity is being performed. Traditionally, the HAR community used statistical metrics and distribution-based representations to summarize the movement present in windows of sensor data into feature vectors (a minimal example of such handcrafted features is sketched below). More recently, learned representations have been used successfully in lieu of such handcrafted and manually engineered features. In particular, the community has shown substantial interest in self-supervised methods, which leverage large-scale unlabeled data to first learn useful representations that are subsequently fine-tuned to the target applications. In this tutorial, we focus on representations for single-sensor and multi-modal setups and go beyond the current de facto standard of learning representations. We also discuss the economical use of existing representations, specifically via transfer learning and domain adaptation. The tutorial will introduce state-of-the-art methods for representation learning in HAR and provide a forum for researchers from mobile and ubiquitous computing to not only discuss the current state of the field but also chart future directions, including what it would take to finally solve the activity recognition problem.
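
As a concrete baseline for what “handcrafted” means here, the sketch below computes a few simple per-channel statistics over one window of sensor data; the particular statistics chosen are illustrative, not the tutorial's material.

```python
import numpy as np

def statistical_features(window):
    """Summarize one window of sensor data into a handcrafted feature vector.

    window: array of shape (T, C) -- T samples of C channels (e.g. a 3-axis
    accelerometer). Returns a 1-D vector of simple per-channel statistics,
    the kind of representation traditionally used for HAR before learned
    (e.g. self-supervised) features became common.
    """
    feats = [
        window.mean(axis=0),                            # mean per channel
        window.std(axis=0),                             # standard deviation
        window.min(axis=0),                             # minimum
        window.max(axis=0),                             # maximum
        np.abs(np.diff(window, axis=0)).mean(axis=0),   # mean absolute difference
    ]
    return np.concatenate(feats)

# Example: a 2-second window of 3-axis accelerometer data at 50 Hz.
window = np.random.randn(100, 3)
print(statistical_features(window).shape)  # (15,)
```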