Research
Overview
In the Computational Behavior Analysis Lab, we study and develop methods for automatically capturing and analyzing human behaviors. Capturing means deciding which sensors we can use to record those behaviors; by human behaviors we mean any behaviors that are linked to people's physical movements.
We work on applied machine learning problems in three sub-areas: human activity recognition (HAR) using wearables, HAR using nearables, and vision-based activity recognition. Wearables are smart electronic devices, such as smartphones and smartwatches, that can be worn on the body. Nearables are sensors placed near humans, usually in a smart-home setting. Although these sub-areas call for different machine learning methods, they share a common challenge: the lack of labeled data. In ubiquitous computing settings in particular, data annotation is extremely expensive because of privacy and ethics concerns. Hence, we are motivated to find solutions that overcome the scarcity of labeled data.
Funding
We are grateful to our funders for supporting our work:
- National Science Foundation (NSF)
- National Institutes of Health (NIH)
- Georgia Tech
- Emory University
- CISCO
- Ford
- Hitachi
- Intel
- KDDI
- NVIDIA
- Optum
- Oracle
- Siemens
Machine Learning for Wearables-Based Human Activity Recognition (HAR)
One way to combat the lack of labeled data is to reduce the amount of labeled data needed for model training. Self-supervised learning is an active research area aimed at reducing this need. It involves pre-training a model on pretext tasks that let it learn representations of the data, which then help with the real (downstream) task. The pretext task uses large amounts of unlabeled data, which is relatively easy to collect. An example of a pretext task is masked reconstruction: certain parts of the unlabeled data are masked, and the model is trained to reconstruct the masked parts. After pre-training, the model is fine-tuned on the real task using labeled data. The key idea is that self-supervised models can achieve performance comparable to fully supervised ones while using far less labeled data. In situations where labeled data is extremely limited, self-supervised models can even outperform fully supervised ones.
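The masked-reconstruction pretext task described above can be sketched in a few lines of NumPy. This is a minimal illustration, not our actual pipeline: the sensor window is synthetic, the shapes are hypothetical, and a single linear map trained by gradient descent stands in for the encoder network that would be pre-trained in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one window of 3-axis accelerometer data
# (hypothetical shapes; real windows come from a wearable sensor stream).
T, C = 50, 3
window = np.sin(np.linspace(0, 8 * np.pi, T))[:, None] * rng.uniform(0.5, 1.5, C)

# Pretext task: hide a random ~15% of timesteps.
mask = rng.random(T) < 0.15
mask[5] = True                      # ensure at least one masked entry for the demo
corrupted = window.copy()
corrupted[mask] = 0.0

x = corrupted.ravel()               # model input: the corrupted window
y = window.ravel()                  # reconstruction target: the original window
m = np.repeat(mask, C)              # the loss is scored on masked entries only

# Toy "model": a single linear map trained by gradient descent to fill in
# the masked entries from the visible ones.
W = np.zeros((x.size, x.size))
lr = 0.5 / float(x @ x)             # step size chosen so the updates converge
for _ in range(500):
    err = (W @ x - y) * m           # error only where the signal was hidden
    W -= lr * np.outer(err, x)

masked_mse = float(((W @ x - y)[m] ** 2).mean())
```

After pre-training on many such windows, the learned representation (here, the weights `W`) would be fine-tuned on the labeled downstream activity-recognition task.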
Members:
- Harish
- Sourish
- Megha
Papers:
- Assessing the State of Self-Supervised Human Activity Recognition Using Wearables
- Contrastive Predictive Coding for Human Activity Recognition
- Masked reconstruction based self-supervision for human activity recognition
Machine Learning for Human Activity Recognition (HAR) in Smart Homes
Nearable sensing is another sub-area we are interested in. Instead of attaching wearable sensors to a person's body, we place multiple sensors throughout a space to detect human activity. We focus on activity monitoring and behavior analysis by observing the routines of residents in smart homes with nearable sensors. One example problem is bootstrapping a HAR model for smart homes. We used data collected by dozens of motion, door, and temperature sensors in a house. The model selects a small subset of the unlabeled sensor data and asks the user to label it; this decision-making process is the key idea of active learning. Using the data collected from the sensors, we trained a model to detect multiple human activities in homes.
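The query loop at the heart of active learning can be sketched as follows. This is a toy illustration under stated assumptions: the features and labels are synthetic, a logistic-regression model stands in for the actual HAR model, and uncertainty sampling (querying the point the model is least sure about) is just one common selection criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for featurized smart-home sensor windows
# (e.g., counts of motion/door-sensor firings) with two hypothetical classes.
n, d = 200, 5
X = rng.standard_normal((n, d))
true_w = rng.standard_normal(d)
y = (X @ true_w > 0).astype(float)         # oracle labels (the "user")

def train(Xl, yl, steps=300, lr=0.1):
    """Fit logistic regression by gradient descent on the labeled set."""
    w = np.zeros(Xl.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xl @ w)))
        w -= lr * Xl.T @ (p - yl) / len(yl)
    return w

# Start with a handful of labeled examples; the rest form the unlabeled pool.
labeled = list(range(5))
pool = list(range(5, n))

for _ in range(10):                         # 10 query rounds
    w = train(X[labeled], y[labeled])
    p = 1.0 / (1.0 + np.exp(-(X[pool] @ w)))
    # Uncertainty sampling: query the pool point closest to p = 0.5.
    i = pool[int(np.argmin(np.abs(p - 0.5)))]
    labeled.append(i)                       # the user labels this sample
    pool.remove(i)

w = train(X[labeled], y[labeled])
accuracy = float(((X @ w > 0) == (y == 1)).mean())
```

The point is that after only a few informative queries, the model is trained on a small labeled set chosen by the model itself rather than on an exhaustively annotated recording.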
Members:
- Shruthi
- Yasutaka
- Sri
Papers:
Behavior Analysis for Health Assessments
The health domain is another popular topic in human activity analysis, and snack detection is one potential application. IMU sensors embedded in earbuds collect movement data while people eat snacks, and a model can be trained to detect snacking behavior. The dataset is collected in a semi-naturalistic setting where participants choose the food they snack on, and annotations are added manually. Such a detection system could in turn be used to analyze people's mental and physical health. Beyond collecting data from participants directly, some of our research leverages already-available data. One project examines students' mental well-being using various kinds of existing sensing data; we are investigating how behaviors correlate with different aspects of mental well-being and what contextual data can tell us. Future research will focus on applying the snacking detection system to other eating-centric applications and on campus problems such as food insecurity.
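A typical first step in building such an IMU-based detector is segmenting the continuous stream into fixed-length windows and extracting simple per-window features. The sketch below is only illustrative: the stream is synthetic, the 50 Hz rate and window sizes are assumptions, and mean/standard-deviation features are just a common HAR baseline, not necessarily the features used in our system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a continuous 3-axis IMU stream from an earbud
# (hypothetical 50 Hz sample rate; one minute of data).
rate_hz = 50
stream = rng.standard_normal((60 * rate_hz, 3))

def sliding_windows(signal, win_s=2.0, hop_s=1.0, rate=rate_hz):
    """Cut the stream into overlapping fixed-length windows."""
    win, hop = int(win_s * rate), int(hop_s * rate)
    starts = range(0, len(signal) - win + 1, hop)
    return np.stack([signal[s:s + win] for s in starts])

def featurize(windows):
    """Per-window mean and std of each axis, a common HAR baseline."""
    return np.concatenate([windows.mean(axis=1), windows.std(axis=1)], axis=1)

W = sliding_windows(stream)   # (num_windows, samples_per_window, axes)
F = featurize(W)              # one feature row per window
```

Each feature row would then be paired with the manual annotations and fed to a classifier that labels windows as snacking or not.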
Members:
- Mehrab
- Harish