Han Yu
Time
Friday, 11/10/23 at 12:30 PM - 1:00 PM (Central)
Title
Enhancing Data Quality and Representation: Novel Approaches in Contrastive Learning and Generative Diffusion Models for Time-Series Biobehavioral Data
Abstract
The increasing availability of time-series biobehavioral data offers immense potential for improving our understanding of human behavior and for building healthcare applications such as disease diagnosis, health monitoring, and activity recognition. Deep learning has shown promising performance in modeling such time-series data for these applications. However, challenges such as (1) a lack of high-quality labels and (2) noisy, non-stationary data sequences hinder the extraction of effective representations and the development of robust models. We aim to address these challenges by developing and evaluating novel techniques that employ self-supervised contrastive learning and generative diffusion models to leverage unlabeled samples and improve the quality of the collected data.
First, I will introduce LEAVES (Learning Views for Time-series data in Contrastive Learning), a framework that addresses the challenge of optimizing data augmentation in contrastive learning. Existing methods often struggle to find optimal augmentation policies under limited computational budgets. To tackle this issue, our framework employs reparameterization-based differentiable data augmentations together with adversarial training, which allows the augmentation parameters to be optimized automatically. This approach yields improved performance on multiple datasets and shorter training times than previous methods.
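As a rough, hypothetical illustration of this idea (not the LEAVES implementation itself), the sketch below makes a single augmentation parameter, a jitter scale, differentiable through reparameterization and updates it adversarially against a contrastive encoder; the encoder, loss, and data are toy placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableJitter(nn.Module):
    """Gaussian jitter whose strength is learnable.

    The noise is reparameterized as eps * sigma, so gradients flow back to
    log_sigma and the augmentation strength can be optimized directly.
    """
    def __init__(self, init_log_sigma=-2.0):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.tensor(init_log_sigma))

    def forward(self, x):
        eps = torch.randn_like(x)
        return x + eps * self.log_sigma.exp()

def nt_xent(z1, z2, temperature=0.5):
    """Standard normalized-temperature cross-entropy contrastive loss."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2N, d)
    sim = z @ z.t() / temperature                       # pairwise similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))          # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy 1-D conv encoder for sequences shaped (batch, channels, time).
encoder = nn.Sequential(nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
                        nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 32))
augment = DifferentiableJitter()

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_aug = torch.optim.Adam(augment.parameters(), lr=1e-3)

x = torch.randn(8, 1, 128)                              # fake unlabeled batch

# The encoder minimizes the contrastive loss on two augmented views ...
loss = nt_xent(encoder(augment(x)), encoder(augment(x)))
opt_enc.zero_grad(); loss.backward(); opt_enc.step()

# ... while the augmentation parameters ascend the same loss (adversarial
# update), pushing the views to stay challenging for the encoder.
adv_loss = -nt_xent(encoder(augment(x)), encoder(augment(x)))
opt_aug.zero_grad(); adv_loss.backward(); opt_aug.step()
```

In practice the same reparameterization trick extends to a whole pipeline of augmentations, so their strengths are tuned jointly during training instead of being searched over by hand.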
Second, I will present a solution for improving data quality with a generative diffusion model. Prior approaches often neglect the non-stationary, multi-scale characteristics of time-series biobehavioral data when processing them with deep learning methods. We therefore propose an adaptive wavelet-transformation-based generative diffusion model that can impute missing sequences, reconstruct high-resolution data from recordings with low sampling rates, and forecast future sequences.
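The snippet below is only a conceptual sketch of the two ingredients, not the proposed model: it decomposes a toy non-stationary signal into multi-scale wavelet coefficients (using PyWavelets' fixed `db4` wavelet rather than an adaptive transform) and applies the noise-injection side of a diffusion process to those coefficients; the learned denoising network that would reverse the noising, and thus perform imputation or super-resolution, is omitted.

```python
import numpy as np
import pywt
import torch

# Toy non-stationary signal: a chirp whose frequency drifts over time.
t = np.linspace(0, 10, 1024)
signal = np.sin(2 * np.pi * (0.5 + 0.3 * t) * t)

# A multi-level discrete wavelet transform separates the signal into coarse
# (approximation) and fine (detail) coefficients, exposing its multi-scale
# structure.
coeffs = pywt.wavedec(signal, 'db4', level=3)   # [cA3, cD3, cD2, cD1]

# Forward diffusion on the coefficients: gradually add Gaussian noise under a
# simple linear variance schedule. A learned denoiser would be trained to
# reverse these steps; that network is not shown here.
betas = torch.linspace(1e-4, 2e-2, steps=100)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

def diffuse(c, step):
    """Return a noised version of coefficient array c at a given diffusion step."""
    c = torch.as_tensor(c, dtype=torch.float32)
    a = alphas_cum[step]
    return torch.sqrt(a) * c + torch.sqrt(1.0 - a) * torch.randn_like(c)

noisy_coeffs = [diffuse(c, step=50).numpy() for c in coeffs]

# The inverse transform maps (denoised) coefficients back to the time domain,
# e.g. after the reverse process has filled in missing spans.
reconstructed = pywt.waverec(noisy_coeffs, 'db4')
print(signal.shape, reconstructed.shape)
```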
Bio
Han Yu is pursuing his Ph.D. in Electrical and Computer Engineering at Rice University, where he works with Dr. Akane Sano in the Computational Wellbeing Group and the Rice Digital Health Initiative. His research interests lie at the intersection of deep learning and human health. Specifically, he designs advanced deep learning techniques, such as representation learning and generative AI, to diagnose a spectrum of physical and mental health issues. Han's work has been recognized at venues including ACM IMWUT, Machine Learning for Health (ML4H), the NeurIPS Learning from Time Series for Health (TS4H) Workshop, Affective Computing and Intelligent Interaction (ACII), and IEEE Biomedical and Health Informatics.