TxHCI Seminar Speakers

Wai Tong
Wai Tong ( Assistant Professor (Upcoming) • Texas A&M University )
Friday, 11/17/23 at 12:30 PM - 1:00 PM (Central) • Fall 2023

As the field of data analysis continues to evolve, researchers are increasingly turning to immersive technologies, such as augmented reality (AR) and virtual reality (VR), to enhance the data exploration and storytelling experience. While these technologies offer a range of benefits, including larger display areas and 3D rendering capabilities, the steep learning curve of immersive visualization and the absence of benefits unique to traditional workflows, such as existing expertise and precise input, have limited their widespread adoption. In this talk, I will share my work on combining traditional workflows with immersive technology to leverage the benefits of both for data exploration and storytelling. For data exploration, I will present two techniques, PC (traditional) + VR (immersive) and paper (traditional) + AR (immersive), that achieve effective visual analysis in work tasks and everyday situations. For data storytelling, I will present a combination of mobile video filming (traditional) and AR (immersive) that simplifies the complex process of embedding data visualizations into short-form videos. Finally, I will discuss ongoing and future work on building hybrid immersive visualization systems that seamlessly integrate traditional visualization workflows with immersive technologies.


Han Yu
Han Yu ( PhD Student • Rice University )
Friday, 11/10/23 at 12:30 PM - 1:00 PM (Central) • Fall 2023

The increasing availability of time-series biobehavioral data offers immense potential for developing novel approaches that improve our understanding of human behaviors and support healthcare applications such as disease diagnosis, health monitoring, and activity recognition. Deep learning has shown promising performance in modeling such time-series data for these applications. However, challenges such as (1) a lack of high-quality labels and (2) noisy, non-stationary data sequences hinder the extraction of effective representations and the development of robust models. We aim to address these challenges by developing and evaluating novel techniques that employ self-supervised contrastive learning and generative diffusion models to leverage unlabeled samples and improve the quality of collected data. First, I will introduce LEAVES (Learning Views for Time-series data in Contrastive Learning), a framework that addresses the challenge of optimizing data augmentation in contrastive learning. Existing methods often struggle to find optimal augmentation policies with limited computational resources. To tackle this issue, our framework employs reparameterization-based differentiable data augmentations and adversarial training, allowing automatic optimization of augmentation parameters and yielding improved performance on multiple datasets with shorter training times than previous methods. Second, I will introduce a generative diffusion model that improves data quality. Prior methods usually neglect the non-stationary, multi-scale characteristics of time-series biobehavioral data. We therefore propose an adaptive wavelet transformation-based generative diffusion model that can impute missing sequences, construct high-resolution data from low-sampling-rate recordings, and forecast future sequences.
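
For a sense of the core idea behind reparameterization-based differentiable augmentation, the sketch below makes a jitter augmentation's noise magnitude a learnable parameter. It is a minimal PyTorch illustration in the spirit of the approach, with assumed shapes and names, not the LEAVES implementation.

```python
# A minimal sketch of a reparameterization-based differentiable augmentation,
# in the spirit of (but not identical to) LEAVES.
import torch
import torch.nn as nn

class DifferentiableJitter(nn.Module):
    """Gaussian jitter whose magnitude is a learnable parameter.

    Writing the noise as sigma * eps with eps ~ N(0, 1) (the
    reparameterization trick) keeps the augmentation differentiable,
    so sigma can be optimized end-to-end, e.g., adversarially
    against a contrastive loss.
    """
    def __init__(self, init_log_sigma: float = -2.0):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.tensor(init_log_sigma))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        eps = torch.randn_like(x)              # sampled noise, no gradient path
        return x + self.log_sigma.exp() * eps  # gradient flows through sigma

# Usage: generate two augmented views of a batch of time series.
aug = DifferentiableJitter()
x = torch.randn(32, 1, 128)  # (batch, channels, time), stand-in data
view1, view2 = aug(x), aug(x)
```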


Georgie Qiao Jin
Georgie Qiao Jin ( PhD Student • University of Minnesota )
Friday, 10/27/23 at 12:30 PM - 1:00 PM (Central) • Fall 2023

Virtual reality (VR) has the potential to revolutionize the way we learn and educate, enhancing and supplementing the traditional learning experience with new ways to interact with information and people. However, its full potential in education has yet to be realized, as work in this space requires resolving cutting-edge technical challenges and addressing context-specific, user-centered, and pedagogical concerns in real-world settings. In this talk, I will introduce my research, which pursues the vision of educational VR through 1) empirical investigations from a multi-stakeholder perspective to understand educational VR adoption and usage, and 2) immersive technologies that enhance social learning experiences. I will advocate key strategies to drive the future of educational VR: lowering the barrier to entry for VR creation tools, fostering collaborative and social experiences, and emphasizing community engagement as the foundation for widespread VR adoption.


Muhammad Hasham Qazi
Muhammad Hasham Qazi ( Visiting Student • Texas A&M University, Habib University )
Friday, 10/20/23 at 12:30 PM - 1:00 PM (Central) • Fall 2023

Training firefighters for real-world scenarios is not only costly but also challenging to execute comprehensively. Given the critical importance of thorough training against the backdrop of these significant expenses, there is a pressing need for innovative yet effective alternatives. This work develops a modular Virtual Reality (VR) training platform tailored to the experiential learning of firefighters. Designed to address the limitations of traditional hands-on training, the platform offers diverse and realistic fire simulation scenarios. Drawing from real-world events typically faced by firefighters in urban settings, the simulations are informed by existing literature, training manuals, and user feedback from online firefighting communities. Training on the platform exposes trainees to a wide range of potential issues and scenarios in urban environments, better preparing them for the challenges they might encounter in real-life firefighting emergencies. This project lays the foundation for developing modular simulations, providing principles that can be adapted and customized to suit specific training requirements.


Rawan Alghofaili
Rawan Alghofaili ( Assistant Professor • University of Texas at Dallas )
Friday, 10/13/23 at 12:30 PM - 1:00 PM (Central) • Fall 2023

Anyone who has witnessed the adoption of the Internet remembers the static, non-context-aware websites of the past. Compare those with the powerful engines behind the websites of today: context-aware engines equipped with machine learning and optimization algorithms that adapt and cater to their users' behavior and environment. This deep understanding of the user and their needs creates a more personalized and efficient experience. Current AR/VR systems are not quite as static as the websites of yesteryear, but they still have a long way to go before becoming as powerful and context-aware as today's web. Rawan aspires to facilitate the road to context-aware AR/VR systems that elegantly adapt their interactions to their user's behavior and environment. She will discuss her work on AR/VR adaptive navigation aids, VR environment design via visual attention, and in-situ mobile AR content authoring via 2D-to-3D curve projection.


Meng Xia
Meng Xia ( Assistant Professor • Texas A&M University )
Friday, 10/06/23 at 12:30 PM - 1:00 PM (Central) • Fall 2023

Learners differ along many dimensions (e.g., knowledge, problem-solving strategies, motivation, self-regulation skills), and they may change along these dimensions over time. Tailoring online learning to each student so that it is maximally effective remains a grand challenge. Fortunately, massive online interaction logs (e.g., video watching, problem-solving, forum discussion) are captured that reflect users' similarities and differences and can be used to personalize online learning. Data-driven interfaces (e.g., data visualizations) can serve as a bridge between AI algorithms and humans, especially learners and educators, helping them tailor the experience to their needs. In this talk, I will introduce how to develop AI algorithms that model various kinds of learning data and, at the same time, design data-driven interfaces that help learners and educators find insights in learning data and turn them into actions for personalized online learning.


Momona Yamagami
Momona Yamagami ( Assistant Professor • Rice University )
Friday, 09/29/23 at 12:30 PM - 1:00 PM (Central) • Fall 2023

Biosignal interfaces that use electromyography sensors, accelerometers, and other biosensors as device inputs hold promise for improving accessibility for people with disabilities. However, generalized models that are not personalized to each individual’s physical characteristics, such as their physical ability, wrist circumference, and skin tone, may not perform well. Individualized interfaces that are personalized to the individual and their abilities could significantly enhance the accessibility and usability of biosignal interfaces. In this talk, I discuss how continuous (i.e., 2-dimensional trajectory-tracking) and discrete (i.e., gesture) electromyography (EMG) interfaces can be personalized to the individual. For the continuous task, we used methods from game theory to iteratively optimize a linear model that mapped EMG input to cursor position with 7 participants without disabilities. We found that the participants quickly co-adapted to achieve high tracking performance. For the discrete task, we performed template matching to map personalized gestures to specific device functions with participants with upper-body motor disabilities. Our participants performed a wide variety of gestures matched to their abilities. We achieved over 85% classification accuracy with just 3 templates for 10 device functions. As biosignal interfaces become more commonly available, it is important to ensure that they perform well across a wide spectrum of users and abilities.
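
As a rough illustration of the template-matching idea for the discrete task, the sketch below assigns an incoming gesture to the nearest stored template; the feature shapes, function names, and distance metric are assumptions for illustration, not details from the study.

```python
# A minimal sketch of template matching for discrete gesture
# classification; feature shapes and the distance metric are
# assumptions, not details from the study.
import numpy as np

def znorm(x: np.ndarray) -> np.ndarray:
    """Z-normalize a feature vector so templates compare on shape, not scale."""
    return (x - x.mean()) / (x.std() + 1e-8)

def classify(sample: np.ndarray, templates: dict) -> str:
    """Return the device function whose template is nearest to the sample."""
    s = znorm(sample)
    return min(templates, key=lambda k: np.linalg.norm(s - znorm(templates[k])))

# Usage with hypothetical 64-dimensional EMG feature vectors; the talk
# reports that 3 templates per function sufficed for >85% accuracy.
templates = {"volume_up": np.random.rand(64), "volume_down": np.random.rand(64)}
print(classify(np.random.rand(64), templates))
```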


Akib Zaman
Akib Zaman ( Ph.D. Student • UT Arlington )
Friday, 04/30/21 at 12:00 PM - 1:00 PM (Central) • Spring 2021

Although smart tools are increasingly part of makerspace culture, their intended goal of supporting human creativity through technology is complicated by the diversity of user abilities, tools, and practices. The modern makerspace is a live environment, ever changing and evolving, and systems built on models of activity from such an environment are difficult to maintain over time. I focus on methods of augmenting the modern makerspace with smart tools that aid users in their work without obstructing their process. We aim to normalize the use of digitally augmented tools in creative spaces to enhance the user experience. We explore our design principles by focusing on the data collection process within the makerspace, using machine learning (ML) models to create interactions, and integrating the user into the workflow by making the system adaptable to change, which is intended to address the maintainability and repairability of the system. In this WIP talk, I will present our work on two such systems: (1) an audio-based classifier for building and refining models of activity, and (2) a tangible avatar that facilitates programming and debugging actions through capacitive touch sensing. Through this talk, I am looking to refine my PhD thesis proposal on how to evaluate the integration of ML models into intelligent systems. Specifically, I want to strengthen the motivation behind my design principles of smart-tool maintainability and repairability and tie them into the larger research areas around makerspaces and human-AI interaction.


Himani Deshpande
Himani Deshpande ( Ph.D. Student • Texas A&M University )
Friday, 04/16/21 at 12:00 PM - 1:00 PM (Central) • Spring 2021

Hand-weaving is a beloved historical craft that holds promise for many opportunities in making, from flat sheet fabrics to smart textiles. To afford new weaving experiences, we explore how 3D-printed custom weaving tools interplay with different materials, augmenting the design space of weaving. We propose novel weaving techniques enabled by 3D-printed custom tools: (1) water-soluble drafts to synchronize design intention and practice, (2) flexible warps to guide complex patterns and shape the resulting object, and (3) rigid global geometry for woven artifacts in 3D. EscapeLoom, a computational design tool, enables users to employ various parameters in their computational designs and showcases many creative possibilities that move away from the traditional definition of a loom to explore what more it can be.


Jonathan Avila
Jonathan Avila ( Ph.D. Student • UTEP )
Friday, 04/09/21 at 12:00 PM - 1:00 PM (Central) • Spring 2021

For the automated tuning of user interfaces, it can be helpful to have data on the places where users have difficulty. Detecting this is, in general, hard. For spoken dialog, whether with human agents or with dialog systems, it is usually done by soliciting explicit judgments, for example, with “on a scale from 1 to 5, how was I today?” However, this is intrusive and does not identify the specific places where dissatisfaction was felt. We propose instead to exploit the tendency of speakers to continuously indicate their satisfaction or dissatisfaction, intentionally or unintentionally, by changing the way they speak. To investigate, we collected a corpus of 147 mock merchant-customer conversations, each 1 to 3 minutes in length, including some in a scenario that prevented the speakers from reaching a mutually satisfactory outcome. We will report the results of applying machine learning models to prosodic features computed over this data. By late March, we expect to find better-than-baseline ability to classify a dialog as containing or not containing dissatisfaction, and to have identified specific prosodic configurations that are generally useful as markers of dissatisfaction.
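
To make the classification setup concrete, the sketch below trains a simple classifier on per-dialog prosodic feature vectors and reports cross-validated accuracy to compare against a chance baseline; the feature set and stand-in data are assumptions, not the study's actual pipeline.

```python
# A hypothetical sketch of the classification setup: one prosodic
# feature vector per dialog, a simple classifier, and cross-validated
# accuracy to compare against a chance baseline. The data are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(147, 12))    # 147 dialogs x 12 prosodic features (stand-in)
y = rng.integers(0, 2, size=147)  # 1 = dialog contains dissatisfaction (stand-in)

clf = LogisticRegression(max_iter=1000)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```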


Hannah Cohoon
Hannah Cohoon ( Ph.D. Candidate • UT Austin )
Friday, 04/02/21 at 12:00 PM - 1:00 PM (Central) • Spring 2021

Open science is the convergence of several interrelated trends in academia that push for accessible, inclusive, and transparent research practices and technology. Advocates of open science often present the movement as leveraging modern technology to do better research: research that embodies the values and norms open science advocates take as fundamental to good science (i.e., Mertonian norms). Funders, journals, and institutions pursue policies that necessitate the use of open science technologies, but these top-down efforts are sometimes met with resistance from researchers who perceive open science values to be at odds with their own. Some open science technologies encourage users to behave openly by design, employing nudges or persuasion tactics to promote openness. However, the introduction of new technology often leads to conflicts between developers’ and users’ expectations for the system. By studying technology-led advocacy for open science and researchers’ responses to it, we can explore the role of values in shaping the future of science practice and understand how technology undermines or supports stakeholders’ agency to enact their own values. To that end, I propose the qualitative study of an open science platform designed to change users’ behavior: the Open Science Framework. In this WIP talk, I will present the background and motivation for this study and outline a plan for collecting trace data that logs developers’ and users’ interactions with the Open Science Framework. I look forward to discussing the value and difficulties of trace data collection and analysis; I further welcome feedback on the study design.


Shelbey Rolison
Shelbey Rolison ( Ph.D. Student • UT Austin )
Friday, 03/12/21 at 12:00 PM - 1:00 PM (Central) • Spring 2021

The data I plan to present was collected in support of the NSF CAREER grant mentioned above. The goal of this area of the grant is to study how self-trackers communicate about the gathering and analysis of data for health and productivity, and to make recommendations that will be beneficial for refining personal health interventions and for healthcare writ large. This study is part of NSF’s 21st-century focus on understanding how our country can prepare for the future of work at the human-technology frontier (https://www.nsf.gov/eng/futureofwork.jsp). I just concluded the first round of data collection, consisting of 30 interviews, each approximately 45 minutes long, with personal data practitioners of varying levels of commitment and technological prowess, plus 10 hours of observational data collected while attending Zoom calls for an online personal analytics support community. I will present preliminary themes emerging from the data, including: (1) participants' frustration with, and attempts to resist, the influence of Big Tech on how meaning is created from their data; (2) how personal data may be used in a clinical interaction, and how a provider's response to it determines its possibilities for supporting patient-empowered care; (3) the perceived value of manual vs. automated tracking; and (4) how the communication practice of "show and tell" with other self-trackers supports individuals in their personal data projects.


Chen Liang
Chen Liang ( M.S. Student • Texas A&M University )
Friday, 03/05/21 at 12:00 PM - 1:00 PM (Central) • Spring 2021

Thingiverse hosts over 1.9 million 3D designs shared across the fabrication community, many of which are designed to interact with real-world objects and can be customized to personal needs. However, it is challenging for novice users to discover relevant designs from textual information, comprehend what customizable parameters mean, locate where they apply on target objects, and measure them correctly. To solve these problems, we present an interactive system that uses a graph-based structure to represent adaptation designs, improving the search and exploration process, and then guides end-users through the measurement process using a set of modular measurement methods for primitive shapes. Our system helps users without expert skills conveniently discover, adjust, and reuse adaptive designs.
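
One way to picture such a graph-based representation is the toy example below, which links a design's customizable parameters to measurements on a target object. The schema, node names, and use of networkx are illustrative assumptions, not the system's actual data model.

```python
# A toy illustration of a graph-based representation that links a design's
# customizable parameters to measurements on a target object. The schema
# and node names are hypothetical, not the system's actual data model.
import networkx as nx

g = nx.DiGraph()
g.add_node("phone_stand", kind="design")
g.add_node("slot_width", kind="parameter", unit="mm")
g.add_node("target_phone", kind="object")
g.add_edge("phone_stand", "slot_width", relation="exposes")
g.add_edge("slot_width", "target_phone", relation="measured_on",
           method="caliper across the phone's width")

# Walking the graph can tell a novice what to measure, on what, and how.
for _, obj, data in g.out_edges("slot_width", data=True):
    if data.get("relation") == "measured_on":
        print(f"Measure {obj}: {data['method']}")
```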


Xu Wang
Xu Wang ( Assistant Professor • University of Michigan )
Friday, 02/26/21 at 12:00 PM - 1:00 PM (Central) • Spring 2021

A challenge in meeting the demand for higher education and professional development is scaling these educational opportunities while maintaining their quality. My work tackles this challenge by harnessing examples from existing resources to enable the creation of scalable, high-quality educational experiences. In this talk, I will describe work that contributes insights about developing effective learning-at-scale systems by leveraging the complementary strengths of peers, experts, and machine intelligence, differentiating it from existing systems that rely solely on machines or crowds of peers. Specifically, I’ll focus on UpGrade, a technique that uses student solution examples to semi-automatically generate multiple-choice questions for deliberate practice of higher-order thinking in varying contexts. From experiments in authentic college classrooms, I show that UpGrade helps students gain conceptual understanding more efficiently and improves students' authentic task performance. Through an iterative design process with instructors, I demonstrate the generalizability of this approach and offer suggestions to improve the quality and efficiency of college instruction.


Rush Hoelscher
Rush Hoelscher ( Undergraduate Researcher • Texas A&M )
Friday, 02/05/21 at 12:00 PM - 1:00 PM (Central) • Spring 2021

Will aging individuals accept the convenience and safety features of smart home technology? According to the U.S. Census Bureau's population projections, there are currently 52 million people over 65, and almost a third do not have an internet connection. A majority of these individuals would undoubtedly prefer to maintain their independence, but many will likely encounter health issues that limit their ability to continue living on their own, with supporters who may not be able to be there full time. Something as simple as voice-commanded lights, phone calls, and entertainment could help these individuals stay safe and comfortable with limited mobility. This is where the problem arises: many seniors do not adopt life-changing smart home technology for fear of insecurity or for lack of internet access. My project has created an offline system that can control a house and pair with Alexa when internet access becomes available.


Anastazja Harris
Anastazja Harris ( Ph.D. Candidate • UT Austin )
Friday, 12/04/20 at 12:00 PM - 1:00 PM (Central) • Fall 2020

Advancing technology has increasingly automated the hiring process, with computer algorithms mechanically processing application materials and evaluating applicants’ interview responses. Recently, artificial intelligence (AI) algorithms have been used to analyze applicants’ communication during online, asynchronous interviews to predict job-related qualities, including communication skills and personality traits. Many people claim that using AI decreases hiring bias, citing algorithms as less biased than humans. However, research must address the perspective of potential applicants, as interviewee perceptions are connected to interview performance and organizational attractiveness. This presentation will share pilot data from focus group interviews that explored college student job seekers' perceptions of AI interview technology. The discussion will focus on perceptions among people with varying levels of understanding of artificial intelligence.


Kim Knight & Juan Llamas Rodriguez
Kim Knight & Juan Llamas Rodriguez ( Associate Professor & Assistant Professor • UT Dallas )
Friday, 11/20/20 at 12:00 PM - 1:00 PM (Central) • Fall 2020

The Migrant Steps Project draws inspiration from the popularity of step-tracker applications and the overabundance of migrant narratives in mainstream and social media. The project mobilizes step-tracking applications as the point where users come into contact with re-contextualized narratives and popular archives, in order to counter alarmist, xenophobic media rhetoric about “hordes” of migrants that “flow” across borders. We aim to draw attention to, and incite critical reflection on, the user’s practice of walking as the entry point to re-engaging with narratives about migration. By relying on the user’s walking as the point of interaction with digital narratives about migration, the project draws attention to the physical dimension of migration and to the importance of words and concepts in making sense of social phenomena.


Alex Berman
Alex Berman ( Ph.D. Candidate • Texas A&M University )
Friday, 11/13/20 at 12:00 PM - 1:00 PM (Central) • Fall 2020

The emergence of affordable digital fabrication technologies like 3D printing may drastically change how physical goods are created, shifting the focus of fabrication from the distribution of physical goods toward the distribution of digital designs. However, many challenges in fabricating, modifying, and creating digital designs inhibit people from utilizing 3D printing. Observations of university printing services and results from a more controlled lab study reveal how anyone can print by learning to specify 3D printing ideas (What to Print) without having to learn printing as a practice through the direct operation of machinery (How to Print). This analysis of printing services reveals that while anyone can print by collaboratively specifying ideas with printing practitioners, 3D printing newcomers face many barriers and challenges before they plan to print. Creating and utilizing ThingiPano, a large multimedia 3D printing dataset, helped inform and facilitate the development of HowDIY, a website that introduces anyone to 3D printing. An evaluation with newcomers using HowDIY revealed future directions for supporting newcomers to 3D print anywhere online.


Vinayak Krishnamurthy
Vinayak Krishnamurthy ( Assistant Professor • Texas A&M University )
Friday, 11/06/20 at 12:00 PM - 1:00 PM (Central) • Fall 2020

The synthesis of new ideas, or ideation, is fundamental to the product design process, particularly in its early phase. Early design ideation helps designers understand the design problem and the design space. This exploratory nature of ideation demands an uninhibited flow between what a designer is thinking and how the designer communicates the thought. The challenge in enabling computer-supported ideation is to create a digital environment that augments one’s cognitive capability to search, organize, and synthesize ideas. In this talk, I will tell three short stories, each of which attempts to address this challenge in a different manner. My first story will begin with gesture-based interfaces for creating, modifying, and manipulating 3D shapes. Using insights from an observational study, my second story will then explore how tacit human understanding of real-world interactions can be embedded within virtual interactions for shape deformation. For this, I will describe a geometric algorithm for extracting grasp and motion from a dynamic point cloud of a hand interacting with a virtual shape. By applying this algorithm to a virtual pottery scenario, I will demonstrate how users can determine their own strategy for reaching, grasping, and deforming a 3D shape without learning a prescribed set of gestures. Finally, in my third story, I will discuss mobile devices as creative media that enable the direct creation of 3D shape compositions comprised of swept surfaces.


Min Kyung Lee
Min Kyung Lee ( Assistant Professor • UT Austin )
Friday, 10/30/20 at 12:00 PM - 1:00 PM (Central) • Fall 2020

As artificial intelligence (AI) transforms work and society, it is ever more important to ensure AI systems are fair, responsible, and able to gain societal trust. In this talk, I argue that building procedurally fair and participatory AI is critical to achieving this vision. I will first present empirical findings on people’s experiences with, and fairness perceptions of, resource allocation algorithms, and considerations for enabling procedural fairness in AI. Then, I will present WeBuildAI, a participatory framework that enables people to build algorithmic policy for their communities. In a case study with the nonprofit 412 Food Rescue, we applied the framework to a matching algorithm that operates an on-demand food donation transportation service, in order to adjudicate equity and efficiency trade-offs.
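
As a toy illustration of adjudicating such a trade-off inside a matching objective, the sketch below blends an efficiency cost (travel time) with an equity term (time since a recipient was last served) before solving an assignment problem. All quantities, names, and the weighting scheme are hypothetical, not the WeBuildAI or 412 Food Rescue model.

```python
# A toy illustration of blending efficiency and equity in a matching
# objective; all quantities and the weighting scheme are hypothetical,
# not the WeBuildAI or 412 Food Rescue model.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
travel_time = rng.uniform(5, 60, size=(4, 4))    # donation i -> recipient j (efficiency)
days_since_served = rng.integers(0, 30, size=4)  # per recipient (equity)

alpha = 0.5  # trade-off weight, e.g., elicited from stakeholders
cost = travel_time - alpha * days_since_served[None, :]

rows, cols = linear_sum_assignment(cost)  # minimize the blended cost
print(list(zip(rows.tolist(), cols.tolist())))
```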


Maria Kyrarini
Maria Kyrarini ( Postdoctoral Research Fellow • UT Arlington )
Friday, 10/16/20 at 12:00 PM - 1:00 PM (Central) • Fall 2020

Robots have become part of our everyday lives and play several roles, such as helping workers with their duties, assisting people with impairments in activities of daily living, entertaining children and the elderly and keeping them company, and assisting in rehabilitation procedures. In industrial environments, such as assembly lines, a strong level of interaction and cooperation is reached where humans and robots must work synergistically on a specific task, with different roles and complementary abilities. However, there is a need to understand the psychological effect on the human who cooperates with a robot on a daily basis. This talk introduces a real-time framework that assesses the cognitive load of a human while cooperating with a robot on a collaborative assembly task. The framework uses multi-modal sensory data from electrocardiography (ECG) and electrodermal activity (EDA) sensors, extracts novel features from the data, and utilizes machine learning methodologies to detect high or low cognitive load. The framework was evaluated in a user study on a collaborative assembly scenario.
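
To sketch what such a detector could look like, the example below computes simple summary features over ECG/EDA windows and fits a binary classifier on stand-in data; the features, labels, and model choice are assumptions for illustration, not the framework's actual design.

```python
# A hypothetical sketch of binary cognitive-load detection from ECG/EDA
# windows; features, labels, and model choice are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(ecg: np.ndarray, eda: np.ndarray) -> np.ndarray:
    """Crude summary statistics over one sensing window (stand-in features)."""
    return np.array([
        ecg.mean(), ecg.std(),           # proxies for heart-rate level/variability
        eda.mean(), np.diff(eda).max(),  # tonic level and steepest phasic rise
    ])

rng = np.random.default_rng(1)
X = np.stack([window_features(rng.normal(size=500), rng.normal(size=500))
              for _ in range(200)])
y = rng.integers(0, 2, size=200)  # 1 = high cognitive load (stand-in labels)

clf = RandomForestClassifier().fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```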


Hedieh Moradi
Hedieh Moradi ( M.S. Student • UT Arlington )
Friday, 10/09/20 at 12:00 PM - 1:00 PM (Central) • Fall 2020

Silicone is a transformative design material found within various emerging HCI practices, including shape-changing interfaces, soft robotics, and wearables. However, workflows for designing and fabricating silicone forms require a time-intensive mold-cast-cure pipeline that limits the experiential knowledge gained from working directly with silicone. In this talk, I present a material-centric exploration of silicone and designerly workflows for creating inflatable silicone bladders. I will describe Siloseam, a creative framework that streamlines the bladder design and fabrication process, captures the tacit knowledge involved in recovering from errors, and introduces new workflows that reuse existing molds. A set of exemplar artifacts demonstrates an expanded repertoire of silicone forms that leverage various airtight seam configurations to create playful, haptic interactions. I will discuss the remaining challenges in integrating silicone with a broader range of materials and the opportunities for developing designerly workflows for other mold-and-cast processes.


Anubrata Das
Anubrata Das ( Ph.D. Student • UT Austin )
Friday, 10/02/20 at 12:00 PM - 1:00 PM (Central) • Fall 2020

While most user content posted on social media is benign, other content, such as violent or adult imagery, must be detected and blocked. Unfortunately, such detection is difficult to automate, due to high accuracy requirements, the costs of errors, and nuanced rules for acceptable content. Consequently, social media platforms today rely on a vast workforce of human moderators. However, mounting evidence suggests that exposure to disturbing content can cause lasting psychological and emotional damage to some moderators. To mitigate such harm, we investigate a set of blur-based moderation interfaces for reducing exposure to disturbing content whilst preserving moderators' ability to quickly and accurately flag it. We report experiments with Mechanical Turk workers measuring moderator accuracy, speed, and emotional well-being across six alternative designs. Our key findings show that interactive blurring designs can reduce emotional impact without sacrificing moderation accuracy and speed.
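
For intuition about how a blur-based interface can gate exposure while still permitting inspection, here is a minimal sketch that blurs an image by default and reveals only a small region on demand; the blur radius, function names, and interaction model are illustrative assumptions, not the six designs evaluated in the talk.

```python
# A minimal sketch of a blur-based moderation view: blur the whole image
# by default and unblur only a small region on demand. The radius and
# interaction model are assumptions, not the six designs from the talk.
from PIL import Image, ImageFilter

def blurred_preview(img: Image.Image, radius: int = 24) -> Image.Image:
    """Suppress disturbing detail by default with a strong Gaussian blur."""
    return img.filter(ImageFilter.GaussianBlur(radius))

def reveal_region(original: Image.Image, blurred: Image.Image, box) -> Image.Image:
    """Paste back a small unblurred window (e.g., under the cursor) for flagging.

    `box` is a (left, upper, right, lower) pixel rectangle.
    """
    out = blurred.copy()
    out.paste(original.crop(box), box)
    return out

# Usage (hypothetical file): moderators flag from the blurred view,
# revealing detail only where needed.
# img = Image.open("post.jpg")
# view = reveal_region(img, blurred_preview(img), (100, 100, 200, 200))
```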