YWCC Research Earns Best Paper Awards at ACM CHI 2026
Several Ying Wu College of Computing (YWCC) faculty members in the Department of Informatics and their student collaborators presented award-winning research papers at the ACM CHI 2026 conference in Barcelona, Spain, in April. The ACM (Association for Computing Machinery) CHI Conference on Human Factors in Computing Systems is the leading international gathering on human-computer interaction (HCI) and serves as the premier global forum for presenting research, innovations, and practical applications, covering topics ranging from design and usability to qualitative and quantitative studies and emerging technologies.
Four YWCC papers were among the 1,703 accepted out of 6,730 submissions to the highly selective conference, which has an average acceptance rate of about 25%.
Ramprabu Thangaraj, a third-year Ph.D. candidate in Information Systems, was the lead author of a paper titled “Crafting Remembrance Beyond the Self: Older Adults’ Digital and Material Legacies.” He and his collaborators, including his advisor, Assistant Professor Alisha Pradhan, received the Best Paper Award, typically given to the top 1% of submissions.
According to Pradhan, when people think about technologies for aging, they often imagine reminder systems, health monitoring tools, or assistive devices. But aging is not only about managing decline or receiving care. In this work, the research team broadens what counts as technology for aging by centering their research on a topic that many older adults care deeply about at a reflective stage of life: end-of-life planning, including how they want to be remembered after death, by whom, and through what kinds of physical and digital artifacts.
The work drew on interviews with older adults to gain insight into how individuals think about remembrance after death, including who is remembered (subject) and who remembers (audience), and how physical, temporal, or relational traces captured in artifacts mediate remembrance. The findings offer design implications for future custom legacy-crafting systems that accommodate diverse audiences (next-of-kin, colleagues, students, communities, and others) as well as diverse artifacts, both physical and digital.
“As older adults reflect on the memories they’ll pass on, their relationship to self and legacy can shift: it becomes less about them and more about the people they leave behind,” said Thangaraj.
“Like, Comments & Caption: A Decade of Social Media Video Caption Research (2015-2025),” authored by Huong Nguyen, a first-year Ph.D. candidate in Information Systems, along with collaborators including Assistant Professors Mark Cartwright and Sooyeon Lee, received an Honorable Mention Award for research on social media captioning for Deaf and Hard of Hearing (DHH), neurodivergent, and multilingual viewers.
According to the study, which reviewed 36 peer-reviewed papers published between 2015 and 2025, although video has become a dominant mode of content on platforms such as YouTube, TikTok and Instagram, captioning research has often overlooked core platform constraints and critical accessibility needs that could be addressed through direct feedback from the viewers and creators it serves. Building on these insights, the researchers propose a Participatory Captioning framework and suggest design implications that highlight future research directions for social media video captioning.
The proliferation of AI-generated, sexually explicit deepfake images of high school students and, in some cases, teachers inspired Tongxin (Lily) Li, a second-year Ph.D. candidate in Information Systems, her advisor, Associate Professor Donghee (Yvette) Wohn, and their collaborators to investigate “Understanding Educators’ Perceptions of AI-Generated Non-Consensual Intimate Imagery.”
The use of AI tools to inappropriately alter images of fellow students and teachers has created an emerging social problem that middle and high school administrators identify as critical, yet they lack the systematic policies and training to address it due to limited resources, unclear legal boundaries, and limited knowledge of AI.
The research team conducted an interview study with 20 U.S. educators to gather data on their attitudes, experiences and practices related to AI-generated non-consensual intimate imagery (AIG-NCII). Participants expressed concerns about both students’ and their own vulnerability to the harm such imagery can inflict on targeted individuals.
The paper’s findings contribute to interactive educational tool design, curriculum design and policymaking, especially regarding the need for multi-stakeholder strategies to address these issues effectively.
Shiva Mayahi, a second-year Ph.D. candidate in Information Systems advised by Assistant Professor Nathan Malkin, studied how mobile app developers approach creating privacy policies in a paper titled “Tinker, Tailor, Trust: How Developers Create Privacy Policies With and Without AI.”
Despite the complexity of the process, little was known about how privacy policies are created and validated. Mayahi and the research team interviewed 20 developers across five regions and observed them building a privacy policy live using a large language model (LLM). The study found that many developers ask an LLM to write a privacy policy without a complete understanding of their apps’ behaviors, including those of embedded third-party software development kits (SDKs). As a result, many developers rely on app store submission to determine whether their privacy policies are adequate. The problem, according to Mayahi and Malkin, is that neither Google Play nor the App Store is known to check the content of privacy policies.
“All of this suggests that, until a privacy policy has been verified, we need to treat it like any other (potentially) vibe-coded artifact: with suspicion,” the team wrote.