Tianshi Li
Assistant Professor
I’m an Assistant Professor at Northeastern University in the Khoury College of Computer Sciences, directing the PEACH (Privacy-Enabling AI and Computer-Human interaction) Lab. I’m also a core faculty member at the Cybersecurity and Privacy Institute at Northeastern University.
Come and join us if you’re interested in designing and building human-centered solutions for privacy! More recently, my interests have focused on studying and addressing emerging AI privacy issues from a human-centered perspective. Read my recent papers (1, 2, 3, 4) to learn more. Check out my talk slides, which summarize our lab’s recent work on novel privacy threats and mitigations in LLM-powered interactive systems. Read my research statement to learn more about my doctoral dissertation on developer support for privacy.
I’m looking to hire PhD students starting in Fall 2025 with strong technical backgrounds and/or experience in human-centered privacy research. Please apply to the Khoury CS PhD program and mention my name if you’re interested in working with me.
My research interests lie at the intersection of Human-Computer Interaction (HCI) and Privacy, with a focus on AI Privacy. I strive to address the growing privacy issues in today’s digital world through a blend of human-centered problem understanding and technical problem solving. I conduct mixed-methods research to understand the privacy challenges situated in different stakeholders’ lived experiences, and I build systems and conduct computational experiments to measure, model, and tackle these human-centered problems.
News
| Date | Update |
|---|---|
| Oct 18, 2024 | Our HCOMP 2024 paper “Investigating What Factors Influence Users’ Rating of Harmful Algorithmic Bias and Discrimination” won the best paper award! |
| Sep 26, 2024 | Our PrivacyLens paper has been accepted to the NeurIPS 2024 Track on Datasets and Benchmarks. We introduce a novel framework to benchmark emerging unintended privacy leakage issues in LM agents, which also presents a method for operationalizing the contextual integrity framework with the help of LLMs. Check out our preprint and website to learn more. |
| Sep 20, 2024 | Two papers accepted at CSCW 2025! One on secret use of LLMs, and another on the ethics of LLM use in HCI research. |
| Sep 19, 2024 | I’m grateful to have received a gift grant of $50K from Google for designing human-centered privacy protection in text input methods! I also visited the Gboard team today and gave a talk titled “Navigating Privacy in the Age of LLMs: A Human-Centered Perspective.” Looking forward to more collaborations! |
| Jul 17, 2024 | Excited to share our NSF SaTC award on “Empathy-Based Privacy Education and Design through Synthetic Persona Data Generation” ($600K in total, $200K personal share). This grant is in collaboration with Prof. Toby Li (Notre Dame) and Prof. Yaxing Yao (Virginia Tech). |
| Feb 1, 2024 | Our Special Interest Group proposal titled “Human-Centered Privacy Research in the Age of Large Language Models” was accepted at CHI’24! Excited to meet people from diverse backgrounds at CHI! |
| Jan 19, 2024 | Two papers accepted at CHI’24! One on how LLMs may invade users’ privacy, and another on how LLMs may empower people to build better multimodal apps! |
| Jan 12, 2024 | Our paper on the Matcha IDE plugin was accepted at IMWUT! Try out the tool. |
| May 3, 2023 | Job search update: I’ll be joining the Khoury College of Computer Sciences at Northeastern University as an Assistant Professor in Fall 2024! Before that, I’ll spend a year in the Bay Area, working at Google Checks on privacy compliance intelligence and at Berkeley as a postdoc. Looking forward to the journey ahead! |
| Apr 20, 2023 | Our paper on the COVID-19 contact-tracing app adoption problem won a Best Research Papers 2019-2021 Award at Pervasive and Mobile Computing! |