Tianshi Li
Assistant Professor

I’m an Assistant Professor at Northeastern University in the Khoury College of Computer Sciences, directing the PEACH (Privacy-Enabling AI and Computer-Human interaction) Lab. I’m also a core faculty member at the Cybersecurity and Privacy Institute at Northeastern University.
Come and join us if you’re interested in designing and building human-centered solutions for privacy! My recent work focuses on studying and addressing emerging LLM privacy issues from a human-centered perspective. Read my recent papers (1, 2, 3, 4, 5, 6) to learn more. Check out my recent talks (1, 2) for a quicker overview of our vision and research agenda.
My research interests lie at the intersection of Human-Computer Interaction (HCI), Privacy, and AI. I strive to address the increasing privacy issues in today’s digital world using a blend of human-centered problem understanding and technical problem solving. I conduct mixed-methods research to understand the privacy challenges situated in different stakeholders’ lived experiences, and build systems and conduct computational experiments to measure, model, and tackle these human-centered problems.
News
Apr 30, 2025 | HAIPS 2025 CfP is out! Very excited to co-chair the 1st Workshop on Human-Centered AI Privacy and Security at CCS 2025 in Taiwan w/ Toby Li, Yaxing Yao, and Sauvik Das! Join us by submitting your new or published work to explore the current “hypes” at the intersection of HCI, AI, and S&P. |
Jan 17, 2025 | Two papers accepted at CHI 2025! See you in Yokohama! |
Oct 18, 2024 | Our HCOMP 2024 paper “Investigating What Factors Influence Users’ Rating of Harmful Algorithmic Bias and Discrimination” won the best paper award! |
Sep 26, 2024 | Our PrivacyLens paper has been accepted to the NeurIPS 2024 Datasets and Benchmarks Track. We introduce a novel framework for benchmarking emerging unintended privacy leakage issues in LM agents; it also presents a method for operationalizing the contextual integrity framework with the help of LLMs. Check out our preprint and website to learn more. |
Sep 20, 2024 | Two papers accepted at CSCW 2025! One on secret use of LLMs, and another on ethics of LLM use in HCI research. |
Sep 19, 2024 | I’m grateful to have received a gift grant of $50K from Google for designing human-centered privacy protection in text input methods! I also visited the Gboard team today and gave a talk titled “Navigating Privacy in the Age of LLMs: A Human-Centered Perspective.” Looking forward to more collaborations! |
Jul 17, 2024 | Excited to share our NSF SaTC award on “Empathy-Based Privacy Education and Design through Synthetic Persona Data Generation” ($600K in total, $200K personal share). This grant is in collaboration with Prof. Toby Li (Notre Dame) and Prof. Yaxing Yao (Virginia Tech). |
Feb 1, 2024 | Our Special Interest Group proposal titled “Human-Centered Privacy Research in the Age of Large Language Models” is accepted at CHI’24! Excited to meet people from diverse backgrounds at CHI! |
Jan 19, 2024 | Two papers accepted at CHI’24! One on how LLMs may invade users’ privacy, and another on how LLMs may empower people to build better multimodal apps! |
Jan 12, 2024 | Our paper on the Matcha IDE plugin is accepted at IMWUT! Try out the tool. |