Welcome!

Hi! This is Elena, a Ph.D. student in Data Science at the University of Rome Tor Vergata, and I am part of the Human Centric ART group.

My research revolves around Trustworthy AI principles applied to language models. I have been working especially on robustness, security, interpretability, and fairness. Recently, I have been focusing on studying and mitigating privacy risks in LLMs.

Latest and Highlights

ACL 2025 has been a blast! An exciting conference, interesting papers, and a great chance to exchange ideas in a vibrant environment.

I am proud to say our model editing technique Private Memorization Editing (PME) captured the attention of those interested in efficient and precise editing of unwanted behaviours in LLMs! In the paper, we propose a model editing technique to tackle privacy issues, guided by precise knowledge of the training data: the privacy of data owners is preserved without impacting model utility.

Elena Sofia Ruzzetti

  • (2022–today) Ph.D. Data Science, Rome Tor Vergata
  • (2020–2022) MSc Computer Science, Rome Tor Vergata

News

🚀 June 30, 2025

Fabio Massimo Zanzotto and I are giving a tutorial at IJCNN 2025! We will discuss memorization in Transformer-based Large Language Models and more.

🎉 May 2025

Two papers accepted at ACL 2025! Check out the papers here:

Reach out in Vienna to have a chat! 😄 Excited to see you there!

... see all News