BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Date iCal//NONSGML kigkonsult.se iCalcreator 2.20.2//
METHOD:PUBLISH
X-WR-CALNAME;VALUE=TEXT:DIAG Events
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:STANDARD
DTSTART:20241027T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RDATE:20251026T030000
TZNAME:CET
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20250330T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:calendar.29339.field_data.0@www.ugov-ricerca.uniroma1.it
DTSTAMP:20260404T190713Z
CREATED:20250417T074128Z
DESCRIPTION:Abstract:\nFine-tuning large language models (LLMs) on
  sensitive or private data poses significant privacy and security
  challenges. In this talk\, I will explore how Federated Learning
  (FL) can be used to fine-tune LLMs without jeopardizing data
  privacy. I will discuss how the FL paradigm enables training
  advanced models on distributed data while keeping the data local\,
  thus minimizing the risks associated with sharing sensitive
  information. A major focus will be on parameter-efficient
  fine-tuning techniques that enable model adaptation with minimal
  communication overhead. I will also discuss the challenges of
  integrating such techniques into FL systems and their impact on
  model performance\, privacy\, and computational efficiency. This
  approach is particularly relevant in sensitive areas such as
  healthcare\, where data privacy is of paramount
  importance.\n\nBio:\nDr. Marco Fisichella is a distinguished
  researcher in the field of AI\, specializing in clustering\,
  federated learning (FL)\, fairness\, and security. His work focuses
  on building trustworthy AI systems\, with contributions published
  in leading conferences and journals. As a member of the European
  Laboratory for Learning and Intelligent Systems (ELLIS)\, he
  collaborates with top AI researchers to advance AI research in
  Europe.\n\nCurrently\, Marco is involved in several impactful
  projects\, including CAIMed\, which focuses on AI and causal
  methods in medicine\, and the FEDCOV project\, which addresses
  privacy-preserving FL for COVID-19 data analysis. He also serves as
  Chief Scientist at the Trustworthy AI Lab at L3S Research Center\,
  where he leads efforts in privacy\, fairness\, and interpretability
  in AI systems.\n\nPreviously\, Marco worked as Director of Research
  and Development at the Otto Group\, applying AI methods to
  real-world problems such as online fraud detection and
  cybersecurity. His transition back to academia as a group leader
  reflects his commitment to advancing the theoretical foundations of
  trustworthy AI while addressing practical applications.
DTSTART;TZID=Europe/Paris:20250513T150000
DTEND;TZID=Europe/Paris:20250513T163000
LAST-MODIFIED:20250417T075040Z
LOCATION:Aula Magna del DIAG
SUMMARY:Talk 'Federated Fine-tuning of LLMs with Private Data' - Dr.
  Marco Fisichella (L3S Research Center\, Leibniz University Hannover)
URL;VALUE=URI:http://www.ugov-ricerca.uniroma1.it/node/29339
END:VEVENT
END:VCALENDAR
