09/Apr/2025 14:00 - 16:00
Povo 2, atrio
Ph.D. Poster Session
PhD students from the doctoral programs will present their research projects and answer questions from attendees.
- Doctoral Program in Information Engineering and Computer Science – IECS
- Doctoral Program in Industrial Innovation – IID
- National PhD in Artificial Intelligence – AI
1 – Alghisi Simone IECS
DyKnow: Dynamically Verifying Time-Sensitive Factual Knowledge in LLMs
3 – Becker Brum Henrique IECS
Mitigating NIDS Saturation through Packet Pre-Filtering using PDP devices
4 – Bertolazzi Leonardo IECS
The Validation Gap: A Mechanistic Analysis of How Language Models Compute Arithmetic but Fail to Validate It
5 – Bortolotti Samuele IECS
Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens
7 – Breschi Lorenzo IID
Enabling Fundamental Kernels for Ensemble Climate Simulations in the Era of AI Accelerators
8 – Camporese Maria IECS
Using ML filters to help automated vulnerability repairs: when it helps and when it doesn’t
9 – Casagranda Gioele & Vallero Marzio IECS
A Joint Effort for Solving Radiation Damage in Superconducting Quantum Computers
11 – Dall’Asen Nicola AI
Retrieval-enriched zero-shot image classification in low-resource domains
12 – De Luca Vincenzo Marco IECS
Boost Human-Machine Teams performance through Tempo-Relational World Representation
14 – Lekeufack Foulefack Rosmael Zidane IECS
Domain Knowledge enhanced vulnerability Detection in Source Code
16 – Nardi Davide IECS
An Anatomy-Aware Shared Control Approach for Assisted Teleoperation of Lung Ultrasound Examinations
17 – Paramitha Ranindya IECS
On the acceptance by code reviewers of candidate security patches suggested by Automated Program Repair tools
18 – Roccabruna Gabriel IECS
Will LLMs Replace the Encoder-Only Models in Temporal Relation Classification?
19 – Shu Yan IECS
OmniGeo: Interactive Vision Language Models for Multi-Granularity, Multi-Sensor and Multi-Scale Earth Observation
21 – Vincze Mátyás IECS
SMoSE: Sparse Mixture of Shallow Experts for Interpretable Reinforcement Learning in Continuous Control Tasks