My research focuses on tackling misinformation using Natural Language Processing.

Currently I am working on the “PANACEA: PANdemic Ai Claim vEracity Assessment” project, which aims to create an AI-enabled, evidence-driven framework for claim veracity assessment during pandemics. Within the project I focus on (1) collecting COVID-19 related data from social media platforms and authoritative sources, (2) addressing the real-world applicability of models through the lens of their generalisability to unseen rumours, and (3) developing novel unsupervised and supervised approaches for veracity assessment that incorporate evidence from external sources.

In my PhD I focused on rumour stance and veracity classification in social media conversations. Veracity classification is the task of identifying whether a given conversation discusses a True, False or Unverified rumour. Stance classification is the task of determining the attitude of responses towards the rumour's veracity: Supporting, Denying, Questioning or Commenting. In my work I study the relations between these tasks, as patterns of support and denial can be indicative of the final veracity label. Since the input data takes the form of conversations discussing rumours, I exploit the conversation structure to enhance predictive models. I work with deep learning models, as this approach allows flexible architectures and benefits from representation learning; recurrent and recursive neural networks make it possible to model temporal sequences and tree-like conversation structures.
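To make this set-up concrete, below is a minimal, illustrative sketch in PyTorch of one way to encode a reply tree bottom-up and predict stance and veracity from it. It is not the exact model from my work; the names (ConversationNode, RecursiveRumourModel, the GRU-based composition and mean-pooling of children) are assumptions chosen for brevity.

```python
# Illustrative sketch only: a recursive encoder over a rumour conversation tree.
# Each node is encoded together with an aggregate of its children's states, and
# the root state feeds stance and veracity classifiers.

import torch
import torch.nn as nn

STANCE_LABELS = ["support", "deny", "question", "comment"]
VERACITY_LABELS = ["true", "false", "unverified"]


class ConversationNode:
    """One post in a rumour conversation tree (hypothetical data structure)."""
    def __init__(self, text_embedding: torch.Tensor, children=None):
        self.text_embedding = text_embedding  # e.g. a pooled sentence embedding
        self.children = children or []


class RecursiveRumourModel(nn.Module):
    """Bottom-up recursive encoder over the reply tree (sketch)."""
    def __init__(self, emb_dim: int, hidden_dim: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        # Combine a node's text with the summary of its children's states.
        self.compose = nn.GRUCell(emb_dim, hidden_dim)
        self.stance_head = nn.Linear(hidden_dim, len(STANCE_LABELS))
        self.veracity_head = nn.Linear(hidden_dim, len(VERACITY_LABELS))

    def encode(self, node: ConversationNode) -> torch.Tensor:
        # Leaves start from a zero summary; internal nodes pool their children.
        if node.children:
            child_states = torch.stack([self.encode(c) for c in node.children])
            summary = child_states.mean(dim=0)
        else:
            summary = torch.zeros(self.hidden_dim)
        return self.compose(node.text_embedding.unsqueeze(0),
                            summary.unsqueeze(0)).squeeze(0)

    def forward(self, root: ConversationNode):
        root_state = self.encode(root)
        # Veracity is predicted per conversation from the root state;
        # stance could analogously be predicted per node.
        return self.stance_head(root_state), self.veracity_head(root_state)


if __name__ == "__main__":
    emb_dim, hidden_dim = 50, 64
    reply = ConversationNode(torch.randn(emb_dim))
    source = ConversationNode(torch.randn(emb_dim), children=[reply])
    model = RecursiveRumourModel(emb_dim, hidden_dim)
    stance_logits, veracity_logits = model(source)
    print(stance_logits.shape, veracity_logits.shape)  # (4,) and (3,)
```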

I am also interested in the broader area of Online Harms, including tasks such as propaganda detection and multimodal hate speech detection.