Secure AI for Cybersecurity

The sAIfer Lab investigates the vulnerabilities of cybersecurity detectors to adversarial attacks and the potential mitigations against them.

PRA Lab and SmartLab both work to improve the security of AI and to enhance cybersecurity through AI.
Machine learning technologies are now widely integrated into cybersecurity applications that prevent the spread of threats such as malicious programs and mobile applications. However, these systems can also become the target of adversarial attacks that undermine their performance at training and test time.

Testing their robustness requires bridging abstract attack methodologies, originally developed for simpler tasks such as image detection and classification, to this more complex domain. In particular, testing strategies must account for the fact that programs, applications, and the other objects handled in the cybersecurity domain must comply with strict file formats, and must remain valid and executable even after being manipulated.
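As a minimal illustration of this constraint, the sketch below (our assumption, not an existing tool) checks that a manipulated Windows PE file still exposes the DOS and PE signatures a loader expects; the input file and the overlay-padding manipulation are hypothetical examples.

```python
import struct

def is_valid_pe(data: bytes) -> bool:
    """Cheap structural check: a manipulated sample must still parse as a PE."""
    # The DOS header starts with the 'MZ' magic and is at least 64 bytes long.
    if len(data) < 64 or data[:2] != b"MZ":
        return False
    # Offset 0x3C of the DOS header stores the offset of the PE header.
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    # The PE header must start with the 'PE\0\0' signature.
    return data[pe_offset:pe_offset + 4] == b"PE\x00\x00"

# Bytes appended after the last section (the "overlay") are ignored by the
# OS loader, so this manipulation keeps the sample valid and executable.
original = open("sample.exe", "rb").read()   # hypothetical input file
perturbed = original + b"\x00" * 1024
assert is_valid_pe(perturbed)
```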

Thus, we study novel attack strategies that can be applied in cybersecurity contexts without breaking the functionality of the perturbed samples, so that they mimic realistic attacks. This research line is helping us shape the next-generation testing techniques that will be used to assess the robustness of detectors before they are placed in production; a simple sketch of the idea follows below.
These techniques also allow us to study how to develop and harden classifiers so that they are secure against such adversarial attacks, bringing safer products to the market.
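A minimal sketch of such a functionality-preserving attack, assuming a hypothetical black-box detector exposing a `score(bytes) -> float` maliciousness score: random overlay padding never alters program behavior, and the loop simply keeps the variant that lowers the score the most.

```python
import random

def evade_by_padding(sample: bytes, score, budget: int = 4096, trials: int = 50) -> bytes:
    """Black-box sketch: append random overlay bytes (a functionality-preserving
    perturbation) and keep the variant that most lowers the detector's score."""
    best, best_score = sample, score(sample)
    for _ in range(trials):
        padding = bytes(random.getrandbits(8) for _ in range(budget))
        candidate = sample + padding
        candidate_score = score(candidate)
        if candidate_score < best_score:
            best, best_score = candidate, candidate_score
    return best

# Usage with a hypothetical detector exposing a maliciousness score in [0, 1]:
# adv = evade_by_padding(open("sample.exe", "rb").read(), detector.maliciousness_score)
```

The same loop doubles as a robustness test: a detector that keeps its score high across all padded variants is, at least against this weak attacker, harder to evade.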

Active research projects

sAIfer Lab
