
Explainable AI for Malware Analysis by Mohd Saqib
August 11 @ 12:00 pm - 1:00 pm
Title: Explainable AI for Malware Analysis
Abstract: In recent years, explainable artificial intelligence (XAI) has become critical to ensuring transparency and trust in machine learning systems, particularly in high-stakes domains like cybersecurity. This talk will begin with a basic introduction to XAI, highlighting its importance for understanding model decisions, especially in the context of malware detection. I will then introduce GAGE (Genetic Algorithm-based Graph Explainer), a novel framework designed specifically for malware analysis. GAGE uses graph-based representations of malware features and applies a genetic algorithm to generate meaningful explanations for model predictions. This approach supports both global and local interpretability of malware detection models, making it easier for security professionals to understand how detection decisions are made. The presentation will cover the theoretical foundations, implementation details, and experimental results of GAGE, showcasing its potential to enhance trust and efficacy in automated malware detection systems.
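To give a flavour of the general idea, the sketch below shows a minimal genetic-algorithm explainer: it searches for a small binary mask over an instance's features such that the masked instance still receives (nearly) the same score from the detector, trading fidelity against sparsity. This is only an illustrative toy, not the GAGE implementation from the talk; the model stub, the flat feature mask (a real graph explainer would operate on subgraphs), the fitness weighting, and all names are assumptions made for the example.

```python
# Illustrative sketch only: a genetic algorithm searching for a sparse
# feature mask that preserves a detector's prediction. All names and
# parameters here are hypothetical, not taken from GAGE.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 32                 # stand-in for nodes/edges of a malware feature graph
POP_SIZE, GENERATIONS = 60, 40
MUTATION_RATE = 0.02

# Toy stand-in for a trained malware detector: scores a feature vector.
true_weights = rng.normal(size=N_FEATURES)
def model_score(x):
    return 1.0 / (1.0 + np.exp(-x @ true_weights))

sample = rng.random(N_FEATURES)      # instance to explain
target = model_score(sample)         # prediction we want to preserve

def fitness(mask):
    """Reward fidelity to the original prediction, penalize large explanations."""
    fidelity = 1.0 - abs(model_score(sample * mask) - target)
    sparsity = 1.0 - mask.mean()
    return fidelity + 0.3 * sparsity

def crossover(a, b):
    # Single-point crossover between two parent masks.
    cut = rng.integers(1, N_FEATURES)
    return np.concatenate([a[:cut], b[cut:]])

def mutate(mask):
    # Flip each bit independently with a small probability.
    flips = rng.random(N_FEATURES) < MUTATION_RATE
    return np.where(flips, 1 - mask, mask)

population = rng.integers(0, 2, size=(POP_SIZE, N_FEATURES))
for _ in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-POP_SIZE // 2:]]   # keep the fittest half
    children = [mutate(crossover(*parents[rng.integers(0, len(parents), 2)]))
                for _ in range(POP_SIZE - len(parents))]
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("features kept as the explanation:", np.flatnonzero(best))
```

The surviving mask identifies a small set of features that, on their own, reproduce the detector's decision; the talk's framework pursues the same goal over graph-structured malware representations.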
Brief Bio: Dr. Mohd Saqib is a researcher and scholar specializing in Explainable AI (XAI), machine learning, and cybersecurity. He completed his Ph.D. at McGill University, where his research focused on developing interpretable models for malware analysis in collaboration with Defence Research and Development Canada (DRDC). Dr. Saqib also holds an M.Tech in Data Analytics from the Indian Institute of Technology (ISM) Dhanbad. He has authored several Q1 journal papers, including a comprehensive analysis of XAI for malware hunting published in ACM Computing Surveys (IF 23.8). Dr. Saqib has filed four U.S. patents in AI-related technologies through collaborations with BlackBerry and Zayed University. In addition to his research, Dr. Saqib has been a Teaching Assistant (TA) for cybersecurity, hacking, and AI courses at McGill University. He was honored with the Graduate Excellence Award at McGill and received the prestigious FRQNT scholarship. His expertise spans AI model explainability, malware detection, and the intersection of AI with critical infrastructure.