Virginia Tech

Networks and Systems Security Lab


AI/ML Security

The rise of artificial intelligence, particularly in the form of large language models, generative AI, and federated learning, has transformed modern computing—but it has also introduced a new set of vulnerabilities and security challenges. As AI systems become increasingly embedded in critical decision-making pipelines—from healthcare and infrastructure to autonomous systems—their robustness, trustworthiness, and explainability have become essential concerns for both academia and industry.

Our group's research in AI/ML Security addresses these challenges by exploring the intersection of advanced machine learning and adversarial resilience. We study how learning systems can be attacked, manipulated, or deceived, and, more importantly, how they can be protected. This includes examining trust boundaries in federated learning environments, defending models against adversarial examples and data poisoning, and improving the reliability and transparency of complex architectures such as transformers and other large-scale models.
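To make the adversarial-example threat concrete, the short sketch below applies the classic fast gradient sign method (FGSM) to a toy PyTorch classifier. The model, input, and perturbation budget are illustrative placeholders rather than any method from our publications, and whether the predicted label actually flips depends on the model being attacked.

# Minimal, generic FGSM sketch (illustrative only, not our published method):
# perturb an input along the sign of the loss gradient so a classifier can be
# pushed toward a wrong prediction while the change stays small.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(784, 10)   # stand-in for a trained image classifier
x = torch.rand(1, 784)       # stand-in for a flattened 28x28 image
y = torch.tensor([3])        # assumed true label
epsilon = 0.05               # L-infinity perturbation budget (assumed)

x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Step each pixel by +/- epsilon along the gradient sign, then clip back to
# the valid pixel range.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())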

We focus on building explainable and resilient AI architectures that not only defend against attacks but also maintain operational robustness in uncertain, data-constrained, or adversarial conditions. Our research contributes both defensive innovations and offensive insights—revealing attack vectors that stress-test AI deployments in real-world scenarios. In doing so, we help define new benchmarks and security practices that are essential for the safe integration of AI into high-stakes environments.
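As one hedged illustration of such an attack vector, the toy NumPy sketch below shows how a single malicious participant in plain federated averaging can scale its update so the aggregated model is dragged toward an attacker-chosen target. The setup, vector sizes, and scaling rule are assumptions made for exposition and are not the specific attack studied in the publications listed below.

# Toy federated-averaging round with one boosted malicious update (assumed
# setup, for illustration only).
import numpy as np

rng = np.random.default_rng(0)
global_model = np.zeros(5)
num_clients = 10

# Honest clients push every coordinate of the model toward +1.
honest_updates = [np.ones(5) + 0.1 * rng.standard_normal(5)
                  for _ in range(num_clients - 1)]

# The attacker wants the model near -1 and scales its update by the number of
# clients so averaging does not dilute it.
malicious_target = -np.ones(5)
malicious_update = num_clients * (malicious_target - global_model)

updates = honest_updates + [malicious_update]
global_model = global_model + np.mean(updates, axis=0)

# The result lands near the attacker's target instead of the honest +1.
print("aggregated model:", np.round(global_model, 2))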

By bridging reliability engineering, secure learning, and explainable AI, our work informs the next generation of secure-by-design AI systems, ensuring they are not only powerful but also accountable, transparent, and resilient to manipulation. This has meaningful implications for the deployment of trustworthy AI in industry and policy as well as for advancing foundational understanding in the academic research community.


Related Publications

ViTGuard: Attention-aware Detection against Adversarial Examples for Vision Transformer

Authors: S Sun, K Nwodo, S Sugrim, A Stavrou, H Wang

Published in: arXiv preprint arXiv:2409.13828


Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning

Authors: S Sun, S Sugrim, A Stavrou, H Wang

Published in: IEEE Transactions on Information Forensics and Security