Illinois Tech researcher wins best paper award for graph AI security breakthrough

A study into backdoor threats targeting federated graph learning earns top honors at ACM’s cybersecurity conference.

Photo credit: Illinois Institute of Technology

A research team led by Illinois Institute of Technology has been recognized with a Best Paper Award at the Association for Computing Machinery (ACM) Conference on Computer and Communications Security.

Their paper examines the security limitations of federated graph learning (FedGL), presenting a new form of backdoor attack and a mathematically proven method of defense.

Illinois Tech, a research-focused university in Chicago, conducts work across computing, engineering, and data science disciplines. This latest contribution explores how FedGL, which is used to collaboratively train models on graph data without sharing private information, can be compromised through manipulated inputs during training.

Targeting structural gaps in collaborative AI

The study introduces an attack method called optimized distributed graph backdoor attack (Opt-GDBA). The technique embeds hidden triggers into graph data that influence the model’s behavior during and after training. The attack adapts to variations in data structures, node features, and user-specific patterns, achieving a 90 percent success rate across multiple datasets.
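To make the idea concrete, the sketch below shows what a generic graph backdoor injection looks like in code. It is a simplified illustration only, not the paper's Opt-GDBA, whose triggers are learned and adapted per client rather than fixed; the names inject_trigger, trigger_size, and target_label are hypothetical.

```python
# Illustrative sketch of a generic graph backdoor injection, assuming a
# networkx-style graph. This is NOT the paper's Opt-GDBA: Opt-GDBA learns
# its trigger's structure and features per client, while this toy version
# attaches a fixed clique. All names here are hypothetical.
import random
import networkx as nx

def inject_trigger(graph: nx.Graph, trigger_size: int = 4) -> nx.Graph:
    """Attach a small, densely connected 'trigger' subgraph to the input."""
    poisoned = graph.copy()
    base = poisoned.number_of_nodes()
    trigger = list(range(base, base + trigger_size))

    # Fully connect the trigger nodes to one another (a fixed clique).
    for i in trigger:
        for j in trigger:
            if i < j:
                poisoned.add_edge(i, j)

    # Stitch the trigger onto a couple of randomly chosen existing nodes.
    for anchor in random.sample(list(graph.nodes), k=min(2, base)):
        poisoned.add_edge(anchor, trigger[0])
    return poisoned

# A poisoning attacker would relabel each poisoned training graph with a
# chosen target class, teaching the model to associate trigger -> target.
target_label = 0  # hypothetical attacker-chosen class
clean = nx.erdos_renyi_graph(20, 0.2)
backdoored = inject_trigger(clean)
```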

Assistant Professor of Computer Science Binghui Wang, who led the project, says, “The Opt-GDBA is an optimized and learnable attack that considers all aspects of FedGL, including the graph data’s structure, the node features, and the unique clients’ information.”

Defensive framework tested for resilience and accuracy

Alongside the attack, the team proposed a defense strategy designed to detect and reject malicious inputs. The defense breaks graph data into components and uses a mathematical certification process to assess whether any segment poses a threat. In tests, the method successfully intercepted every backdoor attempt while retaining over 90 percent of legitimate model performance.
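The broad recipe described above (split the graph into parts, evaluate each part, and vote) can be sketched as follows. This is a minimal stand-in under stated assumptions, not the paper's actual certification procedure; classify_part is a hypothetical placeholder for a trained subgraph classifier, and num_parts controls the division granularity.

```python
# Minimal sketch of a divide-and-vote style certified defense, assuming
# only the high-level recipe described above. The paper's real procedure
# and bounds differ; `classify_part` is a hypothetical trained classifier
# applied to each induced subgraph.
from collections import Counter
from typing import Callable, Hashable
import networkx as nx

def partitioned_vote(graph: nx.Graph,
                     classify_part: Callable[[nx.Graph], Hashable],
                     num_parts: int = 5):
    """Classify deterministic node groups independently and majority-vote.

    Intuition behind certification: a trigger of bounded size can only
    corrupt the few parts it touches, so if the vote margin between the
    top two classes exceeds twice that number, the overall prediction
    provably cannot be flipped.
    """
    # Deterministic partition: assign nodes to groups in sorted order.
    parts = [[] for _ in range(num_parts)]
    for i, node in enumerate(sorted(graph.nodes)):
        parts[i % num_parts].append(node)

    votes = Counter(classify_part(graph.subgraph(p)) for p in parts if p)
    ranked = votes.most_common()
    top_class, n1 = ranked[0]
    n2 = ranked[1][1] if len(ranked) > 1 else 0

    # Number of corrupted parts the vote can absorb without changing.
    certified_parts = (n1 - n2) // 2
    return top_class, certified_parts
```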

Wang notes the challenge of designing protection that withstands future attack types. “The most significant challenge was developing a provable defense robust against both known attacks and future unknown threats capable of arbitrarily manipulating graph data,” he says.

The approach builds on interdisciplinary techniques, including statistical modeling and graph topology analysis, and draws from the team’s previous work on verified AI defenses.

Research collaboration across institutions

The authors of the paper include Yuxin Yang, a Ph.D. student affiliated with both Illinois Tech and Jilin University; Qiang Li, professor of computer science at Jilin University; Jinyuan Jia, assistant professor at Pennsylvania State University; and Yuan Hong, associate professor at the University of Connecticut and a former Illinois Tech faculty member.

Wang says, “What excites me most about this project is how it masterfully bridges the gap between deep theoretical rigor and practical accessibility. The provable defense mechanism is both elegant in its mathematical foundation and effective in real-world applications—while remaining comprehensible to the general public. It represents a rare and valuable achievement in AI security research.”
