Illinois Tech researchers introduce socio-technical approach to AI threat modeling
Researchers from Illinois Tech have helped create STRIFE, a comprehensive framework for assessing the potential threats posed by AI systems.
Published in the International Journal of Intelligent Information Technologies, the research will help developers, engineers, and users preemptively identify unintended consequences of AI systems.
The paper provides a methodology for identifying and addressing AI threats proactively throughout the lifecycle of the system. Threats could include biased outcomes, privacy violations, psychological harm, mass surveillance, or environmental harm.
Ann Rangarajan, Assistant Professor of Information Technology and Management, and Saran Ghatak, Professor and Chair of the Department of Humanities, Arts, and Social Sciences, worked with external collaborators on the research.
“AI systems operate within complex social, organizational, and cultural contexts that fundamentally shape how risks emerge,” Ghatak comments. “STRIFE recognizes that threats to AI systems often originate not from technical failures alone, but from the broader ecosystem of human users, institutional policies, and societal expectations.”
The framework brings together computer scientists, social scientists, ethicists, and legal scholars, enabling interdisciplinary research on AI threats.
“While the NIST AI Risk Management Framework provides essential guidance for trustworthy AI, practitioners often struggle with how to implement such principles in specific contexts,” Rangarajan adds.
“Our framework systematically guides threat identification across technical dimensions such as safety and transparency, ethical considerations including trust and inclusion, and legal factors such as reasonableness and intellectual property, because AI risks emerge from the complex interactions between technology, human behavior, and societal structures.”
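To make the idea of dimension-based threat identification concrete, here is a minimal sketch in Python. It is purely illustrative and assumes a simple checklist structure; the dimension names are taken from the quote above, but the prompts, data structures, and the review function are hypothetical and do not represent the published STRIFE methodology.

```python
# Hypothetical sketch of dimension-based threat identification, loosely
# inspired by the approach described above. The prompts and structure are
# illustrative assumptions, not the authors' published framework.
from dataclasses import dataclass, field


@dataclass
class Dimension:
    """One axis of threat identification (e.g., technical, ethical, legal)."""
    name: str
    prompts: list[str] = field(default_factory=list)


# Example dimensions drawn from the quote: technical, ethical, and legal.
DIMENSIONS = [
    Dimension("technical", [
        "Could the system produce unsafe or biased outcomes?",
        "Are its decisions transparent enough to audit?",
    ]),
    Dimension("ethical", [
        "Could the system erode user trust?",
        "Does it exclude or disadvantage any group?",
    ]),
    Dimension("legal", [
        "Would the system's behavior meet a reasonableness standard?",
        "Could its training data or outputs infringe intellectual property?",
    ]),
]


def review(system_name: str) -> None:
    """Print a threat-identification checklist for a given AI system."""
    print(f"Threat review: {system_name}")
    for dim in DIMENSIONS:
        print(f"\n[{dim.name}]")
        for prompt in dim.prompts:
            print(f"  - {prompt}")


if __name__ == "__main__":
    review("loan-approval model")  # hypothetical system under review
```

In a checklist of this kind, each dimension would be reviewed at every stage of the system's lifecycle, so that socio-technical risks are surfaced before deployment rather than after harm occurs.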