Within the IntelligentSecurityLab, we model, analyze, and solve a wide range of problems in which secure decisions must be made in noisy and adversarial environments. We apply rigorous tools from decision, control, optimization, and game theory to model systems, and use Machine Learning to obtain potent defense (and attack) strategies.
Applying ML and stochastic control to create adaptive security defense strategies.
Exposing system vulnerabilities and identifying stealthy attack vectors.
Building proof-of-concept libraries and applications for automated decision making.
This project applies game theory and deep reinforcement learning to protect open-source software from supply-chain attacks hidden in pull requests. By modeling the interaction between attackers and maintainers as general-sum games, the project develops potent strategies that optimize pull request assignments to prevent malicious code injection.
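The attacker–maintainer interaction above can be illustrated with a tiny general-sum matrix game. The payoffs, reviewer roles, and attacker mix below are all made up for illustration; they are not the project's actual models or numbers.

```python
# Toy general-sum game between an attacker and a maintainer (illustrative
# payoffs only, not the project's actual models). Rows: the attacker submits
# a benign or a malicious pull request. Columns: the maintainer routes the
# PR to a junior or an expert reviewer.
MAINTAINER_PAYOFF = [[0.2, 0.1],    # benign PR: expert review costs more time
                     [-1.0, 0.4]]   # malicious PR: slips past junior, caught by expert

def best_assignment(attacker_mix, payoff=MAINTAINER_PAYOFF):
    """Maintainer's pure best response to the attacker's mixed strategy."""
    expected = [sum(attacker_mix[r] * payoff[r][c] for r in range(2))
                for c in range(2)]
    return max(range(2), key=lambda c: expected[c])

# If 30% of incoming PRs are expected to be malicious, route to the expert;
# if all PRs are benign, the cheaper junior reviewer suffices.
print(["junior", "expert"][best_assignment([0.7, 0.3])])  # → expert
print(["junior", "expert"][best_assignment([1.0, 0.0])])  # → junior
```

In the general-sum setting the attacker's payoff matrix differs from the maintainer's, so the project's equilibrium strategies are richer than this one-sided best response.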
This research advances power grid resiliency by developing Graph Neural Networks to predict cascading failures and modeling Hybrid False Data Injection Attacks (FDIA) against third-party aggregators. By combining topological failure analysis with the study of adversarial data manipulation, the framework identifies critical vulnerabilities and evaluates tradeoffs between safety margins, energy storage, and operational stability.
We focus on protecting cyber-physical systems across applications such as intelligent transportation systems, manufacturing, and multi-agent systems. Our approach relies on game-theoretic models that we solve using reinforcement learning methods.
NSF CAREER NSF SaTC
In this project, we study the susceptibility of dynamic channel allocation methods -- commonly used in Software Defined Networks (SDN) -- to stealthy jamming attacks. Using Markov Decision Process (MDP) and game-theoretic frameworks, we exposed potent attacks and developed defense mechanisms that mitigate their impact.
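The MDP framing above can be illustrated with a toy channel-hopping problem solved by value iteration. The jammer profile, hop cost, and discount factor below are invented for illustration; the project's actual formulations are richer, game-theoretic ones.

```python
# Toy MDP for channel hopping under a fixed probabilistic jammer (all
# numbers are illustrative). State = current channel; action = channel to
# use next; expected reward is the un-jammed throughput minus a small
# switching penalty; the jammer hits channel 0 far more often.
JAM_PROB = [0.8, 0.2]       # made-up jammer profile
HOP_COST, GAMMA = 0.1, 0.9  # switching penalty, discount factor

def solve(iters=200):
    """Value iteration; returns the greedy policy (best next channel)."""
    V = [0.0, 0.0]
    for _ in range(iters):
        Q = [[(1 - JAM_PROB[a]) - (HOP_COST if a != s else 0) + GAMMA * V[a]
              for a in range(2)] for s in range(2)]
        V = [max(row) for row in Q]
    return [max(range(2), key=lambda a: Q[s][a]) for s in range(2)]

print(solve())  # from either channel, move to / stay on channel 1
```

Against this static jammer the defender simply camps on the safer channel; the stealthy attacks studied in the project arise precisely because a strategic jammer adapts, turning the MDP into a game.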
Security and privacy in cloud computing are critical components for various organizations that depend on the cloud in their daily operations. Customers’ data and the organizations’ proprietary information have been subject to various attacks in the past. In this project, we develop a set of Moving Target Defense (MTD) strategies that randomize the location of the Virtual Machines (VMs) to harden the cloud against attacks.
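The randomization idea behind MTD can be sketched in a few lines. The host names, VM names, and uniform-random policy below are hypothetical placeholders; the project's MTD strategies are optimized placements, not uniform shuffles.

```python
import random

# Minimal Moving Target Defense sketch: periodically migrate VMs to random
# hosts so an attacker's reconnaissance goes stale (illustrative only).
def shuffle_placement(vms, hosts, rng):
    """Assign each VM independently to a randomized host."""
    return {vm: rng.choice(hosts) for vm in vms}

rng = random.Random(0)  # seeded only for reproducibility of the demo
HOSTS = ["h1", "h2", "h3", "h4"]
placement = shuffle_placement(["web", "db", "auth"], HOSTS, rng)
print(placement)

# Under uniform placement, an attacker who located a VM in the previous
# epoch finds it on the same host with probability only 1/len(HOSTS).
```

In practice the migration schedule and placement distribution are chosen to balance attacker uncertainty against migration cost, which is what the developed MTD strategies optimize.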
AFRL
Clesstian Matala (LREU - U. of Houston), Darryl Balderas (IBM), Vu Nguyen (GM), Michael Ruth (REU - U. Buffalo), Brandon Van Slyke (IBM), Seth Richter (REU - LeTourneau), Agustin Rivera (General Motors), Sheryl Rosenthal (REU - Texas State), Daniel Haller (REU - U. Maryland), Alison Johnson (REU - Texas State), David Reynolds (Microsoft), William Johnson, Nikhil Halkude, Hether Hinze (REU - Texas State), Joseph Valdez, Bassam El Lababedi, Charles Tewksbury, Christopher Page, Christian McArthur, Jason Valdez, Hideo Goto, Joshua Tharp.
We look forward to partnering with industry and with local and federal agencies on mutually interesting research projects in cybersecurity, optimization, and data analytics.
We are seeking motivated students to join our lab.