I study how AI agent systems fail under adversarial conditions. Most AI security research targets the model. I focus on everything around it.
I have produced 10+ CVEs across open-source AI agent frameworks, covering sandbox escapes, shell injection, SSRF, authentication bypass, and arbitrary code execution. My empirical work includes the first controlled comparison of indirect prompt injection and instruction hierarchy violations across LangChain, MCP, and a raw API baseline (3,274 trials, 42 attack payloads), finding that task framing drives vulnerability independently of framework architecture. My current research focuses on turn-depth degradation and secure tool-call behavior in deployed agents.
B.S. Cybersecurity, University of South Florida. Joining Wells Fargo as a GCP Security Engineer August 2026. HTB OASE candidate. Background in offensive, cloud, and application security through Wells Fargo, JPMorgan, Accenture, and Honeywell.
I box. In college I founded USF's boxing club, the largest combat sports club in school history (1,200 members) and back-to-back winner of Outstanding Sports Club of the Year.