International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences
E-ISSN: 2349-7300 | Impact Factor: 9.907

A Widely Indexed Open Access Peer Reviewed Online Scholarly International Journal


Systemic Vulnerabilities in Autonomous LLM Agents: A Comprehensive Threat Analysis and Authorization Framework

Authors: Sunil Karthik Kota

DOI: https://doi.org/10.37082/IJIRMPS.v13.i1.232960

Short DOI: https://doi.org/hbq6wj

Country: United States



Abstract: The transition from passive Large Language Models (LLMs) to agentic systems, capable of planning, tool use, and autonomous execution, has introduced a novel and critical attack surface in modern software architecture. While traditional LLM security research has focused largely on alignment and direct jailbreaking, agent-driven systems introduce operational security risks distinct from their generative capabilities. This paper provides a formal threat model for autonomous agents, focusing specifically on cross-agent spoofing, indirect prompt injection, task hijacking, and unsafe tool invocation. We analyze the theoretical limitations of current Role-Based Access Control (RBAC) mechanisms when applied to probabilistic agents and demonstrate how the "Confused Deputy" problem manifests in non-deterministic environments. Furthermore, we propose a theoretical authorization framework, the Context-Aware Agentic Verifiable Policy (CA2P), which integrates cryptographic identity assertions with intent verification to mitigate lateral movement and privilege escalation in multi-agent environments. This analysis draws on established cryptographic principles and existing security literature to map the trajectory of agent security.
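To make the abstract's core idea concrete, the following is a minimal illustrative sketch of how cryptographic identity assertions might be combined with a tool-level policy check to reject spoofed or tampered agent requests. All names (`AGENT_KEYS`, `POLICY`, `sign_request`, `authorize`) are hypothetical and not taken from the paper; the sketch uses symmetric HMAC tags for brevity, whereas a framework like the proposed CA2P would presumably use asymmetric, verifiable assertions.

```python
import hashlib
import hmac
import json

# Hypothetical per-agent signing keys (a real deployment would use
# asymmetric keys, not shared secrets).
AGENT_KEYS = {"planner-agent": b"planner-secret", "search-agent": b"search-secret"}

# Illustrative policy: which tools each agent identity may invoke.
POLICY = {"planner-agent": {"search", "summarize"}, "search-agent": {"search"}}

def sign_request(agent_id: str, intent: str, tool: str) -> dict:
    """Produce a tool-invocation request carrying an identity assertion."""
    payload = json.dumps({"agent": agent_id, "intent": intent, "tool": tool},
                         sort_keys=True).encode()
    tag = hmac.new(AGENT_KEYS[agent_id], payload, hashlib.sha256).hexdigest()
    return {"agent": agent_id, "intent": intent, "tool": tool, "tag": tag}

def authorize(request: dict) -> bool:
    """Verify the identity assertion, then check the declared tool against policy."""
    key = AGENT_KEYS.get(request["agent"])
    if key is None:
        return False  # unknown identity: reject (counters cross-agent spoofing)
    payload = json.dumps({"agent": request["agent"], "intent": request["intent"],
                          "tool": request["tool"]}, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, request["tag"]):
        return False  # tag does not cover this payload: forged or tampered request
    # Identity verified; now enforce least privilege on the declared tool.
    return request["tool"] in POLICY.get(request["agent"], set())

req = sign_request("search-agent", "look up CVE details", "search")
print(authorize(req))  # True: identity verified and tool permitted

req["tool"] = "summarize"  # privilege-escalation attempt after signing
print(authorize(req))  # False: the tag no longer matches the payload
```

The key design point mirrored here is that authorization binds three things together: who is asking (the verified identity), what they claim to be doing (the declared intent and tool), and what policy permits. Altering any field after signing invalidates the assertion, which is what blocks the Confused Deputy pattern of a trusted intermediary being tricked into a different action.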

Keywords: LLM Security, Autonomous Agents, Indirect Prompt Injection, Cross-Agent Spoofing, Confused Deputy, Access Control, Zero Trust Architecture.


Paper Id: 232960

Published On: 2025-01-03

Published In: Volume 13, Issue 1, January-February 2025
