Abstract
This systematic literature review examines the evolving landscape of deception in video games and artificial intelligence (AI). The integration of deceptive strategies into AI, particularly within gaming environments, represents a growing area of interest with significant implications for both gameplay and broader applications, such as cybersecurity. Of 97 papers initially retrieved, 79 were excluded after analysis of their introductions revealed a focus on deception outside gaming contexts (e.g., advertising, propaganda, movement detection), leaving 18 papers directly applicable to game-based deception. Of these 18, 61% provided formal or contextual definitions of deception, while 39% relied on an assumed understanding. The review categorizes the current body of research into three primary areas: definitions of deception, methods for implementing and mitigating deception, and the frameworks used to analyze these strategies. It highlights the diversity in the conceptualization of deception, ranging from formal definitions grounded in game theory to more context-specific operational definitions. Key models such as signaling games (information asymmetry scenarios), Stackelberg games (leader–follower dynamics), and hypergames (perception-based interactions) are explored alongside AI-driven approaches such as reinforcement learning (trial-and-error learning) and generative neural networks, which simulate and detect deception in complex environments. The review identifies significant gaps in the standardization of definitions and in the practical implementation of deceptive strategies, calling for further interdisciplinary research to address these challenges. The ethical implications of deploying deceptive AI systems are also discussed, emphasizing the need for comprehensive frameworks that balance innovation with responsible usage.
Future research should prioritize standardized definitions and interdisciplinary collaboration across ethics, law, and the social sciences to address the expanding applications and ethical implications of deceptive AI technologies.