Planning for attacker entrapment in adversarial settings
Brittany Cates, Anagha Kulkarni, Sarath Sreedharan
We propose a planning framework to generate a defense strategy against an attacker operating in an environment where a defender can act without the attacker's knowledge. The defender's objective is to covertly guide the attacker to a trap state from which the attacker cannot achieve their goal. Further, the defender is constrained to achieve its goal within K steps, where K is computed as a pessimistic lower bound on the number of steps within which the attacker is unlikely to suspect a threat in the environment.
Such a defense strategy is highly useful in real-world systems like honeypots or honeynets, where an unsuspecting attacker interacts with a simulated production system believing it to be the actual production system. Typically, the interaction between an attacker and a defender is captured using game-theoretic frameworks. Our problem formulation allows us to capture it as a much simpler infinite-horizon discounted Markov Decision Process (MDP), whose optimal policy represents the defender's strategy against the attacker's actions. Through empirical evaluation, we show the merits of our problem formulation.
Full paper: Planning for attacker entrapment in adversarial settings
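To give a feel for the kind of MDP formulation the abstract describes, the sketch below solves a toy defender MDP with standard value iteration. It is not the paper's implementation: the state names, defender actions, transition probabilities, and rewards are all hypothetical, and the K-step constraint is encoded here by augmenting each state with the remaining step budget, which is only one plausible way to model it.

```python
# Minimal sketch (assumptions throughout, not the paper's method): value iteration
# on a toy defender MDP whose states are augmented with the remaining step budget k.
# The defender is rewarded only if the attacker is steered into the trap state
# before the budget runs out; reaching the attacker's goal or exhausting the
# budget is penalized.

import itertools

GAMMA = 0.95   # discount factor for the infinite-horizon MDP
K = 4          # hypothetical step budget before the attacker grows suspicious

STATES = ["s0", "s1", "s2", "trap", "attacker_goal"]        # hypothetical states
ACTIONS = ["block_path", "reroute", "wait"]                  # hypothetical defender actions

# Hypothetical transition model P[(s, a)] = [(next_state, probability), ...];
# in the paper this would reflect how the attacker reacts to the defender's moves.
P = {
    ("s0", "block_path"): [("s1", 0.8), ("s0", 0.2)],
    ("s0", "reroute"):    [("s2", 0.7), ("attacker_goal", 0.3)],
    ("s0", "wait"):       [("attacker_goal", 0.6), ("s0", 0.4)],
    ("s1", "block_path"): [("trap", 0.9), ("s1", 0.1)],
    ("s1", "reroute"):    [("s2", 1.0)],
    ("s1", "wait"):       [("attacker_goal", 0.5), ("s1", 0.5)],
    ("s2", "block_path"): [("trap", 0.6), ("attacker_goal", 0.4)],
    ("s2", "reroute"):    [("s1", 0.8), ("s2", 0.2)],
    ("s2", "wait"):       [("attacker_goal", 0.7), ("s2", 0.3)],
}

def reward(s, k):
    """Reward on entering state s with k steps left in the budget."""
    if s == "trap":
        return 1.0                       # attacker entrapped: defender succeeds
    if s == "attacker_goal" or k == 0:
        return -1.0                      # attacker wins or budget exhausted
    return 0.0

def terminal(s, k):
    return s in ("trap", "attacker_goal") or k == 0

# Value iteration over the budget-augmented state space (s, k).
V = {(s, k): 0.0 for s in STATES for k in range(K + 1)}
for _ in range(200):
    for s, k in itertools.product(STATES, range(K + 1)):
        if terminal(s, k):
            continue                     # terminal values stay at 0; reward is paid on entry
        V[(s, k)] = max(
            sum(p * (reward(s2, k - 1) + GAMMA * V[(s2, k - 1)])
                for s2, p in P[(s, a)])
            for a in ACTIONS
        )

# Greedy policy extraction: the defender's strategy at each (state, budget) pair.
policy = {}
for s, k in itertools.product(STATES, range(1, K + 1)):
    if terminal(s, k):
        continue
    policy[(s, k)] = max(
        ACTIONS,
        key=lambda a: sum(p * (reward(s2, k - 1) + GAMMA * V[(s2, k - 1)])
                          for s2, p in P[(s, a)]),
    )

print(policy[("s0", K)])   # defender's first move with the full budget available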