Adaptive regret minimization for learning complex team-based tactics

Duong D. Nguyen, Arvind Rajagopalan, Jijoong Kim, Cheng Chew Lim

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)
3 Downloads (Pure)


This paper presents an approach and analysis for decentralized cooperative control of a team of decoys executing the Honeypot Ambush tactic, in which threats are lured into a designated region where they can be easily defeated. The decoys learn to cooperate through a game-theory-based online-learning method, known as regret minimization, that maximizes the team’s global reward. The decoy agents are assumed to have physical limitations and to be subject to stringent range constraints required for deceiving the networked threats. Through an efficient coordination mechanism, the agents learn to be less greedy and allow weaker agents to catch up on their rewards, improving team performance. Such a coordination solution corresponds to convergence to a coarse correlated equilibrium. Numerical results verify that the proposed solution achieves a globally satisfactory outcome and adapts to a wide spectrum of scenarios.
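To illustrate the core idea of regret minimization named in the abstract, the sketch below implements regret matching in the style of Hart and Mas-Colell, where an agent plays each action with probability proportional to its positive cumulative regret. This is a minimal, hypothetical single-agent example against a fixed reward vector (a stand-in for a decoy's lure reward), not the paper's multi-agent algorithm or its range constraints.

```python
import random

def strategy_from_regret(cum_regret):
    """Mixed strategy proportional to positive cumulative regret (regret matching)."""
    pos = [max(r, 0.0) for r in cum_regret]
    total = sum(pos)
    if total <= 0:
        # No action is regretted yet: fall back to the uniform strategy.
        return [1.0 / len(cum_regret)] * len(cum_regret)
    return [p / total for p in pos]

def sample(probs, rng):
    """Draw an action index from a probability vector."""
    u, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if u < acc:
            return i
    return len(probs) - 1

def regret_matching(payoffs, rounds=2000, seed=1):
    """Online regret matching against a fixed environment.

    payoffs[a] is the per-round reward for action a (hypothetical
    stand-in for a decoy agent's luring reward).  Returns the final
    mixed strategy after `rounds` updates.
    """
    rng = random.Random(seed)
    n = len(payoffs)
    regret = [0.0] * n
    for _ in range(rounds):
        strat = strategy_from_regret(regret)
        a = sample(strat, rng)
        realized = payoffs[a]
        for alt in range(n):
            # Accumulate regret for not having played each alternative.
            regret[alt] += payoffs[alt] - realized
    return strategy_from_regret(regret)
```

Against a stationary environment the procedure concentrates on the best action; in self-play across a team, the empirical joint play of such regret minimizers converges to the set of coarse correlated equilibria, which is the convergence notion the abstract refers to.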

Original language: English
Article number: 08769909
Pages (from-to): 103019-103030
Number of pages: 12
Journal: IEEE Access
Publication status: Published - 2019
Externally published: Yes


  • Cooperative control
  • Multi-agent system
  • Online learning
  • Regret minimization


