An Immune Inspired Algorithm for Learning Strategies in a Pursuit-Evasion Game

Małgorzata Lucińska, Sławomir T. Wierzchoń
Copyright © 2012 | Pages: 23
DOI: 10.4018/978-1-60960-818-7.ch503

Abstract

Multi-agent systems (MAS) consist of a number of autonomous agents that interact with one another. To make such interactions successful, the agents require the ability to cooperate, coordinate, and negotiate with each other. From a theoretical point of view, such systems call for a hybrid approach involving game theory, artificial intelligence, and distributed programming. On the other hand, biology offers a number of inspirations showing how these interactions are effectively realized in real-world situations. Swarm organizations, like ant colonies or bird flocks, provide a spectrum of metaphors offering interesting models of collective problem solving. The immune system, involving complex relationships among antigens and antibodies, is another example of a multi-agent and swarm system. In this chapter an application of the so-called clonal selection algorithm, inspired by the real mechanism of the immune response, is proposed to solve the problem of learning strategies in the pursuit-evasion problem.
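As a rough illustration of how a clonal selection loop operates, the Python sketch below follows the standard CLONALG scheme: the fittest candidates are cloned in proportion to their affinity, their clones are mutated at a rate that decreases with affinity, and a few random newcomers keep the population diverse. This is a minimal sketch under generic assumptions, not the algorithm developed in the chapter; the names clonal_selection, affinity, mutate and all parameter values are hypothetical.

    import random

    def clonal_selection(init_pop, affinity, mutate, generations=50,
                         n_select=5, clone_factor=10, n_random=2):
        """Generic CLONALG-style loop (illustrative only)."""
        population = list(init_pop)
        for _ in range(generations):
            # Rank candidates by affinity (higher is better) and keep the best.
            ranked = sorted(population, key=affinity, reverse=True)
            selected = ranked[:n_select]
            clones = []
            for rank, candidate in enumerate(selected):
                # Better-ranked candidates receive more clones ...
                n_clones = max(1, clone_factor // (rank + 1))
                # ... and their clones are mutated less aggressively.
                rate = (rank + 1) / (n_select + 1)
                clones.extend(mutate(candidate, rate) for _ in range(n_clones))
            # Re-select from parents and clones, then inject random newcomers.
            pool = sorted(ranked + clones, key=affinity, reverse=True)
            population = pool[:len(init_pop) - n_random]
            population += [mutate(random.choice(init_pop), 1.0)
                           for _ in range(n_random)]
        return max(population, key=affinity)

    # Toy usage: maximize a one-dimensional "affinity" peaking at x = 2.
    best = clonal_selection(
        init_pop=[random.uniform(-5.0, 5.0) for _ in range(20)],
        affinity=lambda x: -(x - 2.0) ** 2,
        mutate=lambda x, rate: x + random.gauss(0.0, rate))

The representation of a candidate, the affinity measure, and the mutation operator are exactly the parts that a concrete application such as strategy learning has to supply.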
Chapter Preview

Introduction

Multi-agent system problems involve several agents attempting, through their interaction, to jointly solve given tasks. The central issue in such systems is an agent's conjecture about the other agents and its ability to adapt to its teammates' behavior. Due to the interactions among the agents, problem complexity can rise rapidly with the number of agents or the sophistication of their behavior. Moreover, because all agents are simultaneously trying to find an optimal strategy, the environment is no longer stationary. In real-world systems it is also necessary to address the agents' limitations, which mean they are not always capable of acting rationally. To sum up, scalability, adaptive dynamics, and incomplete information are the most challenging issues that any technique applied to multi-agent encounters has to cope with.

Game theory is already an established and profound theoretical framework for studying interactions between agents (Rosenschein, 1985). Although originally designed for modeling economic systems, game theory has developed into an independent field with solid mathematical foundations and many applications. It tries to understand the behavior of interacting agents by looking at the relationships between them and predicting their optimal decisions. Game theory offers a powerful tool; however, the issues of incomplete information and large state spaces remain hard to overcome.

Artificial immune systems (AIS) are computational systems inspired by theoretical immunology and by observed immune functions, principles, and mechanisms, applied in order to solve problems (de Castro & Timmis, 2002). The fundamental features of the natural immune system, such as distribution, adaptability, learning from experience, complexity, communication, and coordination, have led to immune algorithms being applied to a wide variety of tasks, including optimization, computer security, pattern recognition, mortgage fraud detection, and aircraft control; consult de Castro and Timmis (2002) for details.

The above-mentioned features indicate a natural parallel between immune and multi-agent systems, which suggests that the immune metaphor constitutes a compelling model for arbitrating agents' behavior. Examples of successful applications of the immune metaphor to multi-agent systems include the works of Lee, Jun, and Sim (1999), Sathyanath and Sahin (2002), and Singh and Thayer (2002), to name but a few. In this chapter an original algorithm, MISA (Multi-Agent Immune System Algorithm), is proposed for a multi-agent contest (Lucińska & Wierzchoń, 2007). The solutions presented in the above-mentioned papers, as well as the MISA algorithm, perform better than traditional techniques (e.g., dynamic programming, reinforcement learning) for the given problem domains. Despite promising results in a wide range of applications, however, many open issues remain in the field of AIS. As a relatively new perspective, the field still lacks theoretical work studying the dynamic behavior of these algorithms that would explain the results obtained by computational models.
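Since the chapter preview does not describe the contest itself, the following sketch only illustrates the kind of pursuit-evasion setting in which candidate strategies can be evaluated: a toy grid world in which two pursuers try to reach the evader's cell. All names here (run_episode, greedy_pursuer, random_evader) and all parameter choices (grid size, capture rule, episode length) are assumptions made for illustration; they do not describe the experimental setup used with MISA.

    import random

    def run_episode(pursuer_policy, evader_policy, size=10, max_steps=100):
        """Toy pursuit-evasion episode on a size x size grid.
        Returns the step at which the evader is caught, or max_steps if it
        survives the whole episode (lower is better for the pursuers)."""
        pursuers = [(0, 0), (size - 1, size - 1)]
        evader = (size // 2, size // 2)
        for step in range(1, max_steps + 1):
            # Each pursuer moves given its own position and the evader's.
            pursuers = [pursuer_policy(p, evader, size) for p in pursuers]
            if evader in pursuers:
                return step
            evader = evader_policy(evader, pursuers, size)
            if evader in pursuers:
                return step
        return max_steps

    def greedy_pursuer(pos, target, size):
        # Step one cell along the axis with the larger distance to the evader.
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        if abs(dx) >= abs(dy):
            return (pos[0] + (dx > 0) - (dx < 0), pos[1])
        return (pos[0], pos[1] + (dy > 0) - (dy < 0))

    def random_evader(pos, pursuers, size):
        # Move to a random neighbouring cell, clipped to the grid boundary.
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        return (min(size - 1, max(0, pos[0] + dx)),
                min(size - 1, max(0, pos[1] + dy)))

    print(run_episode(greedy_pursuer, random_evader))

A learning loop such as the clonal selection sketch above could then use the negated capture time returned by run_episode as the affinity of a candidate pursuer strategy.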
