Since my Ph.D. studies my research interests have been diverse but related. As a Ph.D. student I studied solving the Constraint Satisfaction Problem with Evolutionary Algorithms and other, similar heuristics from the research area of Computational Intelligence. Additionally, I compared the performance of those problem-solving methods with more classical algorithms designed for the same task, in particular those based on the Chronological Backtracking paradigm.
After I finished my Ph.D. thesis I expanded my interests into other research areas. In particular, I studied ways and mechanisms to distribute Agent-based Simulations. Distributed Computing, enhanced with techniques from Artificial Intelligence, became my primary focus while working on the NEW TIES and MWGrid projects. I remained interested in my previous research, though, and continued to study and publish methods for solving the Constraint Satisfaction Problem, now with a wider array of methods from Artificial Intelligence. I also became interested in other related fields of study: the use of Artificial Intelligence in (serious) games, Virtual Worlds, Virtualisation in Distributed Computing, Peer-to-Peer Networking, Visualisation and Statistical Analysis of experimental results, Parallelism and Artificial Intelligence in general, and, through the NEW TIES and MWGrid projects, Computational Sociology and Computational Archaeology.
The Constraint Satisfaction Problem (CSP) belongs to the class of constrained problems. The CSP is a fundamental problem in Artificial Intelligence with both practical and theoretical relevance to Computer Science at large. The theoretical relevance stems from the fact that the CSP is computationally intractable: it is non-deterministic polynomial-time complete (NP-complete), as the Satisfiability Problem, the first problem proven to be NP-complete, can be directly transformed into it. The problem itself is quite straightforward to define: given a number of variables, each with a number of possible values, a number of constraints are defined over these variables determining which value combinations are allowed and/or which are not. A solution of a CSP is then a set of values, one for each variable, such that none of these values violates any of the defined constraints (i.e., all values are allowed). Finding any solution to a CSP instance is enough. If an additional valuation function is defined over these values, according to which a solution must be valued, the problem is called the Constrained Optimisation Problem (COP). Problems of the CSP or COP type are encountered throughout both daily life and Computer Science theory.
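As a small illustration (a sketch for this statement, not code from my own projects), the definition above and the Chronological Backtracking paradigm mentioned earlier can be captured in a few lines of Python; the variable names and the tiny graph-colouring instance are invented for the example:

```python
# A minimal chronological backtracking CSP solver (illustrative sketch).
# Variables are named, each has a finite domain; constraints are binary
# predicates over pairs of variables.

def solve_csp(variables, domains, constraints, assignment=None):
    """Assign variables one by one, undoing a choice (backtracking
    chronologically) as soon as it violates a constraint."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                      # all variables assigned: a solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # check every constraint whose variables are both assigned
        if all(check(assignment[a], assignment[b])
               for (a, b), check in constraints.items()
               if a in assignment and b in assignment):
            result = solve_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                    # backtrack: undo this choice
    return None                                # dead end: no consistent value

# Example instance: three variables that must all differ (a tiny graph colouring).
variables = ["x", "y", "z"]
domains = {"x": [1, 2], "y": [1, 2], "z": [1, 2, 3]}
constraints = {(a, b): lambda p, q: p != q
               for a, b in [("x", "y"), ("y", "z"), ("x", "z")]}
print(solve_csp(variables, domains, constraints))  # → {'x': 1, 'y': 2, 'z': 3}
```

A COP solver would differ only in that, instead of returning the first consistent assignment, it would keep searching and compare complete assignments under the valuation function.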
Evolutionary Algorithms (EA), also (somewhat incorrectly) known as Genetic Algorithms, are problem-solving algorithms from the Computational Intelligence branch of Artificial Intelligence. They are often grouped within Computational Intelligence along with Fuzzy Logic, Neural Networks, and other 'bottom-up' approaches to solving problems. EAs are based on the evolutionary paradigm, first stated by Charles Darwin. They work by evolving a population of candidate solutions to a problem (called individuals), guided over several iterations by the principles of survival and reproduction of the fittest, as evaluated against a fitness function designed to match the problem to be solved. Evolution of the individuals is driven by the genetic operators: mutation and crossover. The basic idea is that over time an EA will evolve a population of candidate solutions most suited to the environment defined by the problem. An EA is therefore a bottom-up approximation algorithm for finding the best approximate solution to the problem it is tasked to solve.
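The interplay of selection, crossover, and mutation described above can be sketched in a minimal EA; this is an illustrative toy (evolving bit-strings towards all-ones, the standard "OneMax" exercise), with tournament size, population size, and mutation rate picked arbitrarily for the example:

```python
import random

# A minimal EA sketch: evolve bit-strings towards all-ones ("OneMax"),
# using tournament selection, one-point crossover, and bit-flip mutation.
random.seed(42)
LENGTH, POP, GENS = 20, 30, 60

def fitness(individual):          # the problem-specific fitness function
    return sum(individual)

def tournament(population):       # survival of the fittest: best of 3 random picks
    return max(random.sample(population, 3), key=fitness)

def crossover(a, b):              # one-point crossover of two parents
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(individual):           # flip each bit with probability 1/LENGTH
    return [bit ^ (random.random() < 1 / LENGTH) for bit in individual]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):             # each generation replaces the whole population
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print(fitness(best))              # typically at or near the optimum of 20
```

Swapping in a different `fitness` function (for instance, one counting satisfied CSP constraints) retargets the same machinery at a different problem, which is essentially how my Ph.D. work applied EAs to the CSP.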
Agent-based Simulations (ABS) are a bottom-up approach to simulating a system. They are contrasted with mathematical simulations, which look at the system 'from the top down', trying to encapsulate general system behaviour in mathematical formulas so as to extrapolate future behaviour from this description. Instead of looking at general behaviour, ABS concentrate on capturing the individual behaviour of all actors in the system, encapsulating each in an entity called an agent. Extrapolating the future behaviour of the system then becomes an experimental exercise of placing these agents in a suitable environment and analysing the behaviour of the simulation itself. Whereas mathematical models concentrate on introducing ever more complex and fine-grained formulas for describing the system from the top down, ABS models concentrate on making the agents behave more and more like their system counterparts. Although the latter is often easier and more intuitive to do, this is offset by the effort needed to perform multiple runs of the simulation (and to analyse those runs), something mathematical models or simulations often do not require. As such, ABS are often non-deterministic, while mathematical models are often deterministic. Having said that, ABS are equally capable of modelling chaotic systems, with the full granularity of the system expressed in the experiments (with the possibility of 'surprise results'), whereas mathematical models are often approximate.
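The contrast is easy to make concrete with a toy example (again illustrative only, not taken from NEW TIES or MWGrid): each agent follows a simple local rule, and the system-level property is observed experimentally from the runs rather than derived from a closed-form formula.

```python
import random

# A toy ABS: 100 agents, each performing a biased random walk.
# The system-level drift emerges from the individual behavioural rules.
random.seed(1)

class Agent:
    def __init__(self):
        self.position = 0

    def step(self):
        # local behavioural rule: move right twice as often as left
        self.position += random.choice([-1, 1, 1])

agents = [Agent() for _ in range(100)]
for tick in range(50):                  # the simulation loop
    for agent in agents:
        agent.step()

# Analysis of the run: the population drifts right on average, a
# system-level property measured experimentally, not derived analytically.
mean = sum(agent.position for agent in agents) / len(agents)
print(round(mean, 1))
```

A mathematical model of the same system would instead note that the expected displacement per tick is 1/3 and write the drift down in one line; the ABS pays for its bottom-up fidelity with the need for repeated runs and statistical analysis, exactly the trade-off described above.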
The distribution of ABS over multiple computing platforms is a difficult research problem for a number of reasons. Primary problems include: the absence of 'look-ahead' or prediction in the behaviour of the agents, the problem of partitioning the simulation over the multiple computing platforms, and the often inherent interconnectivity of the agents and their behaviour. There are two basic methods for distributing ABS: the Island Model and the Shared State Model. I have researched and implemented distribution engines for both methods, one for the NEW TIES project (Island Model), and one for the MWGrid project (Shared State Model).
The Island Model partitions the simulation into so-called islands, one for each computing platform. The ABS then becomes a collection of sub-ABSs, with the distribution engine providing access between the sub-ABSs through the network. Should an agent move from one island to another, it too is migrated through the network, from one sub-ABS to another. The overhead of an Island Model distribution engine can become a bottleneck, which is usually dealt with by repartitioning the simulation, i.e., moving part of one sub-ABS to another island to be dealt with there. There are worst-case scenarios in which no suitable partitioning of the simulation can be found.
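Schematically, the migration mechanism looks as follows. This is an in-process sketch with invented class names (`Island`, `Engine`) standing in for the networked engine of the NEW TIES project, and a trivial one-dimensional 'world' partitioned by position:

```python
# An Island Model sketch: each island owns a sub-ABS (the agents whose
# positions fall inside its partition); the engine buffers and delivers
# migrants, standing in for the network layer.

class Island:
    def __init__(self, name, boundary):
        self.name = name
        self.boundary = boundary           # this island owns positions below this
        self.agents = []

    def step(self, engine):
        for agent in list(self.agents):
            agent["position"] += 1         # the agent's (trivial) behaviour
            if agent["position"] >= self.boundary:
                engine.migrate(agent, self)   # agent left this island's partition

class Engine:
    """Stands in for the networked distribution engine."""
    def __init__(self, west, east):
        self.west, self.east = west, east
        self.in_transit = []

    def migrate(self, agent, source):
        source.agents.remove(agent)
        self.in_transit.append(agent)      # 'serialised onto the network'

    def tick(self):
        for island in (self.west, self.east):
            island.step(self)
        # deliver migrants after the step, as the network layer would
        self.east.agents.extend(self.in_transit)
        self.in_transit.clear()

west, east = Island("west", boundary=5), Island("east", boundary=10 ** 9)
west.agents = [{"position": p} for p in (3, 4)]
engine = Engine(west, east)
for _ in range(3):
    engine.tick()
print(len(west.agents), len(east.agents))  # → 0 2
```

The repartitioning problem mentioned above corresponds to choosing the `boundary` values: if all agents cluster in one partition, one island does all the work, and in the worst case no boundary placement balances the load.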
The Shared State Model separates all shared state of the agents in the simulation from the behavioural logic of the agents, to be maintained by the distribution engine. The shared state of an agent consists of all attributes visible to, or open to interaction from, other agents. The available computing resources are then partitioned into two types of logical processes: those running the simulation (providing static data and calling on the agents to act), and those assigned to the distribution engine maintaining the shared state. One computing platform can run several logical processes of either type. The task of the distribution engine is then to provide consistency-maintained access to the shared state through an interface. The agents access their own and other agents' shared state through read, write, and range-query operations. Because no look-ahead is generally possible, the distribution engine maintains data consistency optimistically, repairing inconsistencies through roll-backs. A roll-back means that an agent is returned to a state before the inconsistency occurred, and it is for this reason that the agents themselves have to remain stateless. State private to an agent has to be (and can be!) dealt with by the agent itself, by including time-stamped memory.
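The optimistic-consistency idea can be shown in a highly simplified sketch. The interface and names (`SharedStateStore`, the `food@(3,4)` key, the `agent-A` identifier) are invented for illustration and do not reflect the MWGrid engine's actual API; the sketch also omits range queries and the re-execution side of a roll-back, recording only which agent must be rolled back and to when:

```python
# A Shared State Model sketch: the store owns all shared attributes; agents
# access them only through timestamped read/write operations. A write arriving
# out of timestamp order (a 'straggler') invalidates any later-timestamped
# read, so the engine schedules a roll-back for the affected agent.

class SharedStateStore:
    def __init__(self):
        self.history = {}      # key -> sorted list of (timestamp, value)
        self.reads = []        # (reader, key, timestamp) log for conflict detection
        self.rolled_back = []  # (agent, timestamp) pairs: rerun from this time

    def write(self, key, value, timestamp):
        # detect straggler writes: some agent already read a value that
        # this write should have superseded, so that read is inconsistent
        for reader, read_key, read_ts in self.reads:
            if read_key == key and read_ts >= timestamp:
                self.rolled_back.append((reader, timestamp))
        versions = self.history.setdefault(key, [])
        versions.append((timestamp, value))
        versions.sort()

    def read(self, reader, key, timestamp):
        self.reads.append((reader, key, timestamp))
        # return the latest version written at or before `timestamp`
        versions = [v for t, v in self.history.get(key, []) if t <= timestamp]
        return versions[-1] if versions else None

store = SharedStateStore()
store.write("food@(3,4)", 10, timestamp=1)
print(store.read("agent-A", "food@(3,4)", timestamp=5))   # → 10
store.write("food@(3,4)", 0, timestamp=4)  # straggler: agent-A read stale state
print(store.rolled_back)                   # → [('agent-A', 4)]
```

Because the behavioural logic holds no state of its own, re-running `agent-A` from timestamp 4 against the (now corrected) store is all a roll-back requires, which is precisely why the model demands stateless agents.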
To a casual observer these research interests may look separate from each other. They are not. EAs can be seen as a specific incarnation of the general ABS, with the individuals in EAs analogous to the more general agents in ABS. Overall, a large number of paradigms and methods of analysis are applicable to both. My research into the CSP also helped when thinking about the various protocols and algorithms required for the distribution engines I implemented, even though the constrained problems encountered there were more specific than the general CSP or COP. Equally, Artificial Intelligence methods in general have had, and continue to have, a profound application in the development of those same protocols and algorithms. Lateral thinking across all of these areas has been mutually beneficial to each of them. I would argue this is how it should be.