
Influence of Other Research Areas on ABM


Markov modeling using Markov decision processes. These models are based on mathematical formulations of Turing machine models: the algorithm executes a number of rules encoded on a symbol string.

Markov models can be expressed as chains of stochastic processes whose states change over time. Each state change carries an associated conditional probability, and future states depend only on the current state, not on the sequence of past states.

Markov decision processes attach a reward function to a Markov chain. For every transition the state receives a reward, which affects the transition probabilities of the state. Combined with reinforcement learning, these systems are useful for dynamic programming problems and for training problems such as handling unobservable states in hidden Markov models.
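
To make the interplay of rewards and transition probabilities concrete, the following is a minimal sketch (not from the original text) of value iteration, the classic dynamic programming solution of an MDP, written in C. The two states, two actions, transition probabilities, rewards and discount factor are invented purely for illustration.

#include <stdio.h>
#include <math.h>

#define NS 2   /* number of states  */
#define NA 2   /* number of actions */

/* Illustrative transition probabilities P[s][a][s'] and rewards R[s][a] */
static const double P[NS][NA][NS] = {
    { {0.8, 0.2}, {0.1, 0.9} },   /* transitions from state 0 */
    { {0.5, 0.5}, {0.0, 1.0} }    /* transitions from state 1 */
};
static const double R[NS][NA] = {
    { 1.0, 0.0 },                 /* rewards available in state 0 */
    { 0.0, 2.0 }                  /* rewards available in state 1 */
};

int main(void)
{
    double V[NS] = {0.0, 0.0};    /* value estimate per state */
    const double discount = 0.9;

    /* Dynamic programming: iterate the Bellman optimality update
       V(s) = max_a [ R(s,a) + discount * sum_s' P(s,a,s') V(s') ] */
    for (int iter = 0; iter < 1000; ++iter) {
        double Vnew[NS], delta = 0.0;
        for (int s = 0; s < NS; ++s) {
            double best = -INFINITY;
            for (int a = 0; a < NA; ++a) {
                double q = R[s][a];
                for (int t = 0; t < NS; ++t)
                    q += discount * P[s][a][t] * V[t];
                if (q > best) best = q;
            }
            Vnew[s] = best;
            if (fabs(Vnew[s] - V[s]) > delta) delta = fabs(Vnew[s] - V[s]);
        }
        for (int s = 0; s < NS; ++s) V[s] = Vnew[s];
        if (delta < 1e-9) break;  /* values have converged */
    }
    printf("V(0) = %.3f, V(1) = %.3f\n", V[0], V[1]);
    return 0;
}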

Neural networks. These recreate the biological structure of neuronal activity in organisms. Nodes are connected to one another, and each connection carries a weight along the path used to process data. Neural networks can be trained on real data; the simulated output can then be verified by checking whether it produces similar results.
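
As a minimal sketch of training a network on data, the following C program trains a single neuron (a perceptron) on an illustrative data set, the logical AND function. The data, learning rate and number of epochs are assumptions chosen only for demonstration.

#include <stdio.h>

int main(void)
{
    /* Illustrative training set: logical AND of two inputs */
    const double x[4][2]   = { {0,0}, {0,1}, {1,0}, {1,1} };
    const double target[4] = { 0, 0, 0, 1 };

    double w[2] = {0.0, 0.0}, bias = 0.0;   /* connection weights */
    const double lr = 0.1;                  /* learning rate */

    /* Train: adjust weights whenever the neuron's output is wrong */
    for (int epoch = 0; epoch < 100; ++epoch) {
        for (int i = 0; i < 4; ++i) {
            double net = w[0]*x[i][0] + w[1]*x[i][1] + bias;
            double out = net > 0.0 ? 1.0 : 0.0;   /* step activation */
            double err = target[i] - out;
            w[0] += lr * err * x[i][0];
            w[1] += lr * err * x[i][1];
            bias += lr * err;
        }
    }

    /* Verify: the trained neuron should reproduce the training data */
    for (int i = 0; i < 4; ++i) {
        double out = w[0]*x[i][0] + w[1]*x[i][1] + bias > 0.0 ? 1.0 : 0.0;
        printf("%g AND %g -> %g\n", x[i][0], x[i][1], out);
    }
    return 0;
}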

Mechanism design (MD). Parkes [146] described mechanism design as the problem of designing a protocol that distributes and implements particular objectives among self-interested individual agents. An agent makes decisions with respect to other agents based on its own private information and behaves selfishly. The 2007 Nobel Prize in Economics was awarded for Mechanism Design Theory [93]. It follows the “Hayek theory of catallaxy where a ‘self-organizing system of voluntary co-operation’ is brought about as markets progress”. However, the theory has been criticized on the grounds that, if MD were used to design markets, some agents would still end up monopolizing them.

Gaussian adaptation. An evolutionary algorithm for stochastic adaptive processes that takes more than one attribute into consideration. Samples are represented as N-dimensional vectors drawn from a multivariate Gaussian distribution whose parameters are adapted as the search proceeds.
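
A heavily simplified sketch of the idea in C, with all details invented for illustration: candidate points are drawn from a Gaussian, and the mean of the Gaussian is nudged towards candidates that satisfy an acceptance criterion. Full Gaussian adaptation also adapts the covariance matrix, which is omitted here.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define DIM 2

/* Draw a standard normal variate (Box-Muller transform) */
static double randn(void)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
}

/* Illustrative acceptance criterion: point lies in a disc around (2, 1) */
static int accepted(const double x[DIM])
{
    double dx = x[0] - 2.0, dy = x[1] - 1.0;
    return dx*dx + dy*dy < 2.25;
}

int main(void)
{
    double mean[DIM] = {0.0, 0.0};   /* centre of the search Gaussian */
    double sigma = 1.0;              /* fixed step size (simplified)  */

    for (int iter = 0; iter < 5000; ++iter) {
        double x[DIM];
        for (int d = 0; d < DIM; ++d)
            x[d] = mean[d] + sigma * randn();   /* sample a candidate */

        /* Move the mean a small step towards accepted samples only */
        if (accepted(x))
            for (int d = 0; d < DIM; ++d)
                mean[d] += 0.1 * (x[d] - mean[d]);
    }
    printf("adapted mean: (%.2f, %.2f)\n", mean[0], mean[1]);
    return 0;
}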

Learning classifier systems. These LCS use reinforcement learning and genetic algorithms. The rules can be updated using reinforcement learning, allowing different strategies to be chosen (see the sketch after this list). Two main variants exist:

• Pittsburgh-type LCS - each individual in the GA population is a separate rule set; the GA recombines them and produces the best rule set.

• Michigan-style LCS - the population is a single rule set, and the focus is on choosing the best rules within it.
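
The sketch below illustrates the reinforcement side of a Michigan-style system in C: a small, fixed population of condition-action rules whose strengths are updated from a reward signal. The rules, the one-bit environment and the learning rate are invented for illustration, and a full LCS would additionally use a genetic algorithm to discover new rules.

#include <stdio.h>
#include <stdlib.h>

/* A classifier: a condition on a one-bit state, an action, and a strength */
struct rule {
    int condition;   /* which state the rule matches (0 or 1) */
    int action;      /* action proposed by the rule           */
    double strength; /* fitness updated by reinforcement      */
};

/* Illustrative environment: reward 1.0 when the action equals the state */
static double reward(int state, int action)
{
    return action == state ? 1.0 : 0.0;
}

int main(void)
{
    struct rule pop[4] = {
        {0, 0, 0.5}, {0, 1, 0.5}, {1, 0, 0.5}, {1, 1, 0.5}
    };
    const double beta = 0.2;   /* learning rate */

    for (int step = 0; step < 1000; ++step) {
        int state = rand() % 2;

        /* Match set: pick the strongest rule whose condition matches */
        int best = -1;
        for (int i = 0; i < 4; ++i)
            if (pop[i].condition == state &&
                (best < 0 || pop[i].strength > pop[best].strength))
                best = i;

        /* Reinforcement update of the chosen rule's strength */
        double r = reward(state, pop[best].action);
        pop[best].strength += beta * (r - pop[best].strength);
    }

    for (int i = 0; i < 4; ++i)
        printf("if state==%d then act %d : strength %.2f\n",
               pop[i].condition, pop[i].action, pop[i].strength);
    return 0;
}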

Reinforcement learning. As described above, used for optimizing behavior.

Self-organizing map. Also known as a Kohonen map, it uses unsupervised learning to produce a low-dimensional representation of the training samples while preserving the topological properties of the input space. It uses a feed-forward network structure in which weights determine the winning neuron, and a Gaussian neighborhood function spreads each update to nearby neurons.
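
A minimal sketch of that training procedure in C, with the map size, data and decay rates invented for illustration: each sample is assigned to its best matching unit, and neighbouring neurons are pulled towards the sample with a Gaussian weighting.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NODES 10   /* neurons arranged on a 1-D map      */
#define DIM    2   /* dimensionality of the input space  */

int main(void)
{
    /* Illustrative 2-D training samples forming two clusters */
    const double data[6][DIM] = {
        {0.10, 0.20}, {0.20, 0.10}, {0.15, 0.15},
        {0.90, 0.80}, {0.80, 0.90}, {0.85, 0.85}
    };
    double w[NODES][DIM];              /* weight vector of each neuron */

    for (int i = 0; i < NODES; ++i)    /* random initial weights */
        for (int d = 0; d < DIM; ++d)
            w[i][d] = (double)rand() / RAND_MAX;

    double lr = 0.5, radius = 3.0;     /* learning rate and neighbourhood */

    for (int epoch = 0; epoch < 200; ++epoch) {
        for (int s = 0; s < 6; ++s) {
            /* 1. Best matching unit: the neuron closest to the sample */
            int bmu = 0;
            double bestdist = 1e30;
            for (int i = 0; i < NODES; ++i) {
                double dist = 0.0;
                for (int d = 0; d < DIM; ++d)
                    dist += (data[s][d] - w[i][d]) * (data[s][d] - w[i][d]);
                if (dist < bestdist) { bestdist = dist; bmu = i; }
            }
            /* 2. Pull neighbouring neurons towards the sample, weighted by
                  a Gaussian of their map distance to the BMU */
            for (int i = 0; i < NODES; ++i) {
                double h = exp(-((i - bmu) * (i - bmu)) /
                               (2.0 * radius * radius));
                for (int d = 0; d < DIM; ++d)
                    w[i][d] += lr * h * (data[s][d] - w[i][d]);
            }
        }
        lr *= 0.98; radius *= 0.98;    /* shrink both rates over time */
    }

    for (int i = 0; i < NODES; ++i)
        printf("neuron %d: (%.2f, %.2f)\n", i, w[i][0], w[i][1]);
    return 0;
}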

Memetic algorithms. Learning algorithms that combine population-based search, such as swarm optimization and genetic algorithms, with individual learning. Each individual program is chosen from a population and allowed to evolve, and each individual applies a learning technique following either Lamarckian or Baldwinian learning. Lamarckian [113] theories use the environment to change individuals, known as the adaptive force; Baldwinian [18] learning works through the genetic material of the individual. Both are supported by trial-and-error and social learning theories: for instance, a trait becomes stronger as a consequence of interaction with the environment, and individuals who learn quickly are at an advantage. Blackmore distinguishes these two modes of inheritance in the evolution of memes, characterising the Darwinian mode as ‘copying the instructions’ and the Lamarckian as ‘copying the product’ [24]. Each program is treated as a meme, and in the next step these memes coevolve to fit the problem domain.
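
A minimal sketch of the common memetic pattern in C, with the problem (one-max on bit strings), population size and rates invented for illustration: a genetic algorithm evolves the population, and each offspring additionally runs a local hill-climbing "meme". Writing the improvement back into the genotype, as done here, makes the learning Lamarckian; a Baldwinian variant would keep the original genes and use only the improved fitness.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define POP  20
#define LEN  16
#define GENS 50

static int fitness(const int *g)            /* one-max: count the 1 bits */
{
    int f = 0;
    for (int i = 0; i < LEN; ++i) f += g[i];
    return f;
}

/* Local learning step ("meme"): flip each bit and keep improving flips.
   Writing the result back into g makes the step Lamarckian. */
static void local_search(int *g)
{
    for (int i = 0; i < LEN; ++i) {
        int before = fitness(g);
        g[i] ^= 1;
        if (fitness(g) < before) g[i] ^= 1;  /* undo a harmful flip */
    }
}

int main(void)
{
    int pop[POP][LEN];
    for (int i = 0; i < POP; ++i)
        for (int j = 0; j < LEN; ++j)
            pop[i][j] = rand() % 2;

    for (int gen = 0; gen < GENS; ++gen) {
        int next[POP][LEN];
        for (int i = 0; i < POP; ++i) {
            /* Tournament selection of two parents */
            int a = rand() % POP, b = rand() % POP;
            int *p1 = fitness(pop[a]) > fitness(pop[b]) ? pop[a] : pop[b];
            a = rand() % POP; b = rand() % POP;
            int *p2 = fitness(pop[a]) > fitness(pop[b]) ? pop[a] : pop[b];

            /* One-point crossover and occasional mutation */
            int cut = rand() % LEN;
            for (int j = 0; j < LEN; ++j)
                next[i][j] = j < cut ? p1[j] : p2[j];
            if (rand() % 10 == 0) next[i][rand() % LEN] ^= 1;

            local_search(next[i]);   /* each individual learns */
        }
        memcpy(pop, next, sizeof pop);
    }

    int best = 0;
    for (int i = 1; i < POP; ++i)
        if (fitness(pop[i]) > fitness(pop[best])) best = i;
    printf("best fitness after %d generations: %d / %d\n",
           GENS, fitness(pop[best]), LEN);
    return 0;
}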

Chapter 3

Designing X-Agents Using FLAME

3.1 FLAME and Its X-Machine Methodology
    3.1.1 Transition Functions
    3.1.2 Memory and States
3.2 Using Agile Methods to Design Agents
    3.2.1 Extension to Extreme Programming
3.3 Overview: FLAME Version 1.0
3.4 Libmboard (FLAME message board library)
    3.4.1 Compiling and Installing Libmboard
    3.4.2 FLAME’s Synchronization Points
3.5 FLAME’s Missing Functionality

The Flexible Large-scale Agent Modeling Environment (FLAME) was developed through a collaboration between the Computer Science Department at the University of Sheffield (UK) and the Software Engineering Group (STFC) at Rutherford Appleton Laboratory, Didcot (UK).

The framework is a program generator that enables the creation of agent-based simulations which can easily be ported onto high-performance computing (HPC) grids. The modeler defines models using XML notation, and the associated code for agent functions is written in C. FLAME uses its own templates to generate serial or parallel code automatically, allowing complex parallel simulations to execute on available grid machines.

FLAME agent models are based upon extended finite state machines (or X-machines) that allow complex state machines to be designed and validated.
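
To illustrate what the "extended" part adds over a plain finite state machine, the following is a generic C sketch; it is not FLAME's generated code and uses no FLAME API. An agent carries an internal memory alongside its current state, and each transition function reads an input, updates the memory and returns the next state. All names and values are invented for illustration.

#include <stdio.h>

/* States of a simple X-machine agent */
enum state { IDLE, WORKING, DONE };

/* Internal memory carried across transitions */
struct memory {
    int    tasks_left;
    double energy;
};

struct agent {
    enum state    state;
    struct memory mem;
};

/* Transition functions: each reads an input, updates the memory and
   returns the next state. */
static enum state start_work(struct agent *a, int new_tasks)
{
    a->mem.tasks_left += new_tasks;
    return a->mem.tasks_left > 0 ? WORKING : IDLE;
}

static enum state do_task(struct agent *a)
{
    a->mem.tasks_left -= 1;
    a->mem.energy     -= 0.1;
    return a->mem.tasks_left > 0 ? WORKING : DONE;
}

int main(void)
{
    struct agent a = { IDLE, { 0, 1.0 } };

    a.state = start_work(&a, 3);          /* IDLE -> WORKING */
    while (a.state == WORKING)
        a.state = do_task(&a);            /* WORKING -> ... -> DONE */

    printf("final state %d, energy %.1f\n", a.state, a.mem.energy);
    return 0;
}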

The tool is used by modelers from many disciplines, such as economics, biology and the social sciences, to write their own agent-based models and run them on parallel computers without having to learn how parallel computing works. The toolkit was released as an open-source project in 2010 via its web page (www.flame.ac.uk).

Agent structures, their messages and their functions are defined in the model description file. This file, written in XML, is fed into the FLAME framework to generate a simulation program. FLAME's simulation program generator is called the Xparser (Figure 3.1); it produces a set of source and build files which, compiled with GCC (the GNU compilers) along with the accompanying files, yield a simulation package for running simulations.

FIGURE 3.1: Block diagram of FLAME. cf. [76].

Various parallel platforms like SCARF, HAPU or IceBerg have been used in the development process to test the efficiency of the FLAME framework [41, 39].
