
GOETHE UNIVERSITY FRANKFURT

Implementation of the Sprouts game and

evaluation of varying strategies

Bachelor Thesis

Alex A. Hańczak

8/18/2016


“Most battles are won before they are fought”

-Sun Tzu


Declaration

I declare that this thesis was composed by me, that the work contained herein is my own except where explicitly stated otherwise in the text, and that this work has not been submitted for any other degree or professional qualification except as specified.


Table of Contents

Sprouts
Rules
Number of Moves
Example
Search Algorithms
MinMax Algorithm
Depth-first search
Alpha-beta pruning
Example
Term analysis
Haskell
Implementation
The program
Correctness of Notation
Evaluation
Conclusion
Bibliography


Sprouts

Sprouts is a pencil-and-paper game for two players. It was invented in 1967 by John Horton Conway and Michael S. Paterson at the University of Cambridge. The game begins with n spots on a sheet of paper; both players alternately connect these spots and add a new spot on the drawn line, until one player cannot make a move and therefore loses the game.

Rules

There are n spots on a piece of paper, which is our game board or playing surface.

The players take turns, where each turn consists of drawing a line between two spots (or from a spot to itself), and creating a new spot on the drawn line. Drawing lines and adding spots is limited by the following rules:

• The line must not touch or cross itself or any other line.

• The newly created spot is not allowed to be an endpoint of the line, and thus splits the line into two parts.

• No spot can have more than three lines connected to it.

When there are no legal moves remaining, the player who made the last move wins the game (in misère play, the player who made the last move loses).

Number of Moves

At the start of the game we have n spots, each of which can take three lines. With every turn we create a new spot that already has two connected lines (so one more line can still be drawn to it), and we reduce the number of possible connections by one for each of the two spots that we connect. By this count we start with 3n possible connections and every turn reduces them by one (minus two for the two connected spots, plus one for the newly created spot). A game of Sprouts therefore has a maximum of 3n − 1 turns before there are no more legal moves left.

We have two kinds of moves every turn: creating a circle, or connecting two constructs of spots (or single spots). When encircling, we can also choose to draw the circle around some spots or constructs, or to leave them outside. By separating those spots with a circle we create two different regions, where the spots can only be connected among themselves. From a strategic point of view, the players can thus cut off spots, so that there is no legal move left to bring three lines to them.

With that knowledge we can calculate the minimum number of turns for an n-spots game:

At the end of the game each survivor (a spot that can still take another connection) has exactly one life left, because a spot with two remaining lives could still be connected to itself; so after m moves there are exactly 3n − m survivors. Each survivor has exactly two dead spots as neighbors, and none of the dead spots can be a neighbor of two different survivors, since otherwise it would be possible to connect those two surviving spots. We call the dead spots which are not neighbors of a survivor pharisees (from the Hebrew for "separated ones"). So we can define the number of spots at the end of the game through the following equations:

initial spots + moves = total spots at the end of the game
total spots at the end of the game = survivors + neighbors + pharisees

n + m = (3n − m) + 2(3n − m) + p

By rearranging this equation we get for the number of moves:

m = 2n + p/4

This means that a game of Sprouts will last at least 2n moves if the number of pharisees is zero, and also that the number of pharisees at the end of the game is always divisible by 4.
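The two bounds derived above can be written down directly in Haskell; this small sketch is ours, the function names are not from the thesis:

```haskell
-- Maximum number of moves in an n-spot game: after 3n - 1 moves only
-- one life remains, so no further move is possible.
maxMoves :: Int -> Int
maxMoves n = 3 * n - 1

-- Minimum number of moves, from m = 2n + p/4 with zero pharisees.
minMoves :: Int -> Int
minMoves n = 2 * n
```

For a 2-spot game these give 5 and 4, matching the example in the next section.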


Example

In the following example by David Applegate, Guy Jacobson and David Sleator from "Computer Analysis of Sprouts" we can see a complete game tree for a game with two spots.

Complete game tree for a 2-spot Sprouts game1

As we already demonstrated, the maximal number of moves is 3n − 1, in this example 3 × 2 − 1 = 5, and the minimal number of moves is 2n, which is 4 here.

We can see in this example that within a few turns a game which started with only two spots can get very complex; try to imagine a tree for a game that started with 5, 10 or even 20 spots — the number of possible moves grows enormously.

1 Source: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.32.9472&rep=rep1&type=pdf


Search Algorithms

Even though the game Sprouts seems simple, within only a few turns the number of possible moves gets very large and making the right decision becomes very difficult. For this reason we need search algorithms, so that our computer program can evaluate the situations and, by looking ahead, choose the best calculated move.

MinMax Algorithm

The minmax algorithm is a recursive algorithm to find the next move in an n-player game. Our game has two players, so we can use this algorithm directly.

The objective of this algorithm is to calculate a value for a specific state of the game. In a two-player game one player tries to maximize the value, while the other player tries to minimize it and always chooses the move with the smallest value.

Before we can use this algorithm we have to create a search tree with all possible states and moves. Since some moves will probably create the same states, the result is really a search graph rather than a tree. With this structure ready we can start evaluating:

The core of minmax is a depth-first search:

Depth-first search

Depth-first search is an algorithm to traverse or search tree or graph data structures. It starts at the root of our structure and explores the graph or tree as far as possible, always descending into a next branch until none is left; in that case it backtracks to find a new path. The algorithm runs until it finds the node it is searching for or until every node has been visited.

Now that we know how depth-first search works, we can take another look at the minmax algorithm. It starts, like depth-first search, at the root, which is the current state of the game. We go through the turns down to a terminal node and give this node a value according to whether it is a winning situation for player one, a winning situation for player two, or a draw. A common method is to give a win for player one the value +1, a draw the value 0, and a win for player two the value -1. From this position the algorithm backtracks and evaluates the siblings in the same way, descending to their terminal nodes. After all siblings have been visited and valued, the parent takes a value depending on whose turn it is: a maximizing player takes the highest value, while a minimizing player prefers the smallest value. This process is repeated until every node has a value and we can select the optimal path. Because every node gets a value, the root's value also tells us who will win the game from our current/starting state, as it gives us the optimal outcome.

Some games take very long to reach a final state or have very many nodes to check. When we play a game we don't want to wait forever until our algorithm has rated every situation; that's why we can limit the depth of the search. Depth-first search then only descends to a specific depth and rates that node based on a heuristic; everything else in the algorithm stays the same. The heuristic gives an estimate of the quality of the situation and is an approximation of the true value. With this version of minmax we can decide on a move quickly. The downside is that with a depth limit we have to rerun the algorithm more often, and it is also possible that we miss the ideal play; so we have to decide which requirement is more important to us, a faster runtime or the optimal result.

Pseudocode for minmax with limited depth2:

function minimax(node, depth, maximizingPlayer)
    if depth = 0 or node is a terminal node
        return the heuristic value of node
    if maximizingPlayer
        bestValue := −∞
        for each child of node
            v := minimax(child, depth − 1, FALSE)
            bestValue := max(bestValue, v)
        return bestValue
    else  (* minimizing player *)
        bestValue := +∞
        for each child of node
            v := minimax(child, depth − 1, TRUE)
            bestValue := min(bestValue, v)
        return bestValue

2 Source: https://en.wikipedia.org/wiki/Minimax#Pseudocode
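As an illustration, the pseudocode translates almost line by line into Haskell over a simple tree type. This sketch is ours, not part of the thesis program; the value stored in a node serves as its heuristic value:

```haskell
-- A game state: a (heuristic) value plus the states reachable in one move.
data GameTree = Node Int [GameTree]

-- Depth-limited minmax: at depth 0 or in a terminal node we return the
-- stored value, otherwise we maximize or minimize over the children.
minimax :: Int -> Bool -> GameTree -> Int
minimax depth maximizing (Node value children)
  | depth == 0 || null children = value
  | maximizing = maximum [minimax (depth - 1) False c | c <- children]
  | otherwise  = minimum [minimax (depth - 1) True  c | c <- children]
```

On a two-level tree whose leaves carry the values 1, 4, 5 and 8 (grouped in pairs), the maximizing player at the root obtains the value 5.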

Alpha-beta pruning

We have seen that the minmax algorithm needs a lot of time to produce a result, since it has to visit every node of the search tree. There is a method to improve minmax, called alpha-beta pruning.

The idea behind alpha-beta pruning is to remember certain values; if such a value is better or worse than any value still possible in a subtree, we no longer need to visit that subtree, because nothing in it can change our result.

We keep two values, alpha and beta, which describe the guaranteed outcome so far for both players. Alpha is the minimal value assured for the player who wants to maximize his outcome, while beta is the highest value that the second player, who wants to minimize, will concede.

If we visit a node which we want to maximize and its value is higher than our beta, we can cut off this branch, because the opposing player would never let us reach it, since he already has better options; but if that turn gives us a value higher than alpha, we adjust alpha to this new highest value. The same goes for the minimizing nodes, with the rating reversed: if the value is smaller than alpha, the branch is cut off, and if it is smaller than beta, we adjust beta to this new smallest value.

Pseudocode for alpha-beta pruning3:

function alphabeta(node, depth, α, β, maximizingPlayer)
    if depth = 0 or node is a terminal node
        return the heuristic value of node
    if maximizingPlayer
        v := −∞
        for each child of node
            v := max(v, alphabeta(child, depth − 1, α, β, FALSE))
            α := max(α, v)
            if β ≤ α
                break  (* β cut-off *)
        return v
    else
        v := +∞
        for each child of node
            v := min(v, alphabeta(child, depth − 1, α, β, TRUE))
            β := min(β, v)
            if β ≤ α
                break  (* α cut-off *)
        return v

3 Source: https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning#Pseudocode
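This pseudocode, too, can be sketched in Haskell. As with minimax, the tree type and this rendering are ours and not the thesis implementation:

```haskell
-- A game state: a (heuristic) value plus the states reachable in one move.
data GameTree = Node Int [GameTree]

-- Depth-limited alpha-beta search; alpha and beta are the bounds
-- inherited from the ancestors of the current node.
alphabeta :: Int -> Int -> Int -> Bool -> GameTree -> Int
alphabeta depth alpha beta maximizing (Node value children)
  | depth == 0 || null children = value
  | maximizing = goMax minBound alpha children
  | otherwise  = goMin maxBound beta children
  where
    goMax v _ [] = v
    goMax v a (c:cs)
      | beta <= a' = v'                     -- beta cut-off
      | otherwise  = goMax v' a' cs
      where v' = max v (alphabeta (depth - 1) a beta False c)
            a' = max a v'
    goMin v _ [] = v
    goMin v b (c:cs)
      | b' <= alpha = v'                    -- alpha cut-off
      | otherwise   = goMin v' b' cs
      where v' = min v (alphabeta (depth - 1) alpha b True c)
            b' = min b v'
```

Called with the initial window (minBound, maxBound), it returns the same value as plain minmax, only potentially visiting fewer nodes.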

This algorithm can save us some search time, but only if we actually find positions that we don't have to visit. In the worst case the search tree is ordered so that we always find a better value and never get the chance to cut off any branch. We also have the problem of very deep trees, as we already had in the minmax algorithm, since both use depth-first search; if the tree has infinite depth we would search forever and never find a result. Here again we can solve this problem by limiting the search depth. In that case we will probably stop searching before we find a final state and won't have a value to compare with alpha and beta; therefore we have to use a heuristic to rate the situation. A good heuristic should give a good approximation of the state's real value, so it is important to find a strategy for rating specific situations. By limiting the depth we can calculate results faster, but just as in minmax we will need to rerun this algorithm after every turn to incorporate new information about the game.


Example

As a simple example we take a look at two identical trees4. The only difference between these trees is their ordering. In the first tree the leaves are sorted from small to big.

Alpha-beta pruning will go through every branch in this tree. It starts at the root and from there goes to node b. This node has the children 1 to 4. We go through each of them to find the minimum, which is 1. Then we go to c and search there for the smallest child, starting with 5 and going upwards until every node has been visited. The smallest leaf here is 5, so the maximizer will choose path c. In this case we have simply run a minmax search. The next example shows what alpha-beta pruning can do:

4 Image source: http://www.ki.informatik.uni-frankfurt.de/lehre/SS2016/KI/folien/05-suche-3.pdf

Here the leaves are sorted from big to small. This time alpha-beta pruning visits node c first and goes through its children. We find that 5 is the smallest number, so the minimizer will choose 5 in this branch. Afterwards we go to b and start checking its children. We start at 4 and immediately see that this value is already smaller than 5, so whatever else comes in this branch, the minimizer won't choose anything for us that is higher than 5. We can therefore cut off the remaining children and don't have to visit them anymore. The maximizer chooses path c as in the previous example, but this time it found the best move faster. A minmax search would have been slower here, because it would still go through every node.
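The effect of the ordering can be made concrete with a small variant of alpha-beta that also counts the leaves it evaluates. This instrumented sketch is ours and mirrors the two trees described above:

```haskell
-- A tree of heuristic values; only the leaves carry meaningful values here.
data T = N Int [T]

-- Alpha-beta search returning (value, number of leaves evaluated).
ab :: Int -> Int -> Bool -> T -> (Int, Int)
ab _ _ _ (N v []) = (v, 1)
ab alpha beta maxP (N _ cs) = go (if maxP then minBound else maxBound) alpha beta 0 cs
  where
    go v _ _ n [] = (v, n)
    go v a b n (c:rest)
      | b' <= a'  = (v', n + k)            -- cut-off: remaining siblings skipped
      | otherwise = go v' a' b' (n + k) rest
      where
        (cv, k) = ab a b (not maxP) c
        v'      = if maxP then max v cv else min v cv
        a'      = if maxP then max a v' else a
        b'      = if maxP then b else min b v'
```

On the ascending tree (leaves 1–4 under b, then 5–8 under c) all 8 leaves are evaluated; on the descending tree only 5 are, exactly as described above, while the root value 5 stays the same.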

Term analysis

The minmax algorithm has to go through every node to find a result. Even if we cut the search tree by setting a depth limit we have the runtime of:

O(c^d)

with d being the maximal depth and c being the branching degree.

In alpha-beta pruning the runtime depends on the order of our search tree. In the worst case we have to visit every node, just as in the minmax algorithm, which gives us a worst-case runtime of:

O(c^d)

With a better ordering we can improve this runtime up to the best case:

O(c^(d/2))

and for a randomly ordered search tree the average runtime is:

O(c^(3d/4))

with d being the maximal depth and c being the branching degree.


Haskell

Haskell is a purely functional programming language. It has non-strict semantics and strong static typing. The biggest difference from imperative programming languages is that in Haskell you don't change variables. Once a value is bound to a variable it is fixed and won't be changed by any function, unlike in other programming languages where you can set a = 4, for example, then run a function which divides it by 2, and the new value of a is 2. In Haskell, if a = 4 it stays 4 and no function will change it. The benefit of this approach is that you always get the same result for the same input.

Since Haskell doesn't change variables, we build programs by composing functions and working with their outputs. For example, the sum of a list is a recursive function that takes the first element of the list and adds it to the sum of the remaining list; this continues down to the last element until every element is summed up.

The main building blocks in Haskell are recursion and if-then-else expressions. If-then-else works the same as in imperative programming languages: we have a condition, and if it is true we execute the operation in the then branch, otherwise the one in the else branch.

if condition then doThis else doThat

Sometimes a simple if-then-else is not enough. We may have many different options: for example, if a number is between 1 and 10 we want to run one operation, if it's between 11 and 20 we run another operation, and if it's above 20 we want our function to run a third operation. For this case Haskell has guards, written "|". Guards are used to distinguish between many options without nesting if-then-else every time. For the example just mentioned the source code would be the following:

fun x
  | 1 <= x && x <= 10  = operation1
  | 11 <= x && x <= 20 = operation2
  | otherwise          = operation3

The otherwise guard comes into play when all the options above have failed, or strictly speaking when all their conditions evaluated to false. This example function still has a small mistake: we said that operation3 should run if x is above 20, but as written, operation3 would also be executed for an x smaller than or equal to 0. We would have to add another guard and an operation for the case x <= 0.
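A complete version of the guard example with that missing case added; the operation results are placeholder strings of our own:

```haskell
-- Guards are tried top to bottom, so each later guard only sees the
-- numbers not caught by an earlier one.
fun :: Int -> String
fun x
  | x <= 0    = "operation0"   -- the case that was missing above
  | x <= 10   = "operation1"   -- here x is already known to be >= 1
  | x <= 20   = "operation2"
  | otherwise = "operation3"
```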

In our program we will work a lot with lists, and Haskell brings many built-in list functions. Lists have the form [0, 1, 2, 3, 4, ...]. The operator "++" concatenates two lists into one:

[1,2,3,4] ++ [9,8,7,6]

gives us the result:

[1,2,3,4,9,8,7,6]

We can also concatenate lists of lists, but the element types of both lists must be the same. If we want to know the value at one specific position in our list, we can find it with the "!!" operator. To use "!!" we write it after our list, followed by the position of the element we want. List positions start at 0, so if we want the first element we write "xs !! 0", where xs is the name of our list.

To get the first n elements of a list we use the function "take", while if we want to remove the first n elements from a list we use the function "drop". We will use these functions in our program to work on one specific element and then recreate the whole list from the taken first part and the dropped last part. Here is an example of how "take" and "drop" work:

take 3 [1,2,3,4,5] results [1,2,3]

drop 3 [1,2,3,4,5] results [4,5]
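Later in the program exactly this take/drop pattern is used to rebuild a list around one modified element; a small helper in that style (the name setAt is our own):

```haskell
-- Replace the element at (0-based) position i, rebuilding the list from
-- the part before it, the new element, and the part after it.
setAt :: Int -> a -> [a] -> [a]
setAt i x xs = take i xs ++ [x] ++ drop (i + 1) xs
```

For example, setAt 2 9 [1,2,3,4,5] yields [1,2,9,4,5].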

Another interesting feature of Haskell is list comprehensions. They look and behave like mathematical set comprehensions. We will use them to create specific lists, where we want a function applied to every element of a list. One simple example is to take a list from 1 to 10 and double the values. This can be done easily with the following list comprehension:

[2*x | x <- [1..10]]

We also use ranges here: we can create lists that follow a clear arithmetic sequence and can be enumerated, by typing the first few elements of the list and the last one. Between the first and the last elements stands "..", which indicates that all numbers fitting the sequence are included. So the list in the example above covers the range from 1 to 10. We can also filter the list by adding conditions, for example that we only want the odd numbers:

[x | x <- [1..10], odd x]

Like other programming languages, Haskell also has the tuple data structure.

These are similar to lists, but the most important differences are that they have a fixed size and don't have to contain elements of only one type. This will help us store information of different types together. For example, we can create a tuple whose first element is a letter and whose second element is a number. To access the first element of a pair we use the function fst, and for the second element the function snd.

fst ("a",42) results "a"

and

snd (22,False) results False

With this basic knowledge of Haskell we can try to implement the game in Haskell code.

Implementation

For human beings it feels more comfortable to have a visual game, where you can see connections and imagine your next steps. A computer program doesn't need this, and it is actually easier to work on a mathematical representation of the game.

For this purpose we will use the algebraic definitions of the game Sprouts given by Julien Lemoine and Simon Viennot in their "A further computer analysis of sprouts"5.

First of all we need to define our game board, so that the computer can understand and work on it. Definition 2.1. gives us this representation:

Definition 2.1. A representation is a string of characters verifying the following rules:

5 Source: http://download.tuxfamily.org/sprouts/sprouts-lemoine-viennot-070407.pdf

• The last character of the string is the end-of-position character: “!”

• A representation is the union of regions, the end-of-region character is “}”

• A region is a union of boundaries. The end-of-boundary character is “.”

• A boundary is a union of vertices.

The set of representations will be denoted by R0. The initial representations are A.}! (1-spot game), A.B.}! (2-spot game), A.B.C.}! (3-spot game)...

The next step is to see how players make their turns and how a move changes the game board. We have two different kinds of moves: connecting two spots that are in different boundaries, which merges both boundaries into one, or connecting two spots of the same boundary (or a spot with itself) to create a circle.

Definition 2.2. tells us what a connection of two spots from different boundaries looks like and which changes are made on the game board:

Definition 2.2. x1...xm and y1...yn are two different boundaries in the same region, with m ≥ 2 and n ≥ 2. We suppose that xi and yj are vertices that occur two times or less in the whole representation, with 1 ≤ i ≤ m and 1 ≤ j ≤ n. A two-boundaries move consists in merging these two boundaries in x1...xi z yj...yn y1...yj z xi...xm. The same definition holds if m = 1 or if n = 1, but in these cases, xi...xm and yj...yn are empty boundaries.

Encircling other spots creates a new region, in which we can only connect spots with others from the same region. Definition 2.3. takes care of this by splitting the boundary over two regions, representing the inside and the outside of the circle:

Definition 2.3. x1...xn is a boundary, with n ≥ 2. We suppose that xi and xj are vertices that occur two times or less in the whole representation, with 1 ≤ i ≤ n, 1 ≤ j ≤ n and i ≠ j, or that i = j and xi occurs only one time in the whole representation. A one-boundary move consists in separating the other boundaries in the same region into 2 sets B1 and B2, and in separating the region into two new regions: x1...xi z xj...xn.B1} and xi...xj z.B2}. The same definition holds if n = 1, but in this case, xj...xn is an empty boundary.

These definitions are everything we need to create a running program that can make moves and recognize which moves are illegal. The number of lives of each spot can easily be found by counting its occurrences in the whole game representation.


The program

Now we can start to translate these definitions into Haskell code. For our game board we define the spots as their own data type Verticle, which contains an ID, its boundary, and its position within that boundary.

data Verticle = Vc { id :: Int
                   , boundary :: Int
                   , bndposition :: Int
                   } deriving (Show, Eq)

The ID will help us to find the same spot in different regions and boundaries. Our final game board will be a list of lists, which contain the vertices. By putting the boundary inside the vertex information we don’t have to make these lists more complex. The boundary position helps us to keep the connections sorted, so that we can use the definitions.

To create the first state of the game we use a list comprehension, where we set up n vertices for our n-spot game; every vertex lies in its own boundary, with the boundary position set to one.

start n = [[Vc m m 1 | m <-[1..n]]]

Although we could find the number of lives of every vertex by going through the whole game representation and counting its occurrences, it is better to keep a separate list in which we store the lives. Every position in that list stands for a vertex ID. Haskell list positions start at 0, so to look up the lives of a vertex we take its ID minus 1 as the index into the lives list.

The representation of the game will be a tuple of our lives list and the game board representation. Our lives list and starting tuple for an n-spots game look like this:

startlives n = take n (repeat 3)
ready n = (startlives n, start n)
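As a quick check, the initial state of a 2-spot game evaluates as follows. This self-contained sketch repeats the definitions above, with the record fields reduced to positional ones:

```haskell
-- id, boundary, boundary position (positional instead of record fields)
data Verticle = Vc Int Int Int deriving (Show, Eq)

start n      = [[Vc m m 1 | m <- [1..n]]]  -- n vertices, each in its own boundary
startlives n = take n (repeat 3)           -- every spot starts with three lives
ready n      = (startlives n, start n)
```

ready 2 gives ([3,3],[[Vc 1 1 1,Vc 2 2 1]]): two spots with three lives each, in one region with two one-vertex boundaries.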

Now that we have a game board representation and a lives list defined, we need to make turns possible. To this end we start with some simple functions, which we will use to build up the two main functions that implement definitions 2.2. and 2.3.


vcid (Vc a _ _) = a
vcinbd (Vc _ a _) = a
vcbdpos (Vc _ _ a) = a

We use these three functions to read a vertex's information: its ID, its boundary and its boundary position. With them we define the function getoneBd, which takes a vertex a and a region of our game board and creates a list containing all vertices that are in the same boundary as our vertex a:

getoneBd :: Int -> Int -> [[Verticle]] -> [Verticle]
getoneBd a b xs = filter isinBd (xs !! b)
  where isinBd x = vcinbd x == a

Here we select the region of the game board in which we want to work and get the list of that region's vertices. We keep only those vertices that are in the same boundary as our vertex a, so that the result really is the list of one boundary.
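A tiny worked example of getoneBd; the board below is our own, with boundary 1 holding vertices 1 and 2 and boundary 2 holding vertex 3 (positional fields instead of the record syntax above):

```haskell
data Verticle = Vc Int Int Int deriving (Show, Eq)  -- id, boundary, position

vcinbd :: Verticle -> Int
vcinbd (Vc _ a _) = a

getoneBd :: Int -> Int -> [[Verticle]] -> [Verticle]
getoneBd a b xs = filter (\x -> vcinbd x == a) (xs !! b)

-- one region containing two boundaries
board :: [[Verticle]]
board = [[Vc 1 1 1, Vc 2 1 2, Vc 3 2 1]]
```

getoneBd 1 0 board returns [Vc 1 1 1,Vc 2 1 2], the complete boundary 1 of region 0.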

These functions are enough to connect two vertices in two different boundaries.

We take the boundary lists, which we create with getoneBd for both vertices, and by cutting these lists and remerging them we obtain the final boundary. Both lists represent the two different boundaries x1...xm and y1...yn in the same region, as we need them in Definition 2.2. These lists must always be sorted by the boundary positions of their vertices, otherwise merging the two boundaries would go wrong. The vertices we want to connect have the positions xi and yj, so we can create the pieces x1...xi, xi...xm, y1...yj and yj...yn by taking the first i or j elements of a boundary list or by dropping them. Then we only have to merge these pieces together in the required order: x1...xi z yj...yn y1...yj z xi...xm. For z we create a new vertex, whose ID is the length of our lives list plus 1 and whose boundary is the same as that of vertex a.

connectvc a b s ls xs
  | m == 1 && n == 1 = x1_xi ++ z ++ yj_ym ++ z
  | m == 1 && n > 2  = x1_xi ++ z ++ yj_ym ++ z ++ xi_xn
  | m > 1  && n == 1 = x1_xi ++ z ++ yj_ym ++ y1_yj ++ z
  | otherwise        = x1_xi ++ z ++ yj_ym ++ y1_yj ++ z ++ xi_xn
  where
    m = length mlist
    n = length nlist
    mlist = getoneBd (vcinbd b) s xs
    nlist = getoneBd (vcinbd a) s xs
    x1_xi = take ((vcbdpos a) - 1) nlist ++ [Vc (vcid a) (vcinbd a) (vcbdpos a)]
    z     = [Vc (length ls + 1) (vcinbd a) 1]
    yj_ym = [Vc (vcid b) (vcinbd b) (vcbdpos b)] ++ drop (vcbdpos b) mlist
    y1_yj = take (vcbdpos b) mlist ++ [Vc (vcid b) (vcinbd b) (vcbdpos b)]
    xi_xn = [Vc (vcid a) (vcinbd a) (vcbdpos a)] ++ drop (vcbdpos a) nlist

We have to distinguish the cases in which a boundary is only one vertex long, so that xi...xm or yj...yn is left empty.

This function creates a new list containing our merged boundary. The vertices in that list are ordered as we want them to be in the boundary, so now we have to adjust each vertex's information about its boundary position. We do this by counting up from one and giving the vertices their new positions.

newpos xs = newpos' 1 xs
  where
    newpos' a (x:xs) = Vc (vcid x) (vcinbd x) a : newpos' (a+1) xs
    newpos' a []     = []

newpos runs recursively through our boundary list and changes the boundary position for each vertex until the whole list is done.

Another important change we have to make concerns the boundary information of the vertices; so far they still carry their old boundary. Therefore we go through the list, check for all vertices that are in the same boundary as our second vertex, and change their boundary to the boundary of vertex a:

changebd a b c = if vcinbd b == vcinbd c
                   then Vc (vcid c) (vcinbd a) (vcbdpos c)
                   else c

connectbd :: Verticle -> Verticle -> [[Verticle]] -> [Verticle]
connectbd a b xs = [changebd a b c | c <- (xs !! 0)]

We will use this function in our main function, where we put everything together to create a new game board representation.

What we still have to change is our lives list: since we connect two vertices, their numbers of lives have to be reduced, and we have to extend the list for the newly created vertex, which has only one life left.


Here we go through the lives list and reduce the number of lives when we reach the IDs of the connected vertices.

updatelives :: (Eq a, Num a, Num a1) => a -> [a1] -> [a1]
updatelives a (x:xs)
  | a == 1    = (x - 1) : xs
  | otherwise = x : updatelives (a - 1) xs

The final connecting function puts everything together:

conVcinBds a b s ls xs =
  (lives, take s xs
          ++ [newpos (connectbd a b [connectvc a b s ls xs])
              ++ notinBd (vcinbd a) (vcinbd b) s xs]
          ++ drop (s+1) xs)
  where
    lives = updatelives (vcid a) (updatelives (vcid b) ls) ++ [1]

We create the tuple of lives and game board, with lives being the updated lives list, to which we append the one life of the newly created vertex. On the game board we change only the region in which we connect the two vertices; the regions before and after it in our list don't change and stay at their positions. The changed region itself consists of the merged boundary, which we create with connectvc and whose boundary and position information we fix with connectbd and newpos, plus all vertices that are not in one of the connected boundaries. We find the latter with the function notinBd, which takes the boundary numbers of both vertices and filters the region, so that only the vertices remain that are neither in the boundary of vertex a nor in the boundary of vertex b.

notinBd :: Int -> Int -> Int -> [[Verticle]] -> [Verticle]
notinBd a b s xs = filter nobd (xs !! s)
  where nobd x = vcinbd x /= a && vcinbd x /= b

By adding those lists (merged boundary and other vertices) we get the whole region, and the output of conVcinBds is the complete state of the game after connecting two vertices from different boundaries.

The other possible kind of move is to create a circle. We use the same method as in connectvc: splitting the boundary into parts and merging them into new ones. Creating a circle splits the region into two different regions, and our boundary is also divided between them. We have the inside of the circle and the outside, and moreover we can decide which spots we want to have encircled.


That's why our encircling function also needs a list as an input, to know which vertices will end up on which side of the circle. Since we cannot split other boundaries by drawing the circle through them, the list only needs to contain whole boundaries, namely those which are to stay outside the circle; the encircled boundaries are then found by filtering these out of the region.

For this purpose we write two functions. The first gives us a list of all vertices in the boundaries which we want to be outside of the circle. The second takes this list and removes those vertices from the region in which we create the circle.

selectallwith s xs xxs = concat [getoneBd a s xxs | a <- xs]

fun2 s xs xxs = (xxs !! s) \\ selectallwith s xs xxs

(The list-difference operator "\\" comes from Data.List.)

selectallwith uses the function getoneBd, which we already used before to create boundary lists. For every boundary in the given list we build its boundary list, and afterwards we concatenate them, so that we get all vertices of these boundaries in one list. fun2 then removes all vertices in the result of selectallwith from the region in which we work. That way the boundaries outside the circle are separated from the boundaries inside it.

We create the circle with the encircle function, to which we give the two spots we want to connect, the region they are in, the list of boundaries that should stay outside the circle, and the current game state, composed of the lives list and the game board representation. If we connect a spot with itself we have to distinguish between a single spot and a boundary that contains the spot. We do this by looking at the length of the boundary: if it has only one element, we leave xj...xn as an empty boundary, otherwise we keep it.

encircle a b s ls xs xxs
  | (length nlist) == 1 =
      (lives, (take s xxs)
              ++ [newpos (x1_xi ++ z) ++ selectallwith s xs xxs]
              ++ [(newpos (xi_xj ++ z)) ++ fun2 s (xs ++ [vcinbd a]) xxs]
              ++ (drop (s+1) xxs))
  | otherwise =
      (lives, (take s xxs)
              ++ [newpos (x1_xi ++ z ++ xj_xn) ++ selectallwith s xs xxs]
              ++ [newpos (xi_xj ++ z) ++ fun2 s (xs ++ [vcinbd a]) xxs]
              ++ (drop (s+1) xxs))
  where
    nlist = getoneBd (vcinbd a) s xxs
    z     = [Vc (length ls + 1) (vcinbd a) 1]
    x1_xi = take (vcbdpos a) nlist
    xi_xj = take ((vcbdpos b) - ((vcbdpos a) - 1)) (drop ((vcbdpos a) - 1) nlist)
    xj_xn = drop ((vcbdpos b) - 1) nlist
    lives = updatelives (vcid a) (updatelives (vcid b) ls) ++ [1]

The function works like connectvc: first we take the boundary to which our vertices belong as a list x1…xn. Then we split that list into the parts x1…xi, xi…xj and xj…xn. We rearrange them and create the two regions into which the original one is split. The first region gets the list x1…xi z xj…xn, where we have to update the vertices’ boundary positions with newpos. To the first region we append the list of all vertices which are supposed to be outside of the circle; we find them with the function selectallwith. The second region that we create contains xi…xj z and the vertices which we got from fun2, i.e. the vertices which are inside the created circle. The other regions stay untouched and are added before and after the region on which we worked. The lives list is again updated with the updatelives function: we run it first for vertex b on the given lives list and afterwards for vertex a on the resulting list. At the end we have to add the newly created vertex, with its one remaining life, to the list.

These two functions, conVcinBds and encircle, are the core of our game, and we could already start playing against ourselves. At this point it would be a very exhausting process, since we would have to type in every game state manually and there is still no verification that a move is legal. We have a lives list, but so far it only gets updated; no limit is enforced, so nothing yet prevents a vertex from being used after its lives have reached 0. Since we want the computer to play against itself to test different strategies, we need to create a successor function, which yields every possible state that we can reach in one turn.

When we connect two spots of the same boundary we create a circle. The size of the circle depends on how we draw it, and we can encircle spots to lock them in, or leave them outside to lock them out, so we have many different possibilities for our next state. To find all those possibilities we define the function powset, which gives us the power set describing all the combinations in which the boundaries can be encircled.

powset [] = [[]]
powset (x:xs) = ys ++ [x:y | y <- ys]
  where
    ys = powset xs
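As a quick sanity check, here is a standalone sketch of powset in action (reusing the definition above):

```haskell
-- power set via structural recursion: each element is either left out (ys)
-- or prepended to every subset that was built without it
powset :: [a] -> [[a]]
powset []     = [[]]
powset (x:xs) = ys ++ [x:y | y <- ys]
  where ys = powset xs

main :: IO ()
main = print (powset [1, 2 :: Int])
-- prints [[],[2],[1],[1,2]]
```

Note that for n boundaries this yields 2^n combinations, which is why the successor list can grow quickly.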

We take a list of numbers and get its power set. To get this list of numbers we first have to find out all the boundaries which can be encircled. The function circable takes the vertices that we use and gives us all boundaries of the region which are possible encircling candidates.

circable a b s xs = rmdups ((notinBd (vcinbd a) (vcinbd b) s xs) # getVcBd)

In this function we first remove all vertices which are inside the boundaries of the spots that we connect, so that we have a list of all remaining vertices. The next step is to find out their boundaries; we get them by running the function getVcBd over the list:

getVcBd xs = [vcinbd x | x <- xs]

Here we have a list with all the boundaries left, but there are still duplicates in that list so we define a function rmdups to remove them.

rmdups xs = map head (group (sort xs))

This function sorts the list and groups equal boundary numbers; those groups are lists of the duplicates. By taking only the first element of each group we get a filtered list with unique boundaries.
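A standalone sketch showing rmdups at work (group and sort come from Data.List):

```haskell
import Data.List (group, sort)

-- sort brings duplicates next to each other, group collects them,
-- and map head keeps one representative per group
rmdups :: Ord a => [a] -> [a]
rmdups xs = map head (group (sort xs))

main :: IO ()
main = print (rmdups [3, 1, 2, 1, 3 :: Int])
-- prints [1,2,3]
```

As a side effect the result is also sorted.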

Now that we can compute all possible combinations of boundaries that can be encircled, we can start creating the successor states. For encircling we go through the list of all possibilities and run the function encircle for each one. circpot does exactly this, and it stops when there are no possibilities left.

circpot n m s ls [] xxs = []

circpot n m s ls (x:xs) xxs = encircle n m s ls x xxs : circpot n m s ls xs xxs

To find all successor states we have to go through every possible connection, so we use the function fun1 which goes through a list and connects every element with another one or with itself. The result is a list of tuples with both elements representing one vertex which can be connected to the other one.

fun1 [] = []
fun1 (x:xs) = (x,x) : (fs x xs) ++ fun1 xs
  where
    fs x (y:ys) = (x,y) : fs x ys
    fs x []     = []
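A standalone sketch of fun1: it pairs every element with itself and with each later element, so every unordered connection appears exactly once:

```haskell
fun1 :: [a] -> [(a, a)]
fun1 []     = []
fun1 (x:xs) = (x,x) : fs x xs ++ fun1 xs
  where
    fs x (y:ys) = (x,y) : fs x ys
    fs _ []     = []

main :: IO ()
main = print (fun1 "abc")
-- prints [('a','a'),('a','b'),('a','c'),('b','b'),('b','c'),('c','c')]
```

For n vertices this produces n·(n+1)/2 candidate connections.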

We want to run this function on our regions, so we define go to take a region from our game board representation and apply fun1 to it:

go s xs = fun1 (xs !! s)

The next step will be to combine all those functions into one, so that we take all possible connections, go through them and create a list of the game states after the spots are connected. We should also check if these moves are legal by only allowing them when the number of lives is 1 or more if we connect two different vertices, or 2 or more if we want to connect a spot with itself.

ergo s [] ls xxs = []
ergo s (x:xs) ls xxs =
    if vcinbd n == vcinbd m
      then (if (livesofVc n ls) >= eqtest && (livesofVc m ls) >= eqtest
              then circpot n m s ls (powset (circable n m s xxs)) xxs
              else [([],[])])
           ++ ergo s xs ls xxs
      else (if (livesofVc n ls) >= 1 && (livesofVc m ls) >= 1
              then conVcinBds n m s ls xxs
              else ([],[]))
           : ergo s xs ls xxs
  where
    n = fst x
    m = snd x
    eqtest = if (vcid n) == (vcid m) then 2 else 1

This function takes the list of tuples that represent possible connections and decides whether the two vertices are in one boundary or in two different ones. If they are in one, we first need the equality test, to ensure that a spot which is connected to itself still has enough lives left. Illegal connections result in a tuple of an empty lives list and an empty game board. Those useless states will be filtered out later with the function filtered:

filtered xs = filter (/= ([],[])) xs

Back to the ergo function: we repeat it until every possible connection has been made once, and we get a list of tuples with the game states that result from these connections. When drawing a circle we use the function circpot to get at once all game states that can be created by locking spots inside the circle or leaving them outside. Connecting two spots from different boundaries is simply done by the function conVcinBds.

Now we have all possible connections and their results for one specific region, but we want them for all regions. For this reason we define the function uber, which takes a list of numbers, which we will fill with 0 up to the length of our game board list, and runs ergo for every element of that list.

uber [] ls xxs = []

uber (s:st) ls xxs = ergo s (go s xxs) ls xxs ++ uber st ls xxs

The function ober creates this list of numbers and returns every possible successor game state.

ober (ls,xxs) = uber [x | x <- [0..((length xxs)-1)]] ls xxs

The last step here is to filter this list of game states and remove the empty states that we created in ergo. We use the already presented function filtered for this purpose:

now _ (ls,xxs) = filtered (ober (ls,xxs))

This function now is the final step, so that we can start building up a game tree. The main input of now is the tuple containing the lives list and the game board representation, but we also give it another input which stands for the player. We will use this knowledge in our heuristic.

To decide which move to make we will use alpha-beta pruning. The following source code for alpha-beta pruning was provided by Prof. Dr. Manfred Schmidt-Schauss. Let’s take a look at how it works:

alphaBetaEinZug bewertung nachfolger maxtiefe neginfty infty spieler zustand =
    ab 0 neginfty infty spieler zustand
  where
    ab tiefe alpha beta spieler zustand
      | null (nachfolger spieler zustand) || maxtiefe <= tiefe =
          (bewertung zustand spieler, Nothing)
      | otherwise =
          let l = [z | z <- nachfolger spieler zustand]
          in case spieler of
               Maximierer -> maximize tiefe alpha beta l
               Minimierer -> minimize tiefe alpha beta l

    maximize tiefe alpha beta xs = it_maximize alpha neginfty Nothing xs
      where
        it_maximize alpha alpha_l z []     = (alpha_l, z)
        it_maximize alpha alpha_l z (x:xs) =
          let (arek, _) = ab (tiefe+1) alpha beta Minimierer x
              alpha_l'  = max alpha_l arek
          in if alpha_l' >= beta
               then (if debug
                       then trace ("Abgeschnitten (Groesse):" ++
                                   (show $ sizeOfTree nachfolger x Maximierer))
                       else id)
                    (alpha_l', Just x)
               else let (aneu, zneu) = if alpha >= alpha_l'
                                         then (alpha, z)
                                         else (alpha_l', Just x)
                    in it_maximize aneu alpha_l' zneu xs

    minimize tiefe alpha beta xs = it_minimize beta infty Nothing xs
      where
        it_minimize beta beta_l z []     = (beta_l, z)
        it_minimize beta beta_l z (x:xs) =
          let (brek, _) = ab (tiefe+1) alpha beta Maximierer x
              beta_l'   = min beta_l brek
          in if beta_l' <= alpha
               then (if debug
                       then trace ("Abgeschnitten (Groesse):" ++
                                   (show $ sizeOfTree nachfolger x Minimierer))
                       else id)
                    (beta_l', Just x)
               else let (bneu, zneu) = if beta <= beta_l'
                                         then (beta, z)
                                         else (beta_l', Just x)
                    in it_minimize bneu beta_l' zneu xs

The function takes a valuation function (bewertung), which will be our heuristic; a successor function (nachfolger), which will be our function now; the maximum depth of the search; and neginfty and infty, which are the starting values for alpha and beta, so they should be chosen larger in magnitude than every value that we can get from the heuristic. We also give the starting player (Maximierer or Minimierer) and the starting state, in our case ready n, which contains startlives n and start n. It gives us the best move for one turn, depending on the search depth and the quality of the heuristic.

alphaBetaEinZug consists of three subfunctions: ab, maximize and minimize. We start with the subfunction ab:

ab tiefe alpha beta spieler zustand
  | null (nachfolger spieler zustand) || maxtiefe <= tiefe =
      (bewertung zustand spieler, Nothing)
  | otherwise =
      let l = [z | z <- nachfolger spieler zustand]
      in case spieler of
           Maximierer -> maximize tiefe alpha beta l
           Minimierer -> minimize tiefe alpha beta l

Depending on which player’s turn it is, the function runs either maximize or minimize. Both subfunctions work essentially the same; their only difference is which value they work on: maximize updates the alpha value while minimize changes the beta. Here we go through maximize to show how it works:

maximize tiefe alpha beta xs = it_maximize alpha neginfty Nothing xs
  where
    it_maximize alpha alpha_l z []     = (alpha_l, z)
    it_maximize alpha alpha_l z (x:xs) =
      let (arek, _) = ab (tiefe+1) alpha beta Minimierer x
          alpha_l'  = max alpha_l arek
      in if alpha_l' >= beta
           then (if debug
                   then trace ("Abgeschnitten (Groesse):" ++
                               (show $ sizeOfTree nachfolger x Maximierer))
                   else id)
                (alpha_l', Just x)
           else let (aneu, zneu) = if alpha >= alpha_l'
                                     then (alpha, z)
                                     else (alpha_l', Just x)
                in it_maximize aneu alpha_l' zneu xs
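To illustrate the pruning idea separately from the thesis code, here is a minimal, self-contained alpha-beta sketch on a toy game tree. The Tree type, the leaf values and the alpha/beta window bounds are illustrative assumptions, not part of the Sprouts implementation:

```haskell
-- Leaves carry static evaluations; inner nodes list the successor states.
data Tree = Leaf Int | Node [Tree]

-- alphabeta returns the minimax value of the tree; the Bool says whose
-- turn it is. A branch is cut as soon as it can no longer influence the
-- result, just like the cutoffs in alphaBetaEinZug.
alphabeta :: Int -> Int -> Bool -> Tree -> Int
alphabeta _ _ _ (Leaf v) = v
alphabeta alpha beta maximizing (Node cs)
  | maximizing = goMax alpha cs
  | otherwise  = goMin beta cs
  where
    goMax a [] = a
    goMax a (t:ts)
      | a' >= beta = a'          -- beta cutoff: the minimizer avoids this branch
      | otherwise  = goMax a' ts
      where a' = max a (alphabeta a beta False t)
    goMin b [] = b
    goMin b (t:ts)
      | b' <= alpha = b'         -- alpha cutoff
      | otherwise   = goMin b' ts
      where b' = min b (alphabeta alpha b True t)

main :: IO ()
main = print (alphabeta (-100) 100 True tree)
  where
    -- in the right subtree, Leaf 9 is never visited: after Leaf 2 the
    -- minimizer already guarantees at most 2, below alpha = 3
    tree = Node [ Node [Leaf 3, Leaf 5]
                , Node [Leaf 2, Leaf 9] ]
-- prints 3
```

The same mechanism, with (heuristic, successor, state) in place of the toy tree, is what alphaBetaEinZug applies to our Sprouts game states.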

The final step to finish our program is to get the computer playing against itself, based on the knowledge it gets from alpha-beta pruning. To create a play we have to alternate between the turns of maximizer and minimizer. We define the function maximierer, which represents the turn of the maximizer in our game:

maximierer zustand tiefe = do
    let (_,a) = alphaBetaEinZug heuristic now tiefe (-3) (3) Maximierer zustand
    case a of
      (Just next) -> do
        putStrLn ("Maximizer: " ++ (show next))
        minimierer next tiefe
      Nothing -> putStrLn "Minimizer has won"

On its turn the maximizer takes the current game state together with the search depth and runs alphaBetaEinZug for this state. As we already know, alphaBetaEinZug is the function for alpha-beta pruning, which gives us the best move for one turn, depending on the search depth and the quality of the heuristic.

With the best move we have a successor state, which we use to run minimierer. If we get no move, we know that there is no legal move left and the other player wins the game. To let the viewer follow the game, we print out every turn.

The minimierer function works the same as maximierer, only that its heuristic prefers other values. Both functions alternate until one player has no legal move left and therefore loses the game.

minimierer zustand tiefe = do
    let (_,a) = alphaBetaEinZug heuristic now tiefe (-3) (3) Minimierer zustand
    case a of
      (Just next) -> do
        putStrLn ("Minimizer: " ++ (show next))
        maximierer next tiefe
      Nothing -> putStrLn "Maximizer has won"

To start those two functions we create the function main, where the user can set the number of starting spots and the depth to which alpha-beta pruning will search.

main = do

putStrLn "Set a starting number of spots:"

n <- getLine

putStrLn "Set a depth of search:"

m <- getLine

maximierer (ready (read n)) (read m)

Those inputs are used to generate the game board and are given to maximierer, who is our starting player, to make his turn and start the game.

At last we have to define a heuristic. In the following chapter we will think of different possible strategies, implement them in our code and evaluate their quality.


Correctness of Notation

All the work that we have done so far would be useless if the notation that we implemented were wrong, so this chapter gives examples of why this notation works correctly and covers all states.

First of all, we start with a simple connection of two vertices. If we connect spot A with spot B, we create a new spot on the drawn line, which already has two connections.

In our notation this would be ACBC.}!: A is connected to B via the point C, which is noted twice in the representation, once for each connection. On paper it would look like this:

Another possible move is to create a circle. We can do this by connecting a spot with itself or with another spot from the same boundary. In this example we connect A with itself, so that we create two regions.

The notation splits this boundary into two regions, the outer and the inner one: AB.}AB.}!. This gets clearer if we have more spots, so that we can encircle them.

In this example we started with three spots, A.B.C.}!, and created a circle, where we put B inside that circle while we left C outside. There is no legal move to connect B with C anymore. The notation expresses this by splitting them into different regions: AD.B.}AD.C.}!. Now we can only connect B with the spots in the inner circle (or region), and C can only be connected to the spots outside the circle.

The same applies to creating a circle by connecting two different vertices of the same boundary. In the example we have 3 starting spots and connect A and C first, so that we get ADCD.B.}!. All spots are still in the same region, but A, C and D belong to the same boundary. In the next turn we create a circle by connecting A to D, while encircling the spot B. In the notation this means: AEDCD}ADE.B}!.

We can’t connect B and C anymore, as they are in different regions, and we see this in the image as well, that we can’t draw a line between the two spots without crossing another line.

In the next example we will show how the position of the connected vertices in the notation influences the drawn line. We start with 4 spots and connect A with B and D with C, so that our representation is AEBE.DFCF.}!.

In the next turn we want to connect E with F. Here we have four different possibilities of how to connect them. In the notation it depends on which letters we connect: we can connect the first E with the first F, or the first E with the second F, and so on. If we connect the first E with the first F we get the game representation AEGFCFDFGEBE.}!; drawn, it will look like this:

But why does it look like that? The answer becomes clear when we go further and connect A with D to create a circle. From the notation we get AHDFGEBE.}AEGFCFGDH.}!. Here we have B encircled, so that B and C are split into two different regions and can’t be connected anymore. The same happens if we connect the last E with the last F, only that this time it is C which gets encircled by a connection between A and D.

To draw a straight connection between E and F we have to set the connection between the second E and the first F in our representation AEBE.DFCF.}!. Here we get AEBEGFCFDFGE.}!, so that if we connect A with D this time we would get a small circle, with B and C on the outside. We can also draw a line around A and D to connect E and F; in that case we have to connect the first E with the second F in our notation.

The difference becomes visible if we try to connect B and C: here we would encircle A and D, while when drawing a straight line we would keep them outside of the circle.

For our computer program it doesn’t matter how we draw the lines, and since we try all possible connections, all cases are covered. With these examples we wanted to show that the representation takes care of the different ways of drawing lines and that it works as intended by creating regions, so that no lines get crossed.


Heuristics

The game Sprouts can get very complex for a higher number of starting spots. To find the right move we use search trees, but going through every node can take a long time, depending on the number of starting spots. Therefore we use heuristics to calculate an estimated value, so that we don’t have to search deep and can still find an approximately good move.

The most important property of our heuristic is that it finds final states and values them correctly, so that the player will choose them no matter what his current strategy is. That means the player adheres to his strategy until he finds a possibility to win the game, and then just goes for the win. We ensure this by adding a case to our heuristic: if a final state is found, it is given the maximal value, so that it will definitely be chosen as the optimal move.

heuristic (a,b) sp
  | now sp (a,b) == [] = case sp of
      Maximierer -> -2
      Minimierer -> 2
  | ...

In this example the value is set to 2 (or -2), so we would have to make sure that these values are the maximum and minimum and that every other value we get from our heuristic lies between them.

One simple approach for a heuristic is to maximize the number of regions, or the opposite: to minimize the number of regions and try to connect vertices only within their region without creating circles. We implement this heuristic by taking the length of the game list and setting this length as the value of the state:

heuristic (a,b) sp
  | now sp (a,b) == [] = case sp of
      Maximierer -> -1000000
      Minimierer -> 1000000
  | otherwise = case sp of
      Maximierer -> -summe
      Minimierer -> summe
  where
    summe = length b

In this function we want to minimize the number of regions. We do this by negating the successor state’s length for the maximizer, so that he will try to find the one with the smallest length, or more specifically: with the fewest regions; the minimizer on the other hand gets the length of the list as its value and tries to minimize it. The other variant, maximizing the number of regions, can be obtained by changing the position of the “-”.

heuristic (a,b) sp
  | now sp (a,b) == [] = case sp of
      Maximierer -> -1000000
      Minimierer -> 1000000
  | otherwise = case sp of
      Maximierer -> summe
      Minimierer -> -summe
  where
    summe = length b

Another strategy could be maximizing the boundary size. For this we have to find the boundary with the most elements and if possible try to add another boundary to it.

We do this by searching for the biggest boundary in every region and then comparing these to find the biggest boundary in the game. The bigger the boundary, the higher the value. The idea behind this strategy is to first create a boundary that is as big as possible and only afterwards create circles, so that no single boundaries get encircled.

To find the biggest boundary in every region we go through the vertices of a region and check their boundary positions. The vertex with the highest boundary position must be the one in the biggest boundary.

tok [] s = s
tok ((Vc _ _ m):xs) s
  | m > s     = tok xs m
  | otherwise = tok xs s

maximumBoundLength reg = tok reg 0

We map this function over every region of the game board representation, so that we get a list with the biggest boundary of every region. From this list we take the maximum:

func regions = maximum (map maximumBoundLength regions)
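A standalone sketch of the biggest-boundary search; the Vc type here, with fields for id, boundary number and boundary position, is an assumption modelled on the vertex representation used in the thesis:

```haskell
-- assumed vertex type: Vc <id> <boundary> <position in boundary>
data Vc = Vc Int Int Int

-- tok keeps the largest boundary position seen so far
tok :: [Vc] -> Int -> Int
tok [] s = s
tok ((Vc _ _ m):xs) s
  | m > s     = tok xs m
  | otherwise = tok xs s

maximumBoundLength :: [Vc] -> Int
maximumBoundLength reg = tok reg 0

func :: [[Vc]] -> Int
func regions = maximum (map maximumBoundLength regions)

main :: IO ()
main = print (func [[Vc 1 1 1, Vc 2 1 2], [Vc 3 1 3, Vc 4 2 1]])
-- prints 3: the second region contains a vertex at boundary position 3
```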

The heuristic uses the result of this function and maximizes it, so that the player who uses this heuristic tries to build up the biggest boundary possible.

heu1 (a,b) sp
  | now sp (a,b) == [] = case sp of
      Maximierer -> -1000000
      Minimierer -> 1000000
  | otherwise = case sp of
      Maximierer -> result
      Minimierer -> -result
  where
    result = func b

So far we have only based our strategies on the game board representation, but another important piece of information is the lives list, and since the number of possible turns depends on the number of lives, a successful strategy should make use of this information.

In the last state, where no more legal move is possible, we only have regions left with zero lives or one life. We don’t want to reach that state, so the primary goal should be to reach its predecessor. To find it we need to determine the number of turns until the final state is reached.

We go through every region and sum up the lives of the vertices in that region. This sum we divide by 2 to get the maximal number of possible turns in that region; the remainder of the division is dropped, since it cannot become a move. With the number of moves per region known, we sum these up and check whether the sum is odd or even. Since we want to make the last move, the desirable number of moves is odd.

The supporting function findlivesinReg goes through the region and returns a list with the lives of its vertices:

findlivesinReg ls [] = []

findlivesinReg ls (x:xs) = ls !! (x-1) : findlivesinReg ls xs
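A standalone sketch: given the lives list [3,2,1] (vertex 1 has 3 lives, vertex 2 has 2, vertex 3 has 1) and the vertex ids [1,3], the 1-based lookup yields their lives:

```haskell
-- look up each vertex id (1-based) in the lives list
findlivesinReg :: [Int] -> [Int] -> [Int]
findlivesinReg ls []     = []
findlivesinReg ls (x:xs) = ls !! (x-1) : findlivesinReg ls xs

main :: IO ()
main = print (findlivesinReg [3, 2, 1] [1, 3])
-- prints [3,1]
```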

We use this function after we have found the vertices’ IDs and removed their duplicates; otherwise we would count more lives in a region than there are possible moves. After creating the list with the number of lives per vertex, we sum them up and divide by two.

bon ls [] xs = []
bon ls (u:us) xs =
    quot (sum (findlivesinReg ls (rmdups (getVcinReg u xs)))) 2 : bon ls us xs

Here we get one number per region, which we use to define our heuristic: we sum up the values of all regions as well. If the result is odd the move is preferred, otherwise it gets a negative value.
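The parity idea can be sketched in isolation. Here the game state is simplified to a list of regions, each given directly as the lives of its (already de-duplicated) vertices; this simplified input format is an assumption for illustration only:

```haskell
-- each region contributes (sum of its lives) `div` 2 possible moves;
-- the remainder is dropped because a move always costs two lives
movesPerRegion :: [[Int]] -> [Int]
movesPerRegion regions = [sum ls `div` 2 | ls <- regions]

-- a state is desirable when the total number of remaining moves is odd,
-- i.e. the player to move can make the last move
desirable :: [[Int]] -> Bool
desirable = odd . sum . movesPerRegion

main :: IO ()
main = print (desirable [[3, 2], [1, 1, 1]])
-- prints True: 5 `div` 2 + 3 `div` 2 = 2 + 1 = 3 moves left, which is odd
```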

heu2 (a,b) sp
  | now sp (a,b) == [] = case sp of
      Maximierer -> -1000000
      Minimierer -> 1000000
  | otherwise = case sp of
      Maximierer -> free
      Minimierer -> -free
  where
    free = if odd (sum (bon a [0..((length b)-1)] b))
             then -100
             else 100

From this heuristic we expect the best results, as it uses the most information to find the right move.

Evaluation

To compare the strategies and to evaluate their quality we take a look at which player is supposed to win the n-spots game. In “Computer Analysis of Sprouts”, David Applegate, Guy Jacobson and Daniel Sleator6 determined the winners of the n-spots game for all games up to 11 spots. We will use their table to compare the winners under our strategies with the expected winner.

The results in their table refer to the starting player: “L” stands for “the starting player has lost” and “W” for “the starting player has won”. The letters marked with a “*” are the results which were newly proved in their work.

6 Source: “Computer Analysis of Sprouts” by David Applegate, Guy Jacobson and Daniel Sleator, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.32.9472&rep=rep1&type=pdf

First we analyze the heuristic which tries to maximize the number of regions, i.e. tries to draw as many circles as possible.

In this table we let both players play with the same strategy against each other: both try to maximize the number of regions. With a bigger depth of search the results should approach the winners and losers from the table by David Applegate, Guy Jacobson and Daniel Sleator. As possible depths in this test we go from 3 to 6; otherwise the time to find a winner would grow too large.

We can see from this table that, whatever depth we take, the starting player wins the 3-spots game. This matches the expected winner. For the other n-spots games the winners and losers vary. In the 4-spots game it depends entirely on the depth of search who wins: if the depth is even the first player wins, otherwise the second player wins. For 5 spots most games are won by player one, as expected, but there is still one loss at search depth 5, which is not a perfect outcome.

In the 6- and 7-spots games the expected winner is the second player, but as we can see the first player wins most games here, which means that this strategy is not favorable for the second player if the first player plays the same strategy. In the end the first player won 14 out of 19 games, which is a great outcome for him, but it also shows us the weakness of the strategy when playing second.

Though the second player didn’t run well with this strategy we will change his strategy to creating the biggest boundary and hope that he will do better there.

Both players maximizing the number of regions:

                      Number of Spots
                      3   4   5   6   7
Depth of search   3   W   L   W   L   W
                  4   W   W   W   W   W
                  5   W   L   L   W   L
                  6   W   W   W   W

As we can see there is no change in the 3-spots game, and in the 4-spots game the winner still depends on the depth of search. In the 5-spots game the new strategy got a better result by winning at search depth 6, so that we have a tie there, and the second player also does better in the 6-spots game. Unfortunately he loses all games of the 7-spots game, which he is actually expected to win, but with 6 out of 19 games won he has better chances than with the region-maximizing strategy.

We change the strategy of the second player again and hope that this time he is more successful. The second player will now use the strategy based on the lives left in each region.

In this table we can see that this strategy was the worst: only two wins for the second player in 19 games, and in particular he lost all the games which were expected to be won by him (the 6- and 7-spots games).

So far we have seen that the first strategy, maximizing the number of regions, was the most successful for the starting player, and that the second player had his best chances with the second strategy, building the biggest boundary. Let’s see if the first player gets even better results when playing with an alternative strategy. The next table shows the results of the starting player when he uses the boundary-maximizing strategy against the region-maximizing strategy.

Second player playing the biggest-boundary strategy:

                      Number of Spots
                      3   4   5   6   7
Depth of search   3   W   L   W   L   W
                  4   W   W   W   W   W
                  5   W   W   L   L   W
                  6   W   L   L   W

Second player playing the lives-based strategy:

                      Number of Spots
                      3   4   5   6   7
Depth of search   3   W   W   L   W   W
                  4   W   W   W   W   W
                  5   W   W   L   W   W
                  6   W   W   W   W
