Tags: python, class, object, machine-learning, supervised-learning

I am trying to create different objects of the same type for learning algorithms in the Prisoner's Dilemma, but Python confuses them


Obviously, I imagine it is not actually Python that is confusing them; I just can't figure out the source of my bug.

Here is my code:

import numpy as np

class EW_learner:
    #p is the probability distribution (from which we chose action C or D)
    #pp are the potential payoffs (if 1 action was taken consistently)
    #s is my real payoff or score (i.e. negated total prison years)
    ##pp[0] and action = 0 is cooperate (C) => (0=C)
    ##pp[1] and action = 1 is defect (D) => (1=D)
    #ps is a list of the probability distribution at each round
    #ss is a list of the scores at each round
    def __init__(self, lr, p=[0.5,0.5], pp=[0,0], s=0, ps=[], ss=[]):
        self.lr = lr
        self.p = p
        self.pp = pp
        self.s = s
        self.ps = ps
        self.ss = ss

    #Return an action (C or D) => (0 or 1)
    def action(self):
        return int(np.random.choice(2, 1, p=self.p))

    def update(self, my_act, adv_act):
        if (my_act == 0) and (adv_act == 0):
            self.s -= 3
            self.pp[0] -= 3
            self.pp[1] -= 5
        elif (my_act == 1) and (adv_act == 0):
            self.s -= 5
            self.pp[0] -= 3
            self.pp[1] -= 5
        elif (my_act == 0) and (adv_act == 1):
            #self.s -= 0
            #self.pp[0] -= 0
            self.pp[1] -= 1
        elif (my_act == 1) and (adv_act == 1):
            self.s -= 1
            #self.pp[0] -= 0
            self.pp[1] -= 1

        self.p[0] = np.power(1.0+self.lr, self.pp[0])
        self.p[1] = 1 - self.p[0]

    def collect_data(self):
        (self.ps).append(self.p)
        (self.ss).append(self.s)

def play(p1, p2, n_rounds):
    for r in range(n_rounds):
        act1 = p1.action()
        act2 = p2.action()
        p1.update(act1, act2)
        p2.update(act2, act1)
        p1.collect_data()
        p2.collect_data()
    print('P1 Score: ' + str(p1.s) + ', P2 Score: ' + str(p2.s))
    print('P1 ProbDist: ' + str(p1.p) + ', P2 ProbDist: ' +  str(p2.p))
    return p1.ss, p2.ss, p1.ps, p2.ps

lucas = EW_learner(0.1)
paula = EW_learner(0.9)
sim = play(lucas, paula, 10)

When I call sim[0], which should output p1.ss (lucas's scores at each round), I get this:

In [85]: sim[0]

Out[85]: 
[-5,
 0,
 -6,
 -1,
 -6,
 -6,
 -7,
 -7,
 -8,
 -8,
 -9,
 -9,
 -10,
 -10,
 -11,
 -11,
 -12,
 -12,
 -13,
 -13]

which has length 20, which makes no sense since there are only 10 rounds. This output is also identical to sim[1], which should be paula's scores at each round. For some reason the two score lists merged into one, alternating lucas's and paula's scores. I am aware I could easily split the list using % 2, but I'd rather understand the bug.


Solution

  • The culprit is the mutable default arguments. Default values are evaluated once, when the def statement runs, not once per call, so the single list bound to ss=[] (and ps=[]) is shared by every EW_learner instance that doesn't pass its own list: lucas and paula both append to the same two lists, which is why their scores interleave. Create fresh lists inside __init__ instead:

    import numpy as np
    
    class EW_learner():
        #p is the probability distribution (from which we chose action C or D)
        #pp are the potential payoffs (if 1 action was taken consistently)
        #s is my real payoff or score (i.e. negated total prison years)
        ##pp[0] and action = 0 is cooperate (C) => (0=C)
        ##pp[1] and action = 1 is defect (D) => (1=D)
        #ps is a list of the probability distribution at each round
        #ss is a list of the scores at each round
        def __init__(self, lr, p=None, pp=None, s=0):
            self.lr = lr
            # p and pp are mutable defaults too, so give each
            # instance its own fresh list as well
            self.p = [0.5, 0.5] if p is None else p
            self.pp = [0, 0] if pp is None else pp
            self.s = s
            self.ps = []
            self.ss = []
    

    sim[0] results in:

    [0, -5, -6, -7, -8, -9, -10, -11, -12, -13]
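The underlying pitfall can be reproduced in isolation. A minimal sketch (the class and attribute names here are illustrative, not from the original code):

```python
class Learner:
    # BUG: the list bound to `ss=[]` is created once, when this
    # `def` line executes, and is then shared by every instance
    # that relies on the default.
    def __init__(self, ss=[]):
        self.ss = ss

a = Learner()
b = Learner()
a.ss.append(1)
b.ss.append(2)
print(a.ss is b.ss)  # True: both attributes refer to the same list
print(a.ss)          # [1, 2]: b's append landed in a's list too

class FixedLearner:
    # The usual idiom: use None as a sentinel and build a fresh
    # list inside __init__, which runs once per instance.
    def __init__(self, ss=None):
        self.ss = [] if ss is None else ss

c = FixedLearner()
d = FixedLearner()
c.ss.append(1)
print(c.ss is d.ss)  # False: each instance owns its own list
print(d.ss)          # []
```

Note that p=[0.5,0.5] and pp=[0,0] in the original __init__ share the same pitfall, so both learners would also share one probability distribution and one payoff list. Separately, collect_data appends a reference to self.p, so ps ends up holding many references to the one list that keeps being mutated; appending list(self.p) would store a snapshot of each round instead.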