I've been working on a problem from Python Programming by John Zelle. The problem is to design a basic Blackjack program that estimates what percentage of the time a blackjack dealer will bust, given the rule that he must hit until his total is 17 or higher. The program reports the bust percentage separately for each possible initial card, since the dealer's first card is dealt face up.
The problem I have run into is that, when I cross-reference them with published Blackjack tables, the program gives me good percentages for every initial value except Ace and Ten.
from random import randrange

def main():
    printIntro()
    n = getInput()
    busts = simBlackjack(n)
    printSummary(n, busts)
def printIntro():
    print "Hello, and welcome to Blackjack.py."
    print "This program simulates the likelihood"
    print "for a dealer to bust."

def getInput():
    # Note: in Python 2, input() evaluates the entry as an expression;
    # int(raw_input(...)) would be the safer choice.
    n = input("How many games do you wish to simulate: ")
    return n
def simBlackjack(n):
    busts = [0] * 10              # busts[0] is for Ace, busts[b] for value b + 1
    for b in range(10):
        for i in range(n):
            x = b + 1             # dealer's initial card (Ace counted as 1)
            handAce = (b == 0)
            while x < 17:         # dealer must hit below 17
                add = randrange(1, 14)
                if add > 10:      # Jack, Queen, King count as 10
                    add = 10
                elif add == 1:    # Ace, counted as 1 for now
                    handAce = True
                x = x + add
            # Count one ace as 11 if that puts the hand in 17-21
            if handAce and 17 <= x + 10 <= 21:
                x = x + 10
            if x > 21:
                busts[b] = busts[b] + 1
    return busts
def printSummary(n, busts):
    for b in range(10):
        if b == 0:
            print "When the initial card was Ace, the dealer busted %d times in %d games. (%0.1f%%)" % (busts[0], n, busts[0] / float(n) * 100)
        else:
            print "When the initial value was %d, the dealer busted %d times in %d games. (%0.1f%%)" % (b + 1, busts[b], n, busts[b] / float(n) * 100)

if __name__ == "__main__":
    main()
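As a sanity check on the draw logic in simBlackjack (Python 3 syntax; the draw helper is my own name, not part of the program): every draw maps randrange(1, 14) so that 11-13 become 10, which gives ten-valued cards a fixed 4/13 probability on every single draw, regardless of what was drawn before.

```python
from collections import Counter
from random import randrange

def draw():
    """One dealer draw, mirroring simBlackjack: 11-13 (J, Q, K) count as 10."""
    card = randrange(1, 14)
    return 10 if card > 10 else card

# Ace and each of 2-9 should land near 1/13 of draws, and ten-valued
# cards near 4/13, no matter how many cards were drawn before --
# which is exactly the behavior of an infinitely large shoe.
counts = Counter(draw() for _ in range(130000))
print(counts[10] / 130000.0)   # near 4/13, about 0.308
```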
If n = 1,000,000, I get roughly 11.5% and 21.2% respectively, which differs significantly from the 17% and 23% that online tables give. Can anybody tell me what the problem is?
The answer turned out to be that my program's percentages assume an infinitely large deck shoe (every card is effectively drawn with replacement), which shifts the probability tables. The tables I was originally comparing against were for a single deck. After further research I found plenty of sites listing precisely my values.
That said, thank you all for your help. Jonathan Vanasco's way of approaching the problem was certainly better and more scalable; as a newbie I found it very educational.
I do think it is interesting that the infinitely large shoe affects the fringes of the probability tables the most.