Data Science Asked on September 29, 2021
I’m trying to build a simulation for this question:
"There are 50 cards of 5 different colors. Each color has cards
numbered between 1 to 10. You pick 2 cards at random. What is the
probability that they are not of same color and also not of same
number?"
(From Glassdoor)
The expected result is about 73%, but my code consistently gives around 72% or 71.8%.
Here is my code:
import numpy as np
# Building a deck of 10 cards for each of the 5 colors
cards = np.array([c+str(n) for c in ("A", "B", "C", "D", "E") for n in range(1, 11)])
def random_cards_differ():
    """Returns True if two random cards differ"""
    a, b = np.random.choice(cards, 2, replace=False)
    if a[0] != b[0] and a[1] != b[1]:
        return True
    else:
        return False
nb_success = 0
nb_tries = 100000
for i in range(nb_tries):
    if random_cards_differ():
        nb_success += 1
print(nb_success / nb_tries)
>>> 0.71892
Is this normal? Is there a mistake in my code or is it a "random gotcha" caused by some seed or something else?
There's an error in your code. The deck

cards = np.array([c+str(n) for c in ("A", "B", "C", "D", "E") for n in range(1, 11)])

contains two-character numbers like "A10", but the check

if a[0] != b[0] and a[1] != b[1]:

compares only the first character of each number. For example, a = "B10" and b = "A1" differ in both color and number (10 vs 1), yet a[1] == b[1] == "1", so the function wrongly returns False. This is why you're consistently underestimating the proportion of differing pairs.
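To see the problem concretely, here is a minimal check of that comparison on two hypothetical draws:

```python
# Two cards with different colors AND different numbers (10 vs 1)
a, b = "B10", "A1"

# The original check compares only the first character of each number,
# so '1' == '1' makes the numbers look identical.
print(a[0] != b[0] and a[1] != b[1])  # False, even though the cards differ in both
```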
An easy fix is to number the cards 0–9, so every number is a single character:

cards = np.array([c+str(n) for c in ("A", "B", "C", "D", "E") for n in range(0, 10)])

which is also more readable. But if I were doing this, I might use itertools.product.
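For instance, a sketch of that itertools.product approach (an illustrative variant, not the original code) can represent each card as a (color, number) tuple, which avoids string parsing entirely:

```python
import itertools
import random

# Deck of 50 cards as (color, number) tuples; no string indexing needed
deck = list(itertools.product("ABCDE", range(1, 11)))

def random_cards_differ():
    """True if two cards drawn without replacement share neither color nor number."""
    (c1, n1), (c2, n2) = random.sample(deck, 2)
    return c1 != c2 and n1 != n2

nb_tries = 100_000
nb_success = sum(random_cards_differ() for _ in range(nb_tries))
print(nb_success / nb_tries)  # close to the theoretical 36/49 ~ 0.735
```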
Even after this fix, it's normal not to hit the theoretical value exactly on every run, but a consistent under- or over-estimation indicates a bias (i.e. a bug).
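As a sanity check against the simulation, the exact probability can be brute-forced over all C(50, 2) = 1225 pairs (a quick verification sketch, not part of the original answer):

```python
from fractions import Fraction
from itertools import combinations, product

deck = list(product("ABCDE", range(1, 11)))

# Count unordered pairs that differ in both color and number
favorable = sum(1 for (c1, n1), (c2, n2) in combinations(deck, 2)
                if c1 != c2 and n1 != n2)
total = len(deck) * (len(deck) - 1) // 2  # C(50, 2) = 1225

print(Fraction(favorable, total))  # 36/49, i.e. about 73.5%
```

This matches the quoted "73%" figure, so a simulation hovering near 0.719 really does indicate a bias rather than sampling noise.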
Correct answer by bogovicj on September 29, 2021