Cross Validated Asked by Vin on January 13, 2021
I’m reading Building Intelligent Interactive Tutors (Woolf, 2009) on student models for ITSs. On page 261, the author presents an example of a simple Bayesian network ($S \rightarrow E$), where $S$ is the unobserved skill variable (with states: $0$ = knows; $1$ = doesn’t know) and $E$ is the observed evidence variable (with states: $0$ = incorrect; $1$ = correct).
The author goes on to compute the posterior probability $P(S|E)$ for $S=1$ through Bayes’ rule, assuming the probabilities encoded in the code below: $P(S=1) = 0.5$, $P(E=1|S=1) = 0.8$, and $P(E=1|S=0) = 0.05$.
He reaches the answers $P(S=1|E=1) \approx 0.94$ and $P(S=1|E=0) \approx 0.17$.
Then, he claims the revised posterior probability for $S=1$ is approximately $0.78$ for the first case and approximately $0.06$ for the second case. Where do these posterior probability values come from? What do they represent?
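For reference, the two posteriors in question can be reproduced by hand with a single application of Bayes’ rule, using the numbers encoded in the CPDs of the pgmpy code ($P(S=1) = 0.5$, $P(E=1|S=1) = 0.8$, $P(E=1|S=0) = 0.05$):

```python
# Single-step Bayes update for P(S=1 | E), using the CPT values
# from the pgmpy code in the question.
p_s1 = 0.5            # prior P(S=1)
p_e1_s1 = 0.8         # P(E=1 | S=1)
p_e1_s0 = 0.05        # P(E=1 | S=0)

# P(S=1 | E=1) = P(E=1|S=1) P(S=1) / [P(E=1|S=1) P(S=1) + P(E=1|S=0) P(S=0)]
post_correct = (p_e1_s1 * p_s1) / (p_e1_s1 * p_s1 + p_e1_s0 * (1 - p_s1))

# Same update for E=0, with P(E=0|S) = 1 - P(E=1|S)
post_incorrect = ((1 - p_e1_s1) * p_s1) / (
    (1 - p_e1_s1) * p_s1 + (1 - p_e1_s0) * (1 - p_s1))

print(round(post_correct, 4), round(post_incorrect, 4))  # 0.9412 0.1739
```

These match the pgmpy output below, so the single-evidence inference itself is not in doubt; the question is where the book’s "revised" values come from.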
I’ve coded the example in Python (using the pgmpy library) and got the same values for $P(S=1|E=1)$ and $P(S=1|E=0)$. Here’s the code and its output (NewtonsLaw is $S$ and Problem023 is $E$):
from pgmpy.models import BayesianModel
from pgmpy.inference import VariableElimination
from pgmpy.factors.discrete import TabularCPD
# Bayesian network structure: skill -> evidence
model = BayesianModel([('NewtonsLaw', 'Problem023')])
# CPD for the evidence node; columns correspond to NewtonsLaw = 0, 1
cpd_problem_023 = TabularCPD('Problem023', 2, [[0.95, 0.2],
                                               [0.05, 0.8]],
                             evidence=['NewtonsLaw'], evidence_card=[2])
# Uniform prior over the skill; values must have shape (2, 1)
cpd_newtons_law = TabularCPD('NewtonsLaw', 2, [[0.5], [0.5]])
# Add probabilities to model
model.add_cpds(cpd_problem_023, cpd_newtons_law)
model.check_model()
# Query the posterior over the skill given a correct / incorrect answer
inference = VariableElimination(model)
posterior_newtons_law_right = inference.query(['NewtonsLaw'], evidence={'Problem023': 1})
print(posterior_newtons_law_right)
posterior_newtons_law_wrong = inference.query(['NewtonsLaw'], evidence={'Problem023': 0})
print(posterior_newtons_law_wrong)
Output:
+--------------+-------------------+
| NewtonsLaw | phi(NewtonsLaw) |
|--------------+-------------------|
| NewtonsLaw_0 | 0.0588 |
| NewtonsLaw_1 | 0.9412 |
+--------------+-------------------+
+--------------+-------------------+
| NewtonsLaw | phi(NewtonsLaw) |
|--------------+-------------------|
| NewtonsLaw_0 | 0.8261 |
| NewtonsLaw_1 | 0.1739 |
+--------------+-------------------+
If the answer still matters: to update the posterior probabilities (the "revised posteriors"), I think you have to perform parameter learning, as explained here: http://pgmpy.org/models.html
Your inference with pgmpy is correct, but you cannot derive revised posteriors by inference alone; instead, learn the parameters from data with model.fit(...).
Answered by Dyks Vail on January 13, 2021