Sunday, November 15, 2015

CP7002 PROBABILISTIC REASONING SYSTEMS

UNIT I           REPRESENTATION

Probability Theory, Graphs, Bayesian network representation: Bayes networks, Independence in graphs – Undirected graphical models: Parameterization, Markov network independencies – Conditional Bayesian networks.
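As a rough illustration of the representation ideas in this unit, the following Python sketch encodes a small Bayesian network as a set of conditional probability tables and evaluates the chain-rule factorization P(x1..xn) = prod_i P(xi | parents(xi)). The three-node structure and every probability value are invented purely for this example.

from itertools import product

# Each node stores its parents and a CPT mapping a tuple of parent values
# to P(node = True | parents). All names and numbers are illustrative.
network = {
    "Rain":      {"parents": [],       "cpt": {(): 0.2}},
    "Sprinkler": {"parents": ["Rain"], "cpt": {(True,): 0.01, (False,): 0.4}},
    "WetGrass":  {"parents": ["Rain", "Sprinkler"],
                  "cpt": {(True, True): 0.99, (True, False): 0.8,
                          (False, True): 0.9, (False, False): 0.0}},
}

def joint_probability(assignment):
    """Chain rule for Bayesian networks: product of local CPT entries."""
    p = 1.0
    for var, node in network.items():
        parent_vals = tuple(assignment[q] for q in node["parents"])
        p_true = node["cpt"][parent_vals]
        p *= p_true if assignment[var] else 1.0 - p_true
    return p

# Sanity check: the joint distribution sums to 1 over all assignments.
total = sum(joint_probability(dict(zip(network, vals)))
            for vals in product([True, False], repeat=len(network)))
print(f"sum over joint = {total:.6f}")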
   
UNIT II        TEMPLATE BASED REPRESENTATION

Temporal models (Dynamic Bayesian networks, Hidden Markov Models) – Directed probabilistic models for object-relational domains – Inference in temporal models: Kalman filters.
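For the inference-in-temporal-models topic, here is a minimal one-dimensional Kalman filter sketch: predict and update steps for a random-walk state observed with Gaussian noise. The noise variances and the simulated data below are invented for illustration.

import random

def kalman_step(mean, var, z, process_var=1.0, obs_var=2.0):
    """One predict + update step for x_t = x_{t-1} + w,  z_t = x_t + v."""
    # Predict: a random-walk transition leaves the mean and inflates the variance.
    pred_mean, pred_var = mean, var + process_var
    # Update: blend prediction and observation by the Kalman gain.
    gain = pred_var / (pred_var + obs_var)
    new_mean = pred_mean + gain * (z - pred_mean)
    new_var = (1.0 - gain) * pred_var
    return new_mean, new_var

random.seed(0)
true_x, mean, var = 0.0, 0.0, 10.0
for t in range(5):
    true_x += random.gauss(0.0, 1.0)            # latent state evolves
    z = true_x + random.gauss(0.0, 2.0 ** 0.5)  # noisy observation
    mean, var = kalman_step(mean, var, z)
    print(f"t={t}  z={z:+.2f}  posterior mean={mean:+.2f}  var={var:.2f}")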

UNIT III        INFERENCE

Exact inference: Variable elimination – Exact inference: Clique trees (Junction trees) – Approximate inference: Forward sampling, Importance sampling, MCMC – MAP inference: Variable elimination for MAP, Max-product in clique trees.
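The approximate-inference topics can be previewed with a likelihood-weighting (importance-sampling) sketch on a two-node network Rain -> WetGrass: non-evidence variables are sampled forward, and each sample is weighted by the probability of the evidence. The structure and all probabilities are invented for illustration.

import random

P_RAIN = 0.2
P_WET_GIVEN = {True: 0.9, False: 0.2}   # P(WetGrass = True | Rain)

def likelihood_weighting(n_samples, evidence_wet=True):
    """Estimate P(Rain = True | WetGrass = evidence_wet)."""
    weighted_true = total_weight = 0.0
    for _ in range(n_samples):
        rain = random.random() < P_RAIN   # sample the non-evidence variable
        p_wet = P_WET_GIVEN[rain]
        # Evidence is not sampled; the sample is weighted by its likelihood.
        w = p_wet if evidence_wet else 1.0 - p_wet
        total_weight += w
        if rain:
            weighted_true += w
    return weighted_true / total_weight

random.seed(1)
print("P(Rain | WetGrass=True) ~", round(likelihood_weighting(100_000), 3))
# Exact answer for these numbers: 0.2*0.9 / (0.2*0.9 + 0.8*0.2) = 0.529...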

UNIT IV       LEARNING

Learning graphical models – Parameter estimation: Maximum-likelihood estimation, MLE for Bayesian networks, Bayesian parameter estimation – Structure learning in Bayesian networks: Constraint-based methods, structure scores, structure search – Partially observed data: Parameter estimation, Learning models with hidden variables – Learning undirected models: Maximum likelihood.
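For complete data and known structure, MLE for a Bayesian network reduces to counting: each CPT entry is estimated by a relative frequency within its parent configuration. A short sketch, reusing the toy Rain -> WetGrass structure with an invented dataset:

from collections import Counter

data = [  # (rain, wet_grass) observations, invented for illustration
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
    (False, False), (True, True),
]

# MLE for P(Rain = True): fraction of samples in which it rained.
p_rain = sum(r for r, _ in data) / len(data)

# MLE for P(WetGrass = True | Rain = r): counts within each parent value.
counts = Counter((r, w) for r, w in data)
p_wet_given = {
    r: counts[(r, True)] / (counts[(r, True)] + counts[(r, False)])
    for r in (True, False)
}

print(f"P(Rain=True) = {p_rain:.3f}")
for r, p in p_wet_given.items():
    print(f"P(WetGrass=True | Rain={r}) = {p:.3f}")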

UNIT V      ACTIONS AND DECISIONS

Causality – Utilities and decisions – Structured decision problems.
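A one-shot decision under uncertainty illustrates the utilities-and-decisions topic: choose the action with maximum expected utility, EU(a) = sum_s P(s) U(a, s). The action set, utility table, and rain probability below are invented for illustration.

P_RAIN = 0.3

# utility[action][rain] -- an invented utility table.
utility = {
    "take_umbrella":  {True: 80, False: 70},
    "leave_umbrella": {True: 0,  False: 100},
}

def expected_utility(action):
    """Expected utility of an action over the two states of the world."""
    return P_RAIN * utility[action][True] + (1 - P_RAIN) * utility[action][False]

best = max(utility, key=expected_utility)
for a in utility:
    print(f"EU({a}) = {expected_utility(a):.1f}")
print("MEU action:", best)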

REFERENCES: 

1. Daphne Koller and Nir Friedman, “Probabilistic Graphical Models: Principles and Techniques”, MIT Press, 2009. 
2. David Barber, “Bayesian Reasoning and Machine Learning”, Cambridge University Press, 2012. 
3. Adnan Darwiche, “Modeling and Reasoning with Bayesian networks”, Cambridge University Press, 2009. 
4. Kevin P. Murphy, “Machine Learning: A Probabilistic Perspective”, MIT Press, 2012. 
5. Stuart Russell and Peter Norvig, “Artificial Intelligence: A Modern Approach”, Third Edition, Prentice Hall, 2009.

