Title: Bayesian Models for Machine Learning (in English)

Code: BAYa
Ac.Year: 2019/2020
Sem: Winter
Curriculums:
  Programme    Field/Specialization    Year    Duty
  IT-MSC-2     MGMe                    -       Compulsory-Elective - group M
  MITAI        NADE                    -       Elective
  MITAI        NBIO                    -       Elective
  MITAI        NCPS                    -       Elective
  MITAI        NEMB                    -       Elective
  MITAI        NGRI                    -       Elective
  MITAI        NHPC                    -       Elective
  MITAI        NIDE                    -       Elective
  MITAI        NISD                    -       Elective
  MITAI        NISY                    -       Elective
  MITAI        NMAL                    -       Compulsory
  MITAI        NMAT                    -       Elective
  MITAI        NNET                    -       Elective
  MITAI        NSEC                    -       Elective
  MITAI        NSEN                    -       Elective
  MITAI        NSPE                    -       Elective
  MITAI        NVER                    -       Elective
  MITAI        NVIZ                    -       Elective
Language of Instruction: English
News: Em., 2019-07-17, information for Erasmus+ students:
BAYa is a highly demanding, mathematically-oriented course. Solid background knowledge in the basics of machine learning (at least at the level of FIT's IKR course, see http://www.fit.vutbr.cz/study/course-l.php.en?id=12754) and statistics is required for its successful completion.
Credits: 5
Completion: examination
Type of instruction:
  Hour/sem    Lectures    Seminar Exercises    Laboratory Exercises    Computer Exercises    Other
  Hours       26          13                   0                       0                     13
              Exams       Tests                Exercises               Laboratories          Other
  Points      51          24                   0                       0                     25
Guarantor: Burget Lukáš, doc. Ing., Ph.D. (DCGM)
Deputy guarantor: Černocký Jan, doc. Dr. Ing. (DCGM)
Lecturer: Burget Lukáš, doc. Ing., Ph.D. (DCGM)
Instructors: Baskar Murali K. (DCGM)
             Diez Sánchez Mireia, M.Sc., Ph.D. (DCGM)
             Ondel Lucas, Mgr. (DCGM)
Faculty: Faculty of Information Technology BUT
Department: Department of Computer Graphics and Multimedia FIT BUT
Schedule:
  Day    Lesson      Week        Room    Start    End      Lect.Gr. / Groups
  Wed    lecture     lectures    G202    17:00    18:50    1EIT 1MIT 2EIT 2MIT INTE xx
  Wed    exercise    lectures    G202    19:00    19:50    1EIT 1MIT 2EIT 2MIT INTE xx
 
Learning objectives:
  To demonstrate the limitations of Deep Neural Networks (DNNs), which have become a very popular and successful machine learning tool in many areas but excel only when a sufficient amount of well-annotated training data is available. To present Bayesian models (BMs), which allow robust decisions to be made even with scarce training data because they take into account the uncertainty in the model parameter estimates. To introduce the concept of latent variables, which makes BMs modular (i.e. more complex models can be built out of simpler ones) and well suited to cases with missing data (e.g. unsupervised learning, where annotations are missing). To build basic skills and intuitions about BMs and to develop more advanced topics such as approximate inference methods necessary for more complex models, infinite mixture models based on non-parametric BMs, and Auto-Encoding Variational Bayes. The course is taught in English.
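  As a small illustration of the scarce-data point above (a sketch added for orientation, not part of the official course material): with a conjugate Beta prior on a Bernoulli parameter, the Bayesian posterior predictive probability stays away from the overconfident maximum-likelihood point estimate when only a handful of observations are available.

    # Illustrative sketch: Bayesian vs. maximum-likelihood estimate of a Bernoulli
    # parameter from only three observations.
    import numpy as np

    x = np.array([1, 1, 1])   # three "successes" observed, no failures
    a0, b0 = 1.0, 1.0         # Beta(1, 1) prior, i.e. uniform over the parameter

    # Maximum-likelihood estimate of P(x = 1): a hard number, here exactly 1.0.
    p_ml = x.mean()

    # Bayesian treatment: the posterior is Beta(a0 + #successes, b0 + #failures)
    # and the posterior predictive P(x_new = 1 | data) is its mean.
    a_post = a0 + x.sum()
    b_post = b0 + len(x) - x.sum()
    p_bayes = a_post / (a_post + b_post)

    print(f"ML estimate:                   P(x=1) = {p_ml:.3f}")     # 1.000
    print(f"Bayesian posterior predictive: P(x=1) = {p_bayes:.3f}")  # 0.800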
Description:
  Probability theory and probability distributions, Bayesian Inference, Inference in Bayesian models with conjugate priors, Inference in Bayesian Networks, Expectation-Maximization algorithm, Approximate inference in Bayesian models using Gibbs sampling, Variational Bayes inference, Stochastic VB, Infinite mixture models, Dirichlet Process, Chinese Restaurant Process, Pitman-Yor Process for Language modeling, Expectation propagation, Gaussian Process, Auto-Encoding Variational Bayes, Practical applications of Bayesian inference
Why is the course taught:
  Nothing in life is given for sure. Uncertainty accompanies us in machine learning, classification and recognition as well: in the basic courses, you'll learn how to train the parameters of Gaussian models or neural networks. But are they correct? Can we be sure about the result? What if the model is deployed on data different from the training data? The BAYa course will teach you not to trust anything and to express everything as probability distributions rather than hard numbers. You will enjoy lots of maths, but if you are serious about machine learning, you can't treat it as just "connecting black boxes". You need a solid mathematical background.
Syllabus of lectures:
 
  1. Probability theory and probability distributions 
  2. Bayesian Inference (priors, uncertainty of the parameter estimates, posterior predictive probability) 
  3. Inference in Bayesian models with conjugate priors 
  4. Inference in Bayesian Networks (loopy belief propagation) 
  5. Expectation-Maximization algorithm (with application to Gaussian Mixture Model) 
  6. Approximate inference in Bayesian models using Gibbs sampling 
  7. Variational Bayes inference, Stochastic VB 
  8. Infinite mixture models, Dirichlet Process, Chinese Restaurant Process 
  9. Pitman-Yor Process for Language modeling 
  10. Expectation propagation 
  11. Gaussian Process 
  12. Auto-Encoding Variational Bayes 
  13. Practical applications of Bayesian inference
Syllabus of numerical exercises:
 Lectures will be immediately followed by demonstration exercises in which examples in Python are presented. The code and data of all demonstrations will be made available to the students and will form the basis of the project.
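 As a rough indication of the flavour of these demonstrations (a hypothetical sketch, not the actual code handed out in the course), the Gibbs-sampling lecture could be accompanied by a NumPy example along these lines:

    # Hypothetical demo sketch: Gibbs sampling for a two-component 1-D Gaussian
    # mixture with known variance and equal mixing weights, alternating between
    # sampling the component assignments z and the component means mu.
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data from two Gaussians with means -2 and +3.
    x = np.concatenate([rng.normal(-2.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])

    sigma2 = 1.0                  # known observation variance
    tau2 = 100.0                  # variance of the N(0, tau2) prior on each mean
    mu = np.array([-1.0, 1.0])    # initial component means

    for it in range(200):
        # Sample assignments: p(z_n = k | x_n, mu) proportional to N(x_n | mu_k, sigma2).
        logp = -0.5 * (x[:, None] - mu[None, :]) ** 2 / sigma2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = (rng.random(len(x)) < p[:, 1]).astype(int)

        # Sample means: conjugate Gaussian posterior for each component.
        for k in range(2):
            xk = x[z == k]
            prec = 1.0 / tau2 + len(xk) / sigma2
            mean = (xk.sum() / sigma2) / prec
            mu[k] = rng.normal(mean, np.sqrt(1.0 / prec))

    print("Sampled component means after 200 Gibbs iterations:", np.round(mu, 2))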
Syllabus - others, projects and individual work of students:
 The project will build on the demonstration exercises and will have the students work on provided (simulated or real) data. The students will work in teams in "evaluation" mode and present their results at the final lecture/exercise.
Fundamental literature:
 
  • C. Bishop: Pattern Recognition and Machine Learning, Springer, 2006 
  • S. J. Gershman and D. M. Blei: A tutorial on Bayesian nonparametric models, Journal of Mathematical Psychology, 2012 
  • P. Orbanz: Tutorials on Bayesian Nonparametrics, http://stat.columbia.edu/~porbanz/npb-tutorial.html 
  • D. P. Kingma and M. Welling: Auto-Encoding Variational Bayes, ICLR, Banff, 2014
Study literature:
 
  • C. Bishop: Pattern Recognition and Machine Learning, Springer, 2006 
  • S. J. Gershman and D. M. Blei: A tutorial on Bayesian nonparametric models, Journal of Mathematical Psychology, 2012 
  • P. Orbanz: Tutorials on Bayesian Nonparametrics, http://stat.columbia.edu/~porbanz/npb-tutorial.html 
  • D. P. Kingma and M. Welling: Auto-Encoding Variational Bayes, ICLR, Banff, 2014
Progress assessment:
  
  • Half-semestral exam: 24 points 
  • Submission and presentation of the project: 25 points 
  • Semestral exam: 51 points
 
