You've reached the website for the Princeton CSML reading group.  We are a group of students, postdocs, and faculty members interested in learning more about the latest ideas in data analysis.  In the links above you can find the current schedule and info on previous meetings (coming soon).  

Please also view our old website/blog to get info on older meetings.

If you would like to know more about our reading group, please refer to the FAQ.  If you have further questions, feel free to contact Mikio Aoi and Bianca Dumitrascu.

Past meetings

2017 Apr 20

Dropout as Bayesian approximation: Representing Model Uncertainty in Deep Learning

2:00pm to 3:30pm

Location: Green Hall, room 2N10

Authors: Gal & Ghahramani

Presenter: Yuki

Link to paper

Abstract: Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting …
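The core idea of the paper, keeping dropout active at test time and averaging many stochastic forward passes to estimate predictive uncertainty, can be sketched in plain NumPy. This is only an illustrative toy (the weights, sizes, and dropout rate below are made up, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer regression network with fixed, randomly chosen weights.
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1))

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return h @ W2

x = np.array([[0.5]])
T = 1000
preds = np.stack([forward(x) for _ in range(T)])  # T stochastic passes

mean = preds.mean(axis=0)  # predictive mean
std = preds.std(axis=0)    # spread across passes, read as model uncertainty
```

Averaging over the `T` dropout masks approximates the predictive mean of the implied Bayesian model, and the spread of the passes serves as an uncertainty estimate; a single deterministic pass gives neither.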

2017 Apr 13

A Probabilistic Theory of Deep Learning

2:00pm to 3:30pm

Location: Green Hall, room 2N10

Authors: Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk

Presenter: Mikio Aoi

Link to paper

Link to a shorter paper

Abstract: A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks that are complicated by nuisance variation. For instance, visual object recognition involves the unknown object …

2017 Apr 06

Why does deep learning work so well?

2:00pm to 3:30pm

Location: Green Hall, room 2N10

Authors: Henry W. Lin, Max Tegmark

Presenter: Bianca Dumitrascu

Link to paper

Abstract: We show how the success of deep learning depends not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can be approximated through "cheap learning" with exponentially fewer parameters than generic ones, because they have simplifying properties …

2017 Mar 30

Semi-supervised Learning with Deep Generative Models

2:00pm to 3:30pm

Location: Green Hall, room 2N10

Authors: Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, Max Welling

Presenter: Brian

Link to paper

Abstract: The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new …

2017 Mar 16

On the expressive power of deep learning: A tensor analysis

2:00pm to 3:30pm

Location: Green Hall, room 2N10

Authors: Cohen, Sharir, Shashua

Presenter: Adam Charles

Link to paper

Abstract: It has long been conjectured that hypotheses spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical networks than with shallow ones. Despite the vast empirical evidence supporting this belief, theoretical justifications to date are limited. In particular, they do not account for the locality, sharing and …
