I am a postdoctoral researcher at the Cambridge Machine Learning Group. Beginning in Fall 2020, I will join UvA's Amsterdam Machine Learning Lab as an assistant professor. My research interests lie broadly within probabilistic machine learning and statistics. Of late, I've been working on priors for Bayesian neural networks [1, 2] and critiquing deep generative models [3].

I completed my PhD under the supervision of Padhraic Smyth at the University of California, Irvine. Previously, I did research internships at DeepMind, Microsoft, Amazon, and Twitter. My undergraduate studies were in computer science and English literature at Lehigh University (Bethlehem, PA).


# PUBLICATIONS

## preprints / working papers
Eric Nalisnick, Jonathan Gordon, and José Miguel Hernández-Lobato. Predictive Complexity Priors.
George Papamakarios*, Eric Nalisnick*, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing Flows for Probabilistic Modeling and Inference.
## conference publications
Robert Pinsler, Jonathan Gordon, Eric Nalisnick, and José Miguel Hernández-Lobato. Bayesian Batch Active Learning as Sparse Subset Approximation. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Eric Nalisnick, José Miguel Hernández-Lobato, and Padhraic Smyth. Dropout as a Structured Shrinkage Prior. In Proceedings of the 36th International Conference on Machine Learning (ICML), 2019. [Supplementary Materials] [Code]
Eric Nalisnick*, Akihiro Matsukawa*, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Hybrid Models with Deep and Invertible Features. In Proceedings of the 36th International Conference on Machine Learning (ICML), 2019.
Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do Deep Generative Models Know What They Don't Know? In Proceedings of the 7th International Conference on Learning Representations (ICLR), 2019.
Disi Ji, Eric Nalisnick, Yu Qian, Richard Scheuermann, and Padhraic Smyth. Bayesian Trees for Automated Cytometry Data Analysis. In Proceedings of Machine Learning for Healthcare (MLHC), 2018.
Eric Nalisnick and Padhraic Smyth. Learning Priors for Invariance. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 2018.
Eric Nalisnick and Padhraic Smyth. Learning Approximately Objective Priors. In Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI), 2017.
Eric Nalisnick and Padhraic Smyth. Stick-Breaking Variational Autoencoders. In Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017. [Code] [Supplemental Materials]
Eric Nalisnick, Bhaskar Mitra, Nick Craswell, and Rich Caruana. Improving Document Ranking with Dual Word Embeddings. In Proceedings of the 25th World Wide Web Conference (WWW), Short Paper, 2016.
Eric T. Nalisnick and Henry S. Baird. Character-to-Character Sentiment Analysis in Shakespeare's Plays. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), Short Paper, 2013.
Eric T. Nalisnick and Henry S. Baird. Extracting Sentiment Networks from Shakespeare's Plays. In Proceedings of the 12th International Conference on Document Analysis and Recognition (ICDAR), 2013.
## workshop papers
Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, and Balaji Lakshminarayanan. Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality. Bayesian Deep Learning, Workshop at NeurIPS 2019.
Eric Nalisnick and José Miguel Hernández-Lobato. Automatic Depth Determination for Bayesian ResNets. Bayesian Deep Learning, Workshop at NeurIPS 2018.
Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do Deep Generative Models Know What They Don't Know? Bayesian Deep Learning, Workshop at NeurIPS 2018.
Eric Nalisnick*, Akihiro Matsukawa*, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Hybrid Models with Deep and Invertible Features. Bayesian Deep Learning, Workshop at NeurIPS 2018.
Oleg Rybakov et al. The Effectiveness of a Two-Layer Neural Network for Recommendations. Workshop Track, ICLR 2018.
Disi Ji, Eric Nalisnick, and Padhraic Smyth. Mondrian Processes for Flow Cytometry Analysis. Machine Learning for Health, Workshop at NeurIPS 2017.
Eric Nalisnick and Padhraic Smyth. Variational Inference with Stein Mixtures. Advances in Approximate Bayesian Inference, Workshop at NeurIPS 2017.
Eric Nalisnick and Padhraic Smyth. The Amortized Bootstrap. Implicit Models, Workshop at ICML 2017. [Oral Presentation]
Eric Nalisnick and Padhraic Smyth. Variational Reference Priors. Workshop Track, ICLR 2017.
Eric Nalisnick, Lars Hertel, and Padhraic Smyth. Approximate Inference for Deep Latent Gaussian Mixtures. Bayesian Deep Learning, Workshop at NeurIPS 2016.
Eric Nalisnick and Padhraic Smyth. Nonparametric Deep Generative Models with Stick-Breaking Priors. Data-Efficient Machine Learning, Workshop at ICML 2016. [Oral Presentation]
Jihyun Park, Meg Blume-Kohout, Ralf Krestel, Eric Nalisnick, and Padhraic Smyth. Analyzing NIH Funding Patterns over Time with Statistical Text Analysis. Scholarly Big Data: AI Perspectives, Challenges, and Ideas, Workshop at AAAI 2016.
# TALKS

# TEACHING

## uci data science workshops
Predictive Modeling with Python.
Advanced Predictive Modeling with Python.

## notes
Stochastic Backprop through Mixture Densities.
Stein Variational Gradient Descent.
Generative Adversarial Networks (w/ TensorFlow basics).
Operator Variational Inference.
Mondrian Processes.
The Beta Divergence.
Learning Model Reparametrizations.