We expect this project to advance our understanding of probabilistic deep generative models, as well as their robustness and interpretability. An improved analytical understanding of probabilistic models such as VAEs can help us determine their limitations and push their boundaries. We are particularly interested in going beyond traditional statistical representations, towards representations that support notions of distribution change, intervention, and other forms of robustness. This will ultimately allow us to identify existing and novel mechanisms for learning more versatile representations that support artificial intelligence. It may also enable better interpretability and more meaningful interaction with humans.
We are looking for a qualified postdoctoral researcher to join us in advancing the understanding of deep probabilistic models for robust learning.
What we are looking for
Ideally, you bring a strong research background in machine learning, statistics, probabilistic modeling, artificial intelligence, or a related field. You hold (or expect to shortly receive) a PhD in computer science, statistics, electrical engineering, mathematics, or a related field, and your research ability...
Empirical Inference, Tübingen AI Center