Part of what makes Deep Learning work is having a lot of data about your problem, whether that be labelled data, points from a data distribution that you are learning to generate, or, in the case of reinforcement learning, training episodes. Having lots of data isn't something you can always take for granted. Sample efficiency is all about getting more out of the data you already have. One way to get better sample efficiency is to re-frame problems with complex data as more abstract problems with simpler data. If you can generalize problems in this way, you can learn to solve them with much less data.
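As a toy sketch of what I mean (a made-up example, not from any of my actual projects): if the label depends only on one abstract property of the input, a model trained on that property alone can get away with far fewer samples than a model trained on the raw representation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, dim=100):
    # Raw data: binary vectors. Label: does the vector contain more ones
    # than zeros? The label depends only on an abstract property (the count).
    X_raw = rng.integers(0, 2, size=(n, dim)).astype(float)
    y = (X_raw.sum(axis=1) > dim / 2).astype(int)
    return X_raw, y

def abstract(X_raw):
    # Re-frame the problem: reduce each raw vector to the one feature
    # that actually matters for the task.
    return X_raw.sum(axis=1, keepdims=True)

X_train, y_train = make_data(30)    # deliberately tiny training set
X_test, y_test = make_data(5000)

raw_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
abs_model = LogisticRegression(max_iter=1000).fit(abstract(X_train), y_train)

print("raw accuracy:     ", raw_model.score(X_test, y_test))
print("abstract accuracy:", abs_model.score(abstract(X_test), y_test))
```

The point is that the abstract model only has to estimate a single threshold, while the raw model has to infer a 100-dimensional decision rule from the same handful of examples.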
Other ideas I think are key in this field are disentanglement, planning, and hindsight.
I'm also interested in the overlap between law, policy and AI, though this isn't planned as part of my PhD.
Prior to my PhD I wrote my Master's thesis for Curious AI on applying Neural Model Predictive Control to the Tennessee Eastman Problem, an industrial process control benchmark.
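For a rough idea of what that looks like, here is a minimal sketch of the control loop behind neural MPC (my own simplified illustration, not code from the thesis; the toy double-integrator dynamics stand in for a trained neural network):

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics_model(state, action):
    # Stand-in for a trained neural network f(s, a) -> s'.
    # Here: a toy double integrator so the sketch runs end to end.
    pos, vel = state
    return np.array([pos + 0.1 * vel, vel + 0.1 * action])

def cost(state, action):
    # Quadratic cost: drive the state to the origin with small actions.
    return state @ state + 0.01 * action ** 2

def plan(state, horizon=10, n_candidates=500):
    # Random shooting: sample candidate action sequences, roll each one
    # out through the model, and keep the lowest predicted total cost.
    best_seq, best_cost = None, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=horizon)
        s, total = state, 0.0
        for a in seq:
            total += cost(s, a)
            s = dynamics_model(s, a)
        if total < best_cost:
            best_seq, best_cost = seq, total
    return best_seq[0]  # receding horizon: execute only the first action

state = np.array([1.0, 0.0])
for t in range(50):
    action = plan(state)
    state = dynamics_model(state, action)  # in practice: the real plant
print("final state:", state)
```

The receding-horizon structure is the key design choice: the model's predictions are only trusted a few steps ahead, and re-planning at every step corrects for its errors.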
Prior to my Master's thesis I was on the Doctoral Track at Aalto, where I worked as a research assistant. At the Secure Systems Group, I worked on the experiments for a paper on imitating writing styles using combinatorial paraphrasing guided by adversarial machine learning. I also did an exchange at EPFL, where I worked in the Data Science Lab on a graph analysis project for a fork of Wikipedia.