Natasha Jaques

Affective Computing
  • Research Assistant

PhD candidate working on improving deep learning and AI agents by building in forms of affective and social intelligence. My past work has investigated methods for improving the generalization of machine learning models via intrinsic motivation, transfer learning, multi-task learning, and learning from human preferences. I've interned with DeepMind and Google Brain, and was an OpenAI Scholars mentor. Experienced in traditional machine learning, deep learning, kernel methods, Bayesian nonparametrics, causal inference, and reinforcement learning.

My favourite past projects have included:

- Developing a unified method for promoting cooperation and communication among agents in multi-agent reinforcement learning (RL) by creating an intrinsic reward based on assessing causal influence between agents.

- Improving deep generative models by using human facial expression responses to samples from the model as a training signal.

- Effectively combining supervised learning and RL to train generative sequence models.

- Using multi-task learning techniques to personalize machine learning models and improve accuracy in predicting next-day stress, happiness, and health.