This blog accompanies the interactive demo and paper! Dive deeper into the underlying simulation code and simulation card. In this blog post, we focus on the high-level features and ethical considerations of AI…
TL;DR: We propose a new vision-language representation learning framework that achieves state-of-the-art performance by first aligning the unimodal representations before fusing them. Vision and language are two of the most fundamental channels…
With data rapidly being generated by millions of people, it's not feasible to label all of it. Learn about recent advances in ML that make it possible to train vision models on unlabelled data using self-supervised learning.
How many emails and work-related conversations do you have every day? The average office worker receives about 121 emails daily, plus countless messages on platforms such as Slack, Teams, or iMessage. With the…
This year marks the 9th annual International Conference on Learning Representations (ICLR), taking place in a fully virtual format from May 4th through May 8th, 2021. ICLR is a premier academic…
The empirical success of deep learning has posed significant challenges to machine learning theory: Why can we efficiently train neural networks with gradient descent despite its highly non-convex optimization landscape? Why do over-parametrized…
TL;DR: We propose controllable counterfactuals (CoCo) to evaluate dialogue state tracking (DST) models on novel scenarios, revealing significant performance drops of up to 30.8% for state-of-the-art DST models. Using CoCo for…