We show that deep neural models can generate descriptions of commonsense physics that are valid, sufficient, and generalizable. Our ESPRIT framework is trained on a new dataset of physics simulations paired with natural language descriptions, which we collected and have open-sourced.
We investigate NVIDIA’s Triton (TensorRT) Inference Server as a way of hosting Transformer language models. The post is roughly divided into two parts: (i) instructions for setting up your own inference server,…
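Since the post walks through standing up and querying such a server, a minimal client sketch may help orient readers. This assumes a Triton server is already running at localhost:8000 and serving a Transformer language model; the model name ("gpt2") and tensor names ("input_ids", "logits") are placeholders that must match your model's config.pbtxt.

```python
# Minimal sketch of querying a running Triton server over HTTP.
# Assumptions: server at localhost:8000; a model registered as "gpt2"
# with an INT64 input tensor "input_ids" and an output tensor "logits".
import numpy as np
from tritonclient.http import InferenceServerClient, InferInput

client = InferenceServerClient(url="localhost:8000")

# Token IDs for a single prompt; in practice these come from a tokenizer.
input_ids = np.array([[464, 2068, 7586, 21831]], dtype=np.int64)

# Describe the input tensor and attach the data.
infer_input = InferInput("input_ids", list(input_ids.shape), "INT64")
infer_input.set_data_from_numpy(input_ids)

# Run inference and read the output tensor back by name.
response = client.infer(model_name="gpt2", inputs=[infer_input])
logits = response.as_numpy("logits")
print(logits.shape)  # e.g. (1, sequence_length, vocab_size)
```

The same request can be made over gRPC via tritonclient.grpc with an analogous API; HTTP is shown here only because it is simpler to inspect.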
In our study, we demonstrate that an artificial intelligence (AI) model can learn the language of biology in order to generate proteins in a controllable fashion.
Our trainable, graph-based retriever-reader framework retrieves evidence paragraphs from Wikipedia to answer open-domain questions. It achieves state-of-the-art performance on HotpotQA, SQuAD Open, and Natural Questions Open without any dataset-specific architectural changes.
Learn about the ethical implications of voice technology for business, and how to make them an operational and strategic priority now, before you’re too far down the path.
Many NLP applications today deploy state-of-the-art deep neural networks that are essentially black boxes. One of the goals of Explainable AI (XAI) is to have AI models reveal why and how they make their…
Large-scale language models show promising text generation capabilities, but users cannot easily control the content or style of the generated text, nor train the models for multiple supervised language generation tasks.