118 episodes

Welcome to the NLP highlights podcast, where we invite researchers to talk about their work in various areas in natural language processing. The hosts are Matt Gardner, Pradeep Dasigi (research scientists at the Allen Institute for Artificial Intelligence) and Waleed Ammar (research scientist at Google). All views expressed belong to the hosts and guests and do not represent their employers.

NLP Highlights
Allen Institute for Artificial Intelligence

    • Science
    • 4.6, 18 Ratings

    117 - Interpreting NLP Model Predictions, with Sameer Singh

    We interviewed Sameer Singh for this episode, and discussed an overview of recent work in interpreting NLP model predictions, particularly instance-level interpretations. We started out by talking about why it is important to interpret model outputs and why it is a hard problem. We then dove into the details of three kinds of interpretation techniques: attribution-based methods, interpretation using influence functions, and generating explanations. Towards the end, we spent some time discussing how explanations of model behavior can be evaluated, and some limitations and potential concerns in evaluation methods.

    Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine.
    Some of the techniques discussed in this episode have been implemented in the AllenNLP Interpret framework (details and demo here: https://allennlp.org/interpret).
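
    To make the attribution idea concrete, here is a minimal sketch of gradient-times-input saliency on a toy PyTorch classifier; the model, vocabulary, and scores are hypothetical and are not taken from the episode or from AllenNLP Interpret.

```python
# Minimal sketch of gradient-x-input saliency, one family of attribution methods
# discussed in the episode. The toy model and vocabulary are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab = ["the", "movie", "was", "great", "terrible"]   # hypothetical vocabulary
embed = nn.Embedding(len(vocab), 8)
classifier = nn.Linear(8, 2)                           # two classes: neg / pos

token_ids = torch.tensor([[0, 1, 2, 3]])               # "the movie was great"
emb = embed(token_ids)                                 # (1, seq_len, dim)
emb.retain_grad()                                      # keep gradients on the embeddings
logits = classifier(emb.mean(dim=1))                   # bag-of-embeddings classifier
pred = logits.argmax(dim=-1).item()

# Backpropagate the predicted-class score to the input embeddings.
logits[0, pred].backward()

# Per-token saliency: |gradient . embedding| summed over the embedding dimension.
saliency = (emb.grad * emb).sum(dim=-1).abs().squeeze(0).detach()
for tok_id, score in zip(token_ids[0].tolist(), saliency.tolist()):
    print(f"{vocab[tok_id]:>8s}  {score:.4f}")
```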

    • 56 min
    116 - Grounded Language Understanding, with Yonatan Bisk

    We invited Yonatan Bisk to talk about grounded language understanding. We started off by discussing an overview of the topic, its research goals, and the challenges involved. In the latter half of the conversation, we talked about ALFRED (Shridhar et al., 2019), a grounded instruction following benchmark that simulates training a robot butler. The current best models built for this benchmark perform very poorly compared to humans. We discussed why that might be, and what could be done to improve their performance.

    Yonatan Bisk is currently an assistant professor at the Language Technologies Institute at Carnegie Mellon University. The data and the leaderboard for ALFRED can be accessed here: https://askforalfred.com/.

    • 59 min
    115 - AllenNLP, interviewing Matt Gardner

    In this special episode, Carissa Schoenick, a program manager and communications director at AI2, interviewed Matt Gardner about AllenNLP. We chatted about the origins of AllenNLP, the early challenges in building it, and the design decisions behind the library. Given the release of AllenNLP 1.0 this week, we asked Matt what users can expect from the new release and what improvements the AllenNLP team is working on for future versions.

    • 33 min
    114 - Behavioral Testing of NLP Models, with Marco Tulio Ribeiro

    We invited Marco Tulio Ribeiro, a Senior Researcher at Microsoft, to talk about evaluating NLP models using behavioral testing, a framework borrowed from software engineering. Marco described three kinds of black-box tests that check whether NLP models satisfy certain necessary conditions. While breaking the standard IID assumption, this framework presents a way to evaluate whether NLP systems are ready for real-world use. We also discussed what capabilities can be tested using this framework, how one can come up with good tests, and the need for an evolving set of behavioral tests for NLP systems.

    Marco’s homepage: https://homes.cs.washington.edu/~marcotcr/
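
    The framework organizes these black-box tests into minimum functionality, invariance, and directional expectation tests. Below is a hand-rolled sketch of an invariance test for a sentiment model, assuming a hypothetical predict_sentiment function as the system under test; it does not use the actual CheckList library.

```python
# Hand-rolled sketch of a black-box invariance test: swapping one city name for
# another should not change a sentiment model's prediction.
# `predict_sentiment` is a hypothetical stand-in for the model under test.

def predict_sentiment(text: str) -> str:
    # Placeholder model; a real test would call the deployed system here.
    return "neg" if "terrible" in text else "pos"

TEMPLATE = "The flight to {city} was terrible."
CITIES = ["Chicago", "Denver", "Houston", "Seattle"]

def invariance_test() -> None:
    base = predict_sentiment(TEMPLATE.format(city=CITIES[0]))
    failures = [city for city in CITIES[1:]
                if predict_sentiment(TEMPLATE.format(city=city)) != base]
    print(f"invariance failures: {len(failures)}/{len(CITIES) - 1}", failures)

invariance_test()
```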

    • 43 min
    113 - Managing Industry Research Teams, with Fernando Pereira

    We invited Fernando Pereira, a VP and Distinguished Engineer at Google, where he leads NLU and ML research, to talk about managing NLP research teams in industry. Topics we discussed include prioritizing research against product development and effective collaboration with product teams, dealing with potential research interest mismatch between individuals and the company, managing publications, hiring new researchers, and diversity and inclusion.

    • 42 min
    112 - Alignment of Multilingual Contextual Representations, with Steven Cao

    We invited Steven Cao to talk about his paper on multilingual alignment of contextual word embeddings. We started by discussing how multilingual transformers work in general, and then focused on Steven’s work on aligning word representations. The core idea is to start from a list of words automatically aligned from parallel corpora, and to ensure that the representations of the aligned words are similar to each other while not moving too far away from their original representations. We discussed the paper’s experiments on the XNLI dataset, the analysis, and the decision to do the alignment at the word level, comparing it to other possibilities such as aligning word pieces or higher-level encoded representations in transformers.

    Paper: https://openreview.net/forum?id=r1xCMyBtPS
    Steven Cao’s webpage: https://stevenxcao.github.io/
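
    A rough way to picture the objective described above: pull the contextual embeddings of automatically aligned word pairs together while penalizing drift away from the original pretrained representations. The sketch below is a schematic PyTorch version of such a loss; the tensor shapes and regularization weight are assumptions for illustration, not the paper's code.

```python
# Schematic sketch of a word-level alignment loss: bring embeddings of aligned
# word pairs together while keeping the tuned encoder close to the pretrained one.
# Shapes and the regularization weight are illustrative, not from the paper.
import torch

def alignment_loss(src_emb: torch.Tensor,         # (n_pairs, dim) tuned embeddings, language A
                   tgt_emb: torch.Tensor,         # (n_pairs, dim) tuned embeddings, language B
                   tuned_emb: torch.Tensor,       # (n_tokens, dim) tuned embeddings, held-out text
                   pretrained_emb: torch.Tensor,  # (n_tokens, dim) frozen pretrained embeddings
                   reg_weight: float = 1.0) -> torch.Tensor:
    # Term 1: squared distance between aligned word pairs (pull them together).
    align = ((src_emb - tgt_emb) ** 2).sum(dim=-1).mean()
    # Term 2: keep the tuned embeddings from drifting far from the originals.
    drift = ((tuned_emb - pretrained_emb) ** 2).sum(dim=-1).mean()
    return align + reg_weight * drift

# Toy usage with random vectors standing in for contextual word embeddings.
torch.manual_seed(0)
loss = alignment_loss(torch.randn(32, 768), torch.randn(32, 768),
                      torch.randn(64, 768), torch.randn(64, 768))
print(loss.item())
```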

    • 33 min

Customer Reviews

4.6 out of 5
18 Ratings

SambersCurtis,

The only NLP Podcast

It’s nice that they are covering NLP. I’m not able to find anyone else that is doing that specifically. However, the hosts always try to punch holes in the guests’ theories. So they are always on the defensive. It makes for really unpleasant discussions.
