NLP Highlights
Allen Institute for Artificial Intelligence

    • Science
    • 126 episodes

Welcome to the NLP Highlights podcast, where we invite researchers to talk about their work in various areas of natural language processing. The hosts are members of the AllenNLP team at the Allen Institute for AI. All views expressed belong to the hosts and guests and do not represent their employers.

    125 - VQA for Real Users, with Danna Gurari

    How can we build Visual Question Answering systems for real users? For this episode, we chatted with Danna Gurari about her work on building datasets and models for VQA for people who are blind. We talked about the differences between existing datasets and VizWiz, a dataset built by Gurari et al., and the algorithmic changes those differences motivate. We also discussed the unsolved challenges in this field and the new tasks they give rise to.

    Danna Gurari is an Assistant Professor and Founding Director of the Image and Video Computing group in the School of Information at the University of Texas at Austin (UT Austin).

    VizWiz project page: https://vizwiz.org/

    The hosts for this episode are Ana Marasović and Pradeep Dasigi.

    • 42 min
    124 - Semantic Machines and Task-Oriented Dialog, with Jayant Krishnamurthy and Hao Fang

    We invited Jayant Krishnamurthy and Hao Fang, researchers at Microsoft Semantic Machines, to discuss their platform for building task-oriented dialog systems and their recent TACL paper on the topic. The paper introduces a new formalism for task-oriented dialog that effectively handles references and revisions in complex dialogs, along with a large, realistic dataset that uses this formalism.
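
    To make the dataflow idea concrete, here is a toy Python sketch. The class and method names are invented for illustration and are not the paper's actual formalism (the paper defines its own operators, including metacomputation operators for references and revisions); the sketch only shows how each turn can extend a shared program graph, with later turns resolving references by pointing back at earlier nodes.

        # Toy illustration only: all names below are made up for this sketch.
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            op: str                        # e.g. "find_event" or "start_time"
            args: list = field(default_factory=list)

        class DialogGraph:
            def __init__(self):
                self.nodes = []

            def add(self, op, *args):
                node = Node(op, list(args))
                self.nodes.append(node)
                return node

            def refer(self, op):
                # Resolve "it"/"that" by reusing the most recent node with a
                # matching operation, so earlier computation is shared.
                return next(n for n in reversed(self.nodes) if n.op == op)

        # Turn 1: "When is my meeting with Ana?"
        g = DialogGraph()
        event = g.add("find_event", "meeting with Ana")
        g.add("start_time", event)

        # Turn 2: "Who else is attending?" -- the reference resolves to the
        # same find_event node instead of re-specifying the meeting.
        g.add("attendees", g.refer("find_event"))
        print(len(g.nodes))  # 3: the event node is shared, not duplicated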

    Leaderboard associated with the dataset: https://microsoft.github.io/task_oriented_dialogue_as_dataflow_synthesis/
    Jayant's Twitter handle: https://twitter.com/jayantkrish
    Hao's Twitter handle: https://twitter.com/hfang90

    • 45 min
    123 - Robust NLP, with Robin Jia

    In this episode, Robin Jia talks about how to build robust NLP systems. We discuss the different senses in which a system can be robust, reasons to care about system robustness, and the challenges involved in evaluating the robustness of NLP models. We talk about how to build certifiably robust models through interval bound propagation and discrete encoding functions, as well as how to modify data collection procedures through active learning for more robust model development.
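
    As a rough illustration of interval bound propagation, here is a minimal NumPy sketch; it is a toy under our own assumptions, not code from the work discussed. It pushes an interval of possible inputs through one linear layer and a ReLU, yielding guaranteed bounds on the activations that a certification procedure can then check against the decision boundary.

        import numpy as np

        def ibp_linear(l, u, W, b):
            # Propagate the interval [l, u] through z = W @ x + b.
            center = (l + u) / 2.0
            radius = (u - l) / 2.0
            new_center = W @ center + b
            new_radius = np.abs(W) @ radius   # worst-case amplification
            return new_center - new_radius, new_center + new_radius

        def ibp_relu(l, u):
            # ReLU is monotonic, so it maps interval endpoints directly.
            return np.maximum(l, 0.0), np.maximum(u, 0.0)

        # Toy example: a 2-d input known only up to an L-infinity ball.
        x = np.array([0.5, -0.2])
        eps = 0.1
        l, u = x - eps, x + eps
        W = np.array([[1.0, -2.0], [0.5, 1.5]])
        b = np.zeros(2)
        l, u = ibp_relu(*ibp_linear(l, u, W, b))
        # Every perturbed input in the ball yields activations inside
        # [l, u], which is the basis for certifying the final prediction.
        print(l, u)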

    Robin Jia is currently a visiting researcher at Facebook AI Research, and will be an assistant professor in the Department of Computer Science at the University of Southern California starting Fall 2021.

    • 47 min
    122 - Statutory Reasoning in Tax Law, with Nils Holzenberger

    We invited Nils Holzenberger, a PhD student at JHU, to talk about a dataset on statutory reasoning in tax law that Holzenberger et al. released recently. This dataset includes difficult textual entailment and question answering problems that involve reasoning about how sections of tax law apply to specific cases. They also released a Prolog solver that fully solves the problems, and showed that learned models using dense representations of text perform poorly. We discussed why this is the case and how one might train models to solve these challenges.

    Project webpage: https://nlp.jhu.edu/law/

    • 46 min
    121 - Language and the Brain, with Alona Fyshe

    We invited Alona Fyshe to talk about the link between NLP and the human brain. We began by talking about what we currently know about the connection between representations used in NLP and representations recorded in the brain, and we discussed how different brain imaging techniques compare to each other. We then dove into experiments investigating how hidden states of LSTM language models correlate with EEG brain imaging data on three types of language input: well-formed grammatical sentences, pseudo-word sentences preserving syntax but not semantics, and word lists preserving neither. We talked about the kinds of conclusions that can be drawn from these correlations and concluded by discussing avenues for future work.
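
    For readers unfamiliar with this style of analysis, the sketch below shows a generic encoding-model setup: ridge-regress EEG channels onto LSTM hidden states and check whether the fit generalizes to held-out data. This is a common approach in the literature, not necessarily the exact procedure used in the work discussed, and the data here is random stand-in data with made-up dimensions.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_words, hidden_dim, n_channels = 500, 256, 64
        hidden_states = rng.normal(size=(n_words, hidden_dim))  # one LSTM state per word
        eeg = rng.normal(size=(n_words, n_channels))            # EEG response per word

        X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, eeg, random_state=0)
        pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)

        # Per-channel Pearson correlation between predicted and observed EEG;
        # above-chance correlation on held-out words suggests the model's
        # states carry information about the brain response.
        corr = [np.corrcoef(pred[:, c], y_te[:, c])[0, 1] for c in range(n_channels)]
        print(f"mean correlation across channels: {np.mean(corr):.3f}")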

    • 42 min
    120 - Evaluation of Text Generation, with Asli Celikyilmaz

    We invited Asli Celikyilmaz for this episode to talk about the evaluation of text generation systems. We discussed the challenges in evaluating generated text and covered human and automated metrics, including recent developments in learned metrics. We also talked about some open research questions, including the difficulty of evaluating the factual correctness of generated text.
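
    As a concrete example of the automated metrics discussed, the snippet below computes corpus-level BLEU with the sacrebleu library on made-up hypothesis and reference strings. As the episode notes, overlap metrics like this say little about factual correctness, which is one reason learned metrics and human evaluation remain important.

        import sacrebleu

        hypotheses = ["the cat sat on the mat", "there is a dog in the park"]
        references = [["the cat is sitting on the mat", "a cat sat on the mat"],
                      ["a dog is in the park", "there is a dog at the park"]]

        # sacrebleu expects one reference stream per reference set, each
        # aligned with the hypotheses, so transpose the per-sentence lists.
        ref_streams = list(map(list, zip(*references)))
        bleu = sacrebleu.corpus_bleu(hypotheses, ref_streams)
        print(f"BLEU = {bleu.score:.1f}")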

    Asli Celikyilmaz is a Principal Researcher at Microsoft Research.
    Link to a survey co-authored by Asli on this topic: https://arxiv.org/abs/2006.14799

    • 55 min
