ep50 (ICLR): ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

Leading NLP Ninja Podcast

From ICLR 2020: we discussed the Stanford x Google model that achieved SOTA on GLUE and SQuAD through pre-training with the Replaced Token Detection task.
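
For context, here is a minimal Python sketch of the Replaced Token Detection idea discussed in the episode: a small generator proposes replacements for masked tokens, and the discriminator is trained to label every token as original or replaced. The names, the toy vocabulary, and the random sampling stand-in for the generator are illustrative assumptions, not the paper's actual implementation.

```python
import random

random.seed(0)

# Toy inputs; in ELECTRA these would be subword tokens from a real corpus.
sentence = ["the", "chef", "cooked", "the", "meal"]
vocab = ["the", "chef", "cooked", "ate", "meal", "book"]

# 1) Choose positions to mask, as a masked LM would see them.
masked_positions = [1, 2]

# 2) A small "generator" proposes plausible replacements for the masked tokens.
#    Here random sampling from the vocabulary is a stand-in for a masked-LM generator.
corrupted = list(sentence)
for pos in masked_positions:
    corrupted[pos] = random.choice(vocab)

# 3) Discriminator labels: 1 if the token was replaced, 0 if it is the original.
#    Unlike MLM, this loss is defined over every input token, not only the masked ones.
labels = [int(c != o) for c, o in zip(corrupted, sentence)]

print(corrupted)  # e.g. ['the', 'ate', 'cooked', 'the', 'meal']
print(labels)     # e.g. [0, 1, 0, 0, 0]
```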

Notes on the paper covered in this episode are in the following issue:
https://github.com/jojonki/arXivNotes/issues/391

We are also looking for supporters:
https://www.patreon.com/jojonki

--- Support this podcast: https://podcasters.spotify.com/pod/show/lnlp-ninja/support
