
[Slideshare] ELECTRA: Pre-Training Text Encoders As Discriminators Rather Than Generators

Danbi Cho 2021. 5. 4. 11:26

Title: ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

Authors: Clark, K., Luong, M.T., Le, Q.V., and Manning, C.D.

Published: arXiv preprint arXiv:2003.10555, 2020.

Paper: arxiv.org/abs/2003.10555

 

Abstract (excerpt, from arxiv.org): Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective.
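To make the MLM corruption step concrete, here is a minimal sketch in plain Python, assuming a toy whitespace tokenizer and the usual 15% masking rate; the function name and constants are illustrative stand-ins, not code from BERT or from the paper.

```python
import random

MASK_TOKEN = "[MASK]"
MASK_PROB = 0.15  # BERT masks roughly 15% of input tokens

def corrupt_for_mlm(tokens):
    """Replace ~15% of tokens with [MASK] and keep the originals as labels.

    Only the masked positions get a reconstruction target; the rest
    contribute no loss, which is part of why MLM is sample-inefficient.
    """
    corrupted, labels = [], []
    for tok in tokens:
        if random.random() < MASK_PROB:
            corrupted.append(MASK_TOKEN)
            labels.append(tok)      # the model must predict the original token here
        else:
            corrupted.append(tok)
            labels.append(None)     # no prediction target at this position
    return corrupted, labels

print(corrupt_for_mlm("the chef cooked the meal".split()))
```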

 

Slideshare: www.slideshare.net/DanbiCho2/electrapretraining-text-encoders-as-discriminators-rather-than-generators

 

Slide overview (from slideshare.net): The presentation explains the ELECTRA model. ELECTRA stands for 'Efficiently Learning an Encoder that Classifies Token Replacements Accurately'. The paper proposes a pre-training task called replaced token detection: some input tokens are replaced with plausible alternatives sampled from a small generator network, and a discriminator is trained to predict whether each token in the corrupted input is an original or a replacement.
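The replaced token detection objective can be sketched in the same style. Below is a minimal illustration, assuming a toy vocabulary in place of the small generator network the paper actually uses to sample plausible replacements; all names here are hypothetical.

```python
import random

REPLACE_PROB = 0.15
TOY_VOCAB = ["the", "a", "chef", "cook", "cooked", "ate", "meal", "dinner"]

def corrupt_for_rtd(tokens):
    """Replace ~15% of tokens with sampled alternatives and label every position.

    Label 1 means the token was replaced (a 'fake'), 0 means it is original.
    If the sampled replacement happens to equal the original token, the paper
    treats it as original, so it keeps label 0.
    """
    corrupted, labels = [], []
    for tok in tokens:
        if random.random() < REPLACE_PROB:
            sample = random.choice(TOY_VOCAB)   # stand-in for a generator sample
            corrupted.append(sample)
            labels.append(int(sample != tok))
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

print(corrupt_for_rtd("the chef cooked the meal".split()))
```

Because every position receives a binary original-vs-replaced label, the discriminator learns from all input tokens rather than only the ~15% that MLM masks, which is the source of ELECTRA's improved sample efficiency.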

 
