Papers

|-
|[[REALM: Retrieval-Augmented Language Model Pre-Training]] || 2020/02/10 || [[arxiv:2002.08909]]<br>[https://ai.googleblog.com/2020/08/realm-integrating-retrieval-into.html Blog Post] || [[Natural Language Processing]] || [[Google]] || [[REALM]] ([[Retrieval-Augmented Language Model Pre-Training]]) ||
|-
|[[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (T5)]] || 2019/10/23 || [[arxiv:1910.10683]]<br>[https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html Blog Post] || [[Natural Language Processing]] || [[Google]] || [[T5]] ([[Text-To-Text Transfer Transformer]]) ||
|-
|[[RoBERTa: A Robustly Optimized BERT Pretraining Approach]] || 2019/07/26 || [[arxiv:1907.11692]]<br>[https://ai.facebook.com/blog/roberta-an-optimized-method-for-pretraining-self-supervised-nlp-systems/ Blog Post] || [[Natural Language Processing]] || [[Meta]] || [[RoBERTa]] ([[Robustly Optimized BERT Pretraining Approach]]) ||