Papers: Difference between revisions

17 bytes added, 6 February 2023
|[[Transformer-XL]] || 2019/01/09 || [[arxiv:1901.02860]] ||  || Attentive Language Models Beyond a Fixed-Length Context
|-
|[[Language Models are Few-Shot Learners (GPT-3)]] || 2020/05/28 || [[arxiv:2005.14165]] || [[NLP]] || [[GPT-3]]
|-
|[[An Image is Worth 16x16 Words]] || 2020/10/22 || [[arxiv:2010.11929]] ||  || Transformers for Image Recognition at Scale - [[Vision Transformer]] ([[ViT]])