Papers

|[[Language Models are Few-Shot Learners]] || 2020/05/28 || [[arxiv:2005.14165]] || [[GPT]]
|-
|[[An Image is Worth 16x16 Words]] || 2020/10/22 || [[arxiv:2010.11929]] || Transformers for Image Recognition at Scale - [[Vision Transformer]] ([[ViT]])
|-
|[[OpenAI CLIP]] || 2021/02/26 || [[arxiv:2103.00020]]<br>[https://openai.com/blog/clip/ OpenAI Blog] || Learning Transferable Visual Models From Natural Language Supervision
|-
|[[MobileViT]] || 2021/10/05 || [[arxiv:2110.02178]] || Light-weight, General-purpose, and Mobile-friendly Vision Transformer
|-
|[[Block-Recurrent Transformers]] || 2022/03/11 || [[arxiv:2203.07852]] ||  
|-
|[[Memorizing Transformers]] || 2022/03/16 || [[arxiv:2203.08913]] ||
|-
|[[STaR]] || 2022/03/28 || [[arxiv:2203.14465]] || Bootstrapping Reasoning With Reasoning
|-
|}