Papers: Difference between revisions

167 bytes added, 6 February 2023
No edit summary
Line 21:
|[[Very Deep Convolutional Networks for Large-Scale Image Recognition (VGGNet)]] || 2014/09/04 || [[arxiv:1409.1556]] ||  ||  || [[VGGNet]]
|-
|[[Sequence to Sequence Learning with Neural Networks (Seq2Seq)]] || 2014/09/10 || [[arxiv:1409.3215]] || [[Natural Language Processing]] ||  || [[Seq2Seq]]
|-
|[[Adam: A Method for Stochastic Optimization]] || 2014/12/22 || [[arxiv:1412.6980]] ||  ||  || [[Adam]]
Line 33:
|[[WaveNet: A Generative Model for Raw Audio]] || 2016/09/12 || [[arxiv:1609.03499]] || [[Audio]] ||  || [[WaveNet]]
|-
|[[Attention Is All You Need (Transformer)]] || 2017/06/12 || [[arxiv:1706.03762]] || [[Natural Language Processing]] || [[Google]] || Influential paper that introduced [[Transformer]]
|-
|[[Proximal Policy Optimization Algorithms (PPO)]] || 2017/07/20 || [[arxiv:1707.06347]] ||  ||  || [[PPO]]
Line 41:
|[[Deep contextualized word representations (ELMo)]] || 2018/02/15 || [[arxiv:1802.05365]] || [[Natural Language Processing]] ||  || [[ELMo]]
|-
|[[GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding]] || 2018/04/20 || [[arxiv:1804.07461]]<br>[https://gluebenchmark.com/ website] || [[Natural Language Processing]] ||  || [[GLUE]]
|-
|[[BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding]] || 2018/10/11 || [[arxiv:1810.04805]] || [[Natural Language Processing]] || [[Google]] || [[BERT]]
Line 51:
|[[Language Models are Few-Shot Learners (GPT-3)]] || 2020/05/28 || [[arxiv:2005.14165]] || [[Natural Language Processing]] || [[OpenAI]] || [[GPT-3]]
|-
|[[An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (ViT)]] || 2020/10/22 || [[arxiv:2010.11929]] || [[Computer Vision]] ||  || [[Vision Transformer]] ([[ViT]])
|-
|[[Learning Transferable Visual Models From Natural Language Supervision (CLIP)]] || 2021/02/26 || [[arxiv:2103.00020]]<br>[https://openai.com/blog/clip/ OpenAI Blog] || [[Computer Vision]] ||  || [[CLIP]]
|-
|[[MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer]] || 2021/10/05 || [[arxiv:2110.02178]] || [[Computer Vision]] ||  || [[MobileViT]]
|-
|[[Block-Recurrent Transformers]] || 2022/03/11 || [[arxiv:2203.07852]] ||  ||  ||