Papers

== Important Papers ==

{| class="wikitable"
! Name !! Submission Date !! Source !! Type !! Note
|-
| [[ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)]] || 2012 || AlexNet Paper || ||
|-
| [[Efficient Estimation of Word Representations in Vector Space (Word2Vec)]] || 2013/01/16 || [[arxiv:1301.3781]] || [[NLP]] ||
|-
| [[Playing Atari with Deep Reinforcement Learning (DQN)]] || 2013/12/19 || [[arxiv:1312.5602]] || ||
|-
| [[Very Deep Convolutional Networks for Large-Scale Image Recognition (VGGNet)]] || 2014/09/04 || [[arxiv:1409.1556]] || ||
|-
| [[Going Deeper with Convolutions (GoogLeNet)]] || 2014/09/17 || [[arxiv:1409.4842]] || ||
|-
| [[Deep Residual Learning for Image Recognition (ResNet)]] || 2015/12/10 || [[arxiv:1512.03385]] || ||
|-
| [[Asynchronous Methods for Deep Reinforcement Learning (A3C)]] || 2016/02/04 || [[arxiv:1602.01783]] || ||
|-
| [[Attention Is All You Need (Transformer)]] || 2017/06/12 || [[arxiv:1706.03762]] || || influential paper that introduced the Transformer
|-
| [[Proximal Policy Optimization Algorithms (PPO)]] || 2017/07/20 || [[arxiv:1707.06347]] || ||
|-
| [[Improving Language Understanding by Generative Pre-Training (GPT)]] || 2018 || [https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf paper source] || [[NLP]] || [[GPT]]
|-
| [[Deep contextualized word representations (ELMo)]] || 2018/02/15 || [[arxiv:1802.05365]] || [[NLP]] ||
|-
| [[GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding]] || 2018/04/20 || [[arxiv:1804.07461]] || [[NLP]] ||
|-
| [[BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding]] || 2018/10/11 || [[arxiv:1810.04805]] || [[NLP]] || [[BERT]]
|-
| [[Transformer-XL]] || 2019/01/09 || [[arxiv:1901.02860]] || || Attentive Language Models Beyond a Fixed-Length Context
|-
| [[Language Models are Few-Shot Learners (GPT-3)]] || 2020/05/28 || [[arxiv:2005.14165]] || [[NLP]] || [[GPT-3]]
|-
| [[An Image is Worth 16x16 Words]] || 2020/10/22 || [[arxiv:2010.11929]] || || Transformers for Image Recognition at Scale - Vision Transformer (ViT)
|-
| [[OpenAI CLIP]] || 2021/02/26 || [[arxiv:2103.00020]], OpenAI Blog || || Learning Transferable Visual Models From Natural Language Supervision
|-
| [[MobileViT]] || 2021/10/05 || [[arxiv:2110.02178]] || || Light-weight, General-purpose, and Mobile-friendly Vision Transformer
|-
| [[Block-Recurrent Transformers]] || 2022/03/11 || [[arxiv:2203.07852]] || ||
|-
| [[Memorizing Transformers]] || 2022/03/16 || [[arxiv:2203.08913]] || ||
|-
| [[STaR]] || 2022/03/28 || [[arxiv:2203.14465]] || || Bootstrapping Reasoning With Reasoning
|}

== Other Papers ==

* https://arxiv.org/abs/2301.13779 (FLAME: A small language model for spreadsheet formulas) - a small model built specifically for spreadsheet formulas, by Microsoft