Papers: Difference between revisions

128 bytes added, 7 February 2023
No edit summary
Line 14:
|[[Efficient Estimation of Word Representations in Vector Space (Word2Vec)]] || 2013/01/16 || [[arxiv:1301.3781]] || [[Natural Language Processing]] ||  || [[Word2Vec]]
|-
|[[Playing Atari with Deep Reinforcement Learning (DQN)]] || 2013/12/19 || [[arxiv:1312.5602]] ||  ||  || [[DQN]]<br>[[Deep Q-Learning]]
|-
|[[Generative Adversarial Networks (GAN)]] || 2014/06/10 || [[arxiv:1406.2661]] ||  ||  || [[GAN]]<br>[[Generative Adversarial Network]]
|-
|[[Very Deep Convolutional Networks for Large-Scale Image Recognition (VGGNet)]] || 2014/09/04 || [[arxiv:1409.1556]] || [[Computer Vision]] ||  || [[VGGNet]]
Line 34:
|[[Attention Is All You Need (Transformer)]] || 2017/06/12 || [[arxiv:1706.03762]] || [[Natural Language Processing]] || [[Google]] || Influential paper that introduced [[Transformer]]
|-
|[[Proximal Policy Optimization Algorithms (PPO)]] || 2017/07/20 || [[arxiv:1707.06347]] ||  ||  || [[PPO]]<br>[[Proximal Policy Optimization]]
|-
|[[Improving Language Understanding by Generative Pre-Training (GPT)]] || 2018 || [https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf paper source] || [[Natural Language Processing]] || [[OpenAI]] || [[GPT]]<br>[[Generative Pre-Training]]
|-
|[[Deep contextualized word representations (ELMo)]] || 2018/02/15 || [[arxiv:1802.05365]] || [[Natural Language Processing]] ||  || [[ELMo]]