{| class="wikitable"
!Paper
!Submission<br>Date
!Source
!Type
!Note
|-
|[[Going Deeper with Convolutions (GoogleNet)]] || 2014/09/17 || [[arxiv:1409.4842]] || ||
|- | |||
|[[Asynchronous Methods for Deep Reinforcement Learning (A3C)]] || 2016/02/04 || [[arxiv:1602.01783]] || ||
|-
|[[Attention Is All You Need (Transformer)]] || 2017/06/12 || [[arxiv:1706.03762]] || || influential paper that introduced the [[Transformer]] architecture
|- | |||
|[[Proximal Policy Optimization Algorithms (PPO)]] || 2017/07/20 || [[arxiv:1707.06347]] || ||
|-
|[[Transformer-XL]] || 2019/01/09 || [[arxiv:1901.02860]] || || Attentive Language Models Beyond a Fixed-Length Context
|}