Papers

231 bytes added ,  6 February 2023
!Note
|-
|[[ImageNet Classification with Deep Convolutional Neural Networks (AlexNet) paper]] || 2012 || [https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf AlexNet Paper] ||
|-
|[[Very Deep Convolutional Networks for Large-Scale Image Recognition (VGGNet) paper]] || 2014/09/04 || [[arxiv:1409.1556]] ||
|-
|[[Deep Residual Learning for Image Recognition (ResNet) paper]] || 2015/12/10 || [[arxiv:1512.03385]] ||
|-
|[[Going Deeper with Convolutions (GoogleNet) paper]] || 2014/09/17 || [[arxiv:1409.4842]] ||
|-
|[[Attention Is All You Need (Transformer) paper]] || 2017/06/12 || [[arxiv:1706.03762]] || Influential paper that introduced the [[Transformer]]
|-
|[[Transformer-XL]] || 2019/01/09 || [[arxiv:1901.02860]] || Attentive Language Models Beyond a Fixed-Length Context