Papers
!Note
|-
|[[ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)]] || 2012 || [https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf AlexNet Paper] ||  
|-
|[[Very Deep Convolutional Networks for Large-Scale Image Recognition (VGGNet)]] || 2014/09/04 || [[arxiv:1409.1556]] ||  
|-
|[[Deep Residual Learning for Image Recognition (ResNet)]] || 2015/12/10 || [[arxiv:1512.03385]] ||  
|-
|[[Going Deeper with Convolutions (GoogleNet)]] || 2014/09/17 || [[arxiv:1409.4842]] ||  
|-
|[[Attention Is All You Need (Transformer)]] || 2017/06/12 || [[arxiv:1706.03762]] || Introduced the [[Transformer]] architecture
|-
|[[Transformer-XL]] || 2019/01/09 || [[arxiv:1901.02860]] || Attentive Language Models Beyond a Fixed-Length Context