Papers
{| class="wikitable"
!Title
!Date
!Link
!Note
|-
|[[ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)]] || 2012 || [https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf AlexNet Paper] ||
|-
|[[Very Deep Convolutional Networks for Large-Scale Image Recognition (VGGNet)]] || 2014/09/04 || [[arxiv:1409.1556]] ||
|-
|[[Deep Residual Learning for Image Recognition (ResNet)]] || 2015/12/10 || [[arxiv:1512.03385]] ||
|-
|[[Going Deeper with Convolutions (GoogleNet)]] || 2014/09/17 || [[arxiv:1409.4842]] ||
|-
|[[Attention Is All You Need (Transformer)]] || 2017/06/12 || [[arxiv:1706.03762]] || Influential paper that introduced the [[Transformer]] architecture
|-
|[[Transformer-XL]] || 2019/01/09 || [[arxiv:1901.02860]] || Attentive Language Models Beyond a Fixed-Length Context
|}
Revision as of 01:00, 6 February 2023
== Important ==
== Others ==
https://arxiv.org/abs/2301.13779 (FLAME: A Small Language Model for Spreadsheet Formulas) - a small model trained specifically for spreadsheet formulas, by Microsoft