Papers
Revision as of 21:50, 5 February 2023
Important
| Name | Source | Note |
|---|---|---|
| Attention Is All You Need | arxiv:1706.03762 | Influential paper that introduced the Transformer |
| An Image is Worth 16x16 Words | arxiv:2010.11929 | Transformers for Image Recognition at Scale — Vision Transformer (ViT) |
| Block-Recurrent Transformers | arxiv:2203.07852 | |
| Language Models are Few-Shot Learners | arxiv:2005.14165 | GPT-3 |
| Memorizing Transformers | arxiv:2203.08913 | |
| MobileViT | arxiv:2110.02178 | Light-weight, General-purpose, and Mobile-friendly Vision Transformer |
| OpenAI CLIP | arxiv:2103.00020, [OpenAI Blog](https://openai.com/blog/clip/) | Learning Transferable Visual Models From Natural Language Supervision |
| STaR | arxiv:2203.14465 | Bootstrapping Reasoning With Reasoning |
| Transformer-XL | arxiv:1901.02860 | Attentive Language Models Beyond a Fixed-Length Context |
Others
https://arxiv.org/abs/2301.13779 (FLAME: A small language model for spreadsheet formulas) — a small model by Microsoft built specifically for spreadsheet formulas