Papers: Difference between revisions

9 bytes added ,  6 February 2023
|[[Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context]] || 2019/01/09 || [[arxiv:1901.02860]] ||  || [[Transformer-XL]]
|-
|[[Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model (MuZero)]] || 2019/11/19 || [[arxiv:1911.08265]] ||  || [[MuZero]]
|-
|[[Language Models are Few-Shot Learners (GPT-3)]] || 2020/05/28 || [[arxiv:2005.14165]] || [[NLP]] || [[GPT-3]]