Videos
{| class="wikitable sortable"
|-
!Creator
!Date
!Video Name
!Link
!Rating
|-
|[[Eliezer Yudkowsky]] || 2007 || [[Introducing the Singularity: Three Major Schools of Thought]] || || ★★★★
|-
|[[Eliezer Yudkowsky]] || 2017 || [[Difficulties of Artificial General Intelligence Alignment]] || || ★★★★
|-
|[[Eric Elliott]] || 2020 || [[What it’s like to be a Computer: An Interview with GPT-3]] || || ★★★★
|-
|[[Ethan Caballero]] || 2022 || [[Broken Neural Scaling Laws]] || || ★★★
|-
|[[Yannic Kilcher]] || 2017 || [[Attention is all you need]] || || ★★★
|-
|[[OpenAI]] || 2019 || [[Multi Agent Hide and Seek]] || || ★★★
|-
|[[WSJ]] || 2019 || [[How China is Using Artificial Intelligence in Classrooms]] || || ★★★
|-
|[[Yannic Kilcher]] || 2020 || [[GPT-3: Language Models are Few Shot Learners (Paper Explained)]] || || ★★★
|-
|[[Eliezer Yudkowsky]] || 2012 || [[Intelligence Explosion]] || || ★★
|-
|[[OpenAI]] || 2017 || [[Dota2]] || || ★★
|-
|[[Fei Fei Li]] || 2018 || [[How to make AI that’s good for people]] || || ★★
|-
|[[George Hotz]] || 2019 || [[Jailbreaking the Simulation with George Hotz]] || || ★★
|-
|[[Lex Fridman]] || 2019 || [[Deep Learning Basics: Introduction and Overview]] || || ★★
|-
|[[Sam Altman]] || 2020 || [[Sam Altman talks about AI at Big Compute 20 Tech Conference]] || || ★★
|-
|[[Two Minute Papers]] || 2020 || [[OpenAI GPT-3 - Good at Almost Everything]] || || ★★
|-
|[[Jack Soslow]] || 2020 || [[Two AIs talking to each other]] || || ★★
|-
|[[Lex Fridman]] || 2020 || [[GPT-3 vs Human Brain]] || || ★★
|-
|[[Harrison Kinsley and Daniel Kukiela]] || 2020 || [[Neural Networks from Scratch - p.3 The Dot Product]] || || ★★
|-
|[[Lex Fridman]] || 2020 || [[Deep Learning State of the Art]] || || ★★
|-
|[[Bycloud]] || 2020 || [[What Happens When AI Robots Design Themselves]] || || ★★
|-
|[[Two Minute Papers]] || 2020 || [[MuZero: DeepMind’s New AI Mastered More Than 50 Games]] || || ★★
|-
|[[Mira Murati]] || 2020 || [[Ensuring AGI benefits all of humanity]] || || ★★
|-
|[[Two Minute Papers]] || 2020 || [[Can an AI Design Our Tax Policy?]] || || ★★
|-
|[[Neat AI]] || 2021 || [[Lenia — Conway’s game of life arrives in the 21st century]] || || ★★
|-
|[[Sam Altman]] || 2021 || [[The Future of AI Research from DALL-E to GPT-3]] || || ★★
|-
|[[Yannic Kilcher]] || 2021 || [[How far can we scale up? Deep Learning’s Diminishing Returns (Article Review)]] || || ★★
|-
|[[Two Minute Papers]] || 2021 || [[Watch Tesla’s Self-Driving Car Learning in a Simulation]] || || ★★
|-
|[[Ethan Caballero]] || 2022 || [[Scale is all you need]] || || ★★
|-
|[[Two Minute Papers]] || 2022 || [[DeepMind AlphaFold A Gift to Humanity]] || || ★★
|-
|[[OpenAI]] || 2022 || [[Aligning AI Systems with Human Intent]] || || ★★
|-
|[[NeatAI]] || 2022 || [[AI Learns NEAT Pacman Solution]] || || ★★
|-
|[[Yannic Kilcher]] || 2022 || [[Grokking: Generalization beyond Overfitting on small algorithmic datasets (Paper Explained)]] || || ★★
|-
|[[The Third Build]] || 2022 || [[How an AI is Becoming the World’s Best Pokemon Player]] || || ★★
|}
Latest revision as of 17:22, 11 February 2023
See also: [[Guides]]