Blog posts

!Rating
|-
|[[Tim Urban]] || 2015 || [[The AI Revolution: The Road to Superintelligence]] || || || ★★★★
|-
|[[Andrej Karpathy]] || 2017 || [[Software 2.0]] || || || ★★★★
|-
|[[Gwern]] || 2020 || [[The Scaling Hypothesis]] || || || ★★★★
|-
|[[Wikipedia]] || || [[The History of Artificial Intelligence]] || || || ★★★★
|-
|[[Google Research, Vaswani et al.]] || 2017 || [[Attention Is All You Need]] || || || ★★★★
|-
|[[Eliezer Yudkowsky]] || 2001 || [[Creating Friendly AI Summary]] || || || ★★★
|-
|[[Tim Urban]] || 2015 || [[The AI Revolution: Our Immortality or Extinction]] || || || ★★★
|-
|[[Deepmind]] || 2017 || [[AlphaGo]] || || || ★★★
|-
|[[Eliezer Yudkowsky]] || 2017 || [[There’s no Fire Alarm for Artificial Intelligence]] || || || ★★★
|-
|[[Jay Alammar]] || 2018 || [[The Illustrated Transformer]] || || || ★★★
|-
|[[OpenAI]] || 2020 || [[Language Models are Few Shot Learners (GPT-3)]] || || || ★★★
|-
|[[Deepmind]] || 2020 || [[MuZero: Mastering Go, Chess, Shogi, and Atari without rules]] || || || ★★★
|-
|[[OpenAI]] || 2020 || [[AI and Efficiency]] || || || ★★★
|-
|[[OpenAI]] || 2021 || [[CLIP: Connecting Text and Images]] || || || ★★★
|-
|[[Nostalgebraist]] || 2022 || [[Chinchilla’s Wild Implications]] || || || ★★★
|-
|[[Gwern]] || 2022 || [[It looks like you’re trying to take over the world]] || || || ★★★
|-
|[[Scott Alexander]] || 2016 || [[Superintelligence FAQ]] || || || ★★
|-
|[[OpenAI]] || 2017 || [[Learning to Communicate]] || || || ★★
|-
|[[OpenAI]] || 2017 || [[Evolution Strategies as an Alternative to Reinforcement Learning]] || || || ★★
|-
|[[OpenAI]] || 2017 || [[Learning to Cooperate, Compete, and Communicate]] || || || ★★
|-
|[[OpenAI]] || 2017 || [[Proximal Policy Optimization]] || || || ★★
|-
|[[OpenAI]] || 2017 || [[Competitive Self-Play]] || || || ★★
|-
|[[OpenAI]] || 2018 || [[AI and Compute]] || || || ★★
|-
|[[Deepmind]] || 2019 || [[Capture the Flag: the emergence of complex cooperative agents]] || || || ★★
|-
|[[OpenAI]] || 2019 || [[OpenAI Five defeats Dota 2 World Champions]] || || || ★★
|-
|[[OpenAI]] || 2019 || [[Deep Double Descent]] || || || ★★
|-
|[[Jay Alammar]] || 2020 || [[How GPT-3 Works: Visualizations and Animations]] || || || ★★
|-
|[[Evhub]] || 2020 || [[11 Proposals for Building Safe Advanced AI]] || || || ★★
|-
|[[Deepmind]] || 2022 || [[Discovering novel algorithms with AlphaTensor]] || || || ★★
|-
|[[Deepmind]] || 2020 || [[Using JAX to accelerate our research]] || || || ★★
|-
|[[Deepmind]] || 2020 || [[AlphaFold: a solution to a 50-year-old grand challenge in biology]] || || || ★★
|-
|[[Leigh Marie Braswell]] || 2021 || [[Startup Opportunities in Machine Learning Infrastructure]] || || || ★★
|-
|[[Meta]] || 2021 || [[Teaching AI how to forget at scale]] || || || ★★
|-
|[[OpenAI]] || 2021 || [[DALL-E: Creating Images from Text]] || || || ★★
|-
|[[OpenAI]] || 2021 || [[Multimodal Neurons in Artificial Neural Networks]] || || || ★★
|-
|[[OpenAI]] || 2021 || [[Improving Language Model Behavior by Training on a Curated Dataset]] || || || ★★
|-
|[[OpenAI]] || 2021 || [[OpenAI Codex]] || || || ★★
|-
|[[OpenAI]] || 2022 || [[Aligning Language Models to Follow Instructions]] || || || ★★
|-
|[[OpenAI]] || 2022 || [[DALL-E 2]] || || || ★★
|-
|[[OpenAI]] || 2022 || [[Learning to Play Minecraft with Video PreTraining]] || || || ★★
|-
|[[OpenAI]] || 2022 || [[Introducing Whisper]] || || || ★★
|-
|[[OpenAI]] || 2022 || [[ChatGPT: Optimizing Language Models for Dialog]] || || || ★★
|-
|[[Nature]] || 2022 || [[What’s next for AlphaFold]] || || || ★★
|-
|[[Meta]] || 2022 || [[CICERO: An AI agent that negotiates, persuades, and cooperates with people]] || || || ★★
|-
|[[Yann Lecun]] || 2022 || [[How to make AI systems learn and reason like animals and humans]] || || || ★★
|-
|[[Robert May]] || 2022 || [[The Mental Model Most AI Investors Are Missing]] || || || ★★
|-
|[[Roon]] || 2022 || [[Text is the Universal Interface]] || || || ★★
|-
|[[Jack Soslow]] || 2022 || [[2022 AI Research and Trends Round Up]] || || || ★★
|-
|}