Papers
|[[MusicLM: Generating Music From Text]] || 2023/01/26 || [[arxiv:2301.11325]]<br>[https://google-research.github.io/seanet/musiclm/examples/ blog post] || [[Audio]] || [[Google]] || [[MusicLM]]
|-
|[[Mastering Diverse Domains through World Models (DreamerV3)]] || 2023/01/10 || [[arxiv:2301.04104v1]]<br>[https://danijar.com/project/dreamerv3/ blog post] || || [[DeepMind]] || [[DreamerV3]]
|-
|[[Muse: Text-To-Image Generation via Masked Generative Transformers]] || 2023/01/02 || [[arxiv:2301.00704]]<br>[https://muse-model.github.io/ blog post] || [[Computer Vision]] || [[Google]] || [[Muse]]
|-
|[[Constitutional AI: Harmlessness from AI Feedback]] || 2022/12/12 || [[arxiv:2212.08073]] || [[Natural Language Processing]] || [[Anthropic]] || [[Constitutional AI]], [[Claude]]
|-
|[[InstructPix2Pix: Learning to Follow Image Editing Instructions]] || 2022/11/17 || [[arxiv:2211.09800]]<br>[https://www.timothybrooks.com/instruct-pix2pix blog post] || [[Computer Vision]] || [[UC Berkeley]] || [[InstructPix2Pix]]
|-
|}