'''[[CLIP]]''' ([[Contrastive Language–Image Pre-training]]) - https://arxiv.org/abs/2103.00020, https://openai.com/blog/clip/ - encodes images and text into representations that can be compared in the same space; the basis for many [[Text-to-Image Models]] like [[Stable Diffusion]] (see the sketch after this list)
'''[[Chain of Thought Prompting]]''' - https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html
'''[[CodeGen]]''' - https://github.com/salesforce/CodeGen, https://arxiv.org/abs/2203.13474
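A minimal sketch of the shared image–text embedding space that the [[CLIP]] entry above describes, assuming the Hugging Face transformers library and the public openai/clip-vit-base-patch32 checkpoint; the sample image URL and prompt texts are illustrative assumptions, not part of the original entry.

<syntaxhighlight lang="python">
# Sketch: score how well each caption matches an image using CLIP's
# shared embedding space (assumes transformers, torch, Pillow, requests).
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any RGB image works; this URL is just an example input.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the scaled cosine similarities between the image
# embedding and each text embedding; softmax turns them into match scores.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
</syntaxhighlight>

Because images and text land in the same space, the same embeddings can be reused for retrieval or, as in [[Stable Diffusion]], as conditioning for generation.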