User contributions for Daikon Radish
23 March 2023
- 05:02, 23 March 2023 diff hist +131 Microsoft 365 Copilot No edit summary
- 04:57, 23 March 2023 diff hist +77 Microsoft 365 Copilot No edit summary
- 04:57, 23 March 2023 diff hist +27 Artificial intelligence applications →Content Generation (Text-to-Text)
- 04:55, 23 March 2023 diff hist +35 N Microsoft Copilot Redirected page to Microsoft 365 Copilot current Tag: New redirect
- 01:51, 23 March 2023 diff hist +3,944 N Microsoft 365 Copilot Created page with "Microsoft 365 Copilot, introduced on March 16, 2023, is a groundbreaking AI assistant designed to transform the way people work by increasing productivity, enhancing creativity, and improving collaboration. Developed by Microsoft, the Copilot aims to eliminate mundane tasks and maximize efficiency by harnessing the power of large language models (LLMs) and integrating with Microsoft 365 applications and data. ==Overview== Microsoft 365 Copilot is an AI-driven tool that..."
11 March 2023
- 23:31, 11 March 2023 diff hist +58 Artificial intelligence applications →Other Text-based Apps
- 19:00, 11 March 2023 diff hist +240 Prompt engineering →Tone Combinations and Use Cases
- 18:56, 11 March 2023 diff hist +639 Prompt engineering →Tones
- 18:51, 11 March 2023 diff hist +1 Prompt engineering →Suggested Tons
- 18:50, 11 March 2023 diff hist +463 Prompt engineering →Tones
- 18:46, 11 March 2023 diff hist +232 Prompt engineering →Tones
- 18:42, 11 March 2023 diff hist +301 Prompt engineering →Tones
- 18:37, 11 March 2023 diff hist +213 Prompt engineering →Parameters
- 18:30, 11 March 2023 diff hist +87 Prompt engineering →List of Parameters
9 March 2023
- 18:43, 9 March 2023 diff hist −2 PaLM-E: An Embodied Multimodal Language Model No edit summary current
- 18:43, 9 March 2023 diff hist −3 PaLM-E: An Embodied Multimodal Language Model No edit summary
- 18:41, 9 March 2023 diff hist 0 PaLM-E: An Embodied Multimodal Language Model No edit summary
- 18:41, 9 March 2023 diff hist 0 Activation function No edit summary
- 18:41, 9 March 2023 diff hist +2,465 PaLM-E: An Embodied Multimodal Language Model No edit summary
- 15:34, 9 March 2023 diff hist +1,449 N PaLM-E: An Embodied Multimodal Language Model Created page with "{{see also|PaLM-E|Papers}} ==Explore Like I'm 5 (ELI5)== ==Abstract== Large language models have been demonstrated to perform complex tasks. However, enabling general inference in the real world, e.g. for robotics problems, raises the challenge of grounding. We propose embodied language models to directly incorporate real-world continuous sensor modalities into language models and thereby establish the link between words and percepts. Input to our embodied language mode..."
- 15:30, 9 March 2023 diff hist +1 Papers →Other Papers
- 15:18, 9 March 2023 diff hist −30 Papers →Other Papers
- 15:17, 9 March 2023 diff hist 0 Papers →Other Papers
- 15:14, 9 March 2023 diff hist −1 Papers No edit summary
- 15:14, 9 March 2023 diff hist +222 Papers No edit summary
- 14:57, 9 March 2023 diff hist +18 Papers →Other Papers
- 14:30, 9 March 2023 diff hist +225 Papers →Other Papers
6 March 2023
- 17:05, 6 March 2023 diff hist +1 Prompt engineering →Prompt Engineering for Code Generation Models
- 17:05, 6 March 2023 diff hist +35 Prompt engineering →Prompt Engineering for Code Generation Models
- 16:59, 6 March 2023 diff hist +47 Prompt engineering →Prompt Engineering for Code Generation Models
- 16:59, 6 March 2023 diff hist +27 N File:Coding model diagram1.png File uploaded with MsUpload current
- 16:57, 6 March 2023 diff hist +266 Prompt engineering →Prompt Engineering for Code Generation Models
- 16:50, 6 March 2023 diff hist +3 Prompt engineering No edit summary
- 16:50, 6 March 2023 diff hist +864 Prompt engineering →Context
- 16:47, 6 March 2023 diff hist +16 Prompt engineering →Prompt Engineering for Coding
- 16:45, 6 March 2023 diff hist +53 Prompt engineering →Examples
- 16:44, 6 March 2023 diff hist +553 Prompt engineering →Examples
- 14:09, 6 March 2023 diff hist +47 Guides No edit summary
- 14:07, 6 March 2023 diff hist +4 Zero shot, one shot and few shot learning No edit summary current
- 14:07, 6 March 2023 diff hist −298 Zero shot, one shot and few shot learning No edit summary
- 14:03, 6 March 2023 diff hist +530 Zero shot, one shot and few shot learning No edit summary
- 13:20, 6 March 2023 diff hist +823 Zero shot, one shot and few shot learning No edit summary
- 13:19, 6 March 2023 diff hist +1,438 N Zero shot, one shot and few shot learning Created page with "{{Needs Expansion}} Zero shot learning is when you have no examples in the prompt. One shot learning is when you have 1 example in the prompt. Few shot learning is when you have a few examples in the prompt. all of these techniques allow the machine learning model to learn with limited or no labeled data. ==Zero Shot Learning== Zero-shot learning, one-shot learning, and few-shot learning are all machine learning techniques used to train model..."
- 12:24, 6 March 2023 diff hist +746 Prompt engineering →Prompt Engineering for Coding
- 12:12, 6 March 2023 diff hist +97 Prompt engineering No edit summary
5 March 2023
- 19:22, 5 March 2023 diff hist +132 Prompt engineering No edit summary
- 19:21, 5 March 2023 diff hist +9 Artificial intelligence terms No edit summary
- 19:20, 5 March 2023 diff hist +105 Artificial intelligence terms No edit summary
23 February 2023
- 16:03, 23 February 2023 diff hist +2,343 Improving Language Understanding by Generative Pre-Training (GPT) No edit summary
- 16:01, 23 February 2023 diff hist +1,031 N Improving Language Understanding by Generative Pre-Training (GPT) Created page with "===Introduction=== In June 2018, OpenAI introduced GPT-1, a language model that combined unsupervised pre-training with the transformer architecture to achieve significant progress in natural language understanding. The team fine-tuned the model for specific tasks and found that pre-training helped it perform well on various NLP tasks with minimal fine-tuning. GPT-1 used the BooksCorpus dataset and self-attention in the transformer's decoder with 117 million parameters,..."