All public logs
Combined display of all available logs of AI Wiki. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive).
- 15:02, 5 April 2023 Daikon Radish created page In-context prompting (Redirected page to Prompt engineering) Tag: New redirect
- 23:01, 3 April 2023 Daikon Radish created page AI content detector (Redirected page to AI content detectors) Tag: New redirect
- 16:34, 1 April 2023 Daikon Radish uploaded File:Nvidia picasso2.jpg (File uploaded with MsUpload)
- 16:34, 1 April 2023 Daikon Radish created page File:Nvidia picasso2.jpg (File uploaded with MsUpload)
- 16:33, 1 April 2023 Daikon Radish uploaded File:Nvidia picasso1.jpeg (File uploaded with MsUpload)
- 16:33, 1 April 2023 Daikon Radish created page File:Nvidia picasso1.jpeg (File uploaded with MsUpload)
- 16:15, 1 April 2023 Daikon Radish created page NVIDIA Picasso (Created page with "==Introduction== NVIDIA Picasso is a cloud platform engineered to create and deploy AI-generated visual applications encompassing images, videos, and 3D content. Aimed at enterprises, software developers, and service providers, NVIDIA Picasso allows users to execute inference on their models, utilize NVIDIA Edify base models to train on proprietary data, or employ pretrained models for generating content from text inputs. The service is fully tailored for GPUs and stream...")
- 15:11, 1 April 2023 Daikon Radish created page USM (Redirected page to Universal Speech Model) Tag: New redirect
- 15:07, 1 April 2023 Daikon Radish uploaded File:Usm1.png (File uploaded with MsUpload)
- 15:07, 1 April 2023 Daikon Radish created page File:Usm1.png (File uploaded with MsUpload)
- 15:07, 1 April 2023 Daikon Radish uploaded File:Usm2.png (File uploaded with MsUpload)
- 15:07, 1 April 2023 Daikon Radish created page File:Usm2.png (File uploaded with MsUpload)
- 15:07, 1 April 2023 Daikon Radish created page File:Usm3.png (File uploaded with MsUpload)
- 15:07, 1 April 2023 Daikon Radish uploaded File:Usm3.png (File uploaded with MsUpload)
- 15:04, 1 April 2023 Daikon Radish created page Universal Speech Model (Created page with "==Universal Speech Model== The Universal Speech Model (USM) is a state-of-the-art collection of speech models with 2 billion parameters, engineered to conduct automatic speech recognition (ASR) in over 300 languages. USM has been trained using 12 million hours of spoken data and 28 billion text sentences. Developed for uses such as subtitles on YouTube, the system supports widely-used languages like English and Mandarin, as well as less common languages, encompassing...")
- 14:54, 29 March 2023 Daikon Radish created page File:Nvidia triton2.jpg (File uploaded with MsUpload)
- 14:54, 29 March 2023 Daikon Radish uploaded File:Nvidia triton2.jpg (File uploaded with MsUpload)
- 14:54, 29 March 2023 Daikon Radish created page File:Nvidia triton1.jpg (File uploaded with MsUpload)
- 14:54, 29 March 2023 Daikon Radish uploaded File:Nvidia triton1.jpg (File uploaded with MsUpload)
- 14:19, 29 March 2023 Daikon Radish created page NVIDIA Triton Inference Server (Created page with "NVIDIA Triton Inference Server is open-source software that standardizes model deployment and execution, providing fast and scalable AI in production environments. As part of the NVIDIA AI platform, Triton enables teams to deploy, run, and scale trained AI models from any framework on GPU- or CPU-based infrastructure, offering high-performance inference across cloud, on-premises, edge, and embedded devices. === Support for Multiple Frameworks === Triton supports...")
- 14:18, 29 March 2023 Daikon Radish created page NVIDIA Triton (Redirected page to NVIDIA Triton Inference Server) Tag: New redirect
- 07:36, 25 March 2023 Daikon Radish uploaded File:Github copilot chat1.jpg (File uploaded with MsUpload)
- 07:36, 25 March 2023 Daikon Radish created page File:Github copilot chat1.jpg (File uploaded with MsUpload)
- 07:36, 25 March 2023 Daikon Radish created page File:Github copilot x1.jpg (File uploaded with MsUpload)
- 07:36, 25 March 2023 Daikon Radish uploaded File:Github copilot x1.jpg (File uploaded with MsUpload)
- 06:14, 23 March 2023 Daikon Radish created page GitHub Copilot X (Created page with "GitHub Copilot X is the next step in the evolution of AI-powered software development, incorporating new features and enhancements designed to make developers more productive, efficient, and creative. By integrating OpenAI's GPT-4 model and introducing chat and voice interfaces, GitHub Copilot X aims to redefine the developer experience throughout the entire software development lifecycle. The goal is to minimize manual tasks and boilerplate work, allowing developers t...")
- 05:07, 23 March 2023 Daikon Radish uploaded File:Microsoft 365 copilot1.jpg (File uploaded with MsUpload)
- 05:07, 23 March 2023 Daikon Radish created page File:Microsoft 365 copilot1.jpg (File uploaded with MsUpload)
- 05:07, 23 March 2023 Daikon Radish uploaded File:Microsoft 365 copilot2.jpg (File uploaded with MsUpload)
- 05:07, 23 March 2023 Daikon Radish created page File:Microsoft 365 copilot2.jpg (File uploaded with MsUpload)
- 04:55, 23 March 2023 Daikon Radish created page Microsoft Copilot (Redirected page to Microsoft 365 Copilot) Tag: New redirect
- 01:51, 23 March 2023 Daikon Radish created page Microsoft 365 Copilot (Created page with "Microsoft 365 Copilot, introduced on March 16, 2023, is a groundbreaking AI assistant designed to transform the way people work by increasing productivity, enhancing creativity, and improving collaboration. Developed by Microsoft, Copilot aims to eliminate mundane tasks and maximize efficiency by harnessing the power of large language models (LLMs) and integrating with Microsoft 365 applications and data. ==Overview== Microsoft 365 Copilot is an AI-driven tool that...")
- 15:34, 9 March 2023 Daikon Radish created page PaLM-E: An Embodied Multimodal Language Model (Created page with "{{see also|PaLM-E|Papers}} ==Explore Like I'm 5 (ELI5)== ==Abstract== Large language models have been demonstrated to perform complex tasks. However, enabling general inference in the real world, e.g. for robotics problems, raises the challenge of grounding. We propose embodied language models to directly incorporate real-world continuous sensor modalities into language models and thereby establish the link between words and percepts. Input to our embodied language mode...")
- 16:59, 6 March 2023 Daikon Radish uploaded File:Coding model diagram1.png (File uploaded with MsUpload)
- 16:59, 6 March 2023 Daikon Radish created page File:Coding model diagram1.png (File uploaded with MsUpload)
- 13:19, 6 March 2023 Daikon Radish created page Zero shot, one shot and few shot learning (Created page with "{{Needs Expansion}} Zero shot learning is when you have no examples in the prompt. One shot learning is when you have 1 example in the prompt. Few shot learning is when you have a few examples in the prompt. All of these techniques allow the machine learning model to learn with limited or no labeled data. ==Zero Shot Learning== Zero-shot learning, one-shot learning, and few-shot learning are all machine learning techniques used to train model...")
- 16:01, 23 February 2023 Daikon Radish created page Improving Language Understanding by Generative Pre-Training (GPT) (Created page with "===Introduction=== In June 2018, OpenAI introduced GPT-1, a language model that combined unsupervised pre-training with the transformer architecture to achieve significant progress in natural language understanding. The team fine-tuned the model for specific tasks and found that pre-training helped it perform well on various NLP tasks with minimal fine-tuning. GPT-1 used the BooksCorpus dataset and self-attention in the transformer's decoder with 117 million parameters,...")
- 15:04, 23 February 2023 User account Daikon Radish was created