Prompty (GPT)
|Website = 
|Link = https://chat.openai.com//g/g-aZLV4vji6-prompty
|Chats = 25,000
|Knowledge = Yes
|Actions = 
|Web Browsing = Yes
|DALL·E Image Generation = 
|Code Interpreter = 
|Free = Yes
|Price = 
|Available = Yes
|Working = Yes
|Hidden = 
|Updated = 2024-01-23
}}
==Instructions (System Prompt)==
</pre>
*Note that a copy of ''An Introduction to Large Language Models: Prompt Engineering and P-Tuning'' by Tanay Varshney and Annie Surla, from the NVIDIA website (https://developer.nvidia.com/blog/an-introduction-to-large-language-models-prompt-engineering-and-p-tuning/), is pasted into the Instructions but is not displayed above.
==Conversation Starters==
* Optimize "What is 235 x 896?"
* Optimize "If John has 5 pears, then eats 2, and buys 5 more, then gives 3 to his friend, how many pears does he have?"
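For reference, the arithmetic behind both starter problems can be verified with a few lines of Python (a quick sanity check on the expected answers, not part of the GPT itself):

```python
# "What is 235 x 896?"
product = 235 * 896
print(product)  # 210560

# "If John has 5 pears, then eats 2, and buys 5 more,
# then gives 3 to his friend, how many pears does he have?"
pears = 5 - 2 + 5 - 3
print(pears)  # 5
```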


==Knowledge (Uploaded Files)==
*'''An Introduction to Large Language Models: Prompt Engineering and P-Tuning''' (NVIDIA Technical Blog): This document provides a comprehensive introduction to LLMs, focusing on their capabilities, the concept of prompt engineering, and a technique known as P-tuning. It discusses the advantages of using LLMs over smaller model ensembles, highlighting their flexibility and ability to handle a wide range of tasks. The file elaborates on the critical role of prompts in interacting with LLMs and the importance of designing effective prompts. It also introduces P-tuning as a method to customize LLM responses efficiently.
*'''Prompt Engineering - OpenAI API''': This guide offers strategies and tactics for enhancing results from large language models like GPT-4. It presents six key strategies: writing clear instructions, providing reference text, splitting complex tasks, giving the model time to "think," using external tools, and testing changes systematically. Each strategy is broken down into specific tactics, providing practical advice on how to implement them effectively.
*'''A Complete Introduction to Prompt Engineering For Large Language Models - Mihail Eric''': This comprehensive document provides an in-depth look at prompt engineering for LLMs. It covers the fundamentals of how LLMs operate, the significance of prompt engineering, and various techniques and research findings in the field. The document also discusses the principles of few-shot and zero-shot prompting, explores automated prompt generation, and offers insights into the future of prompt engineering.
*'''A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT''': No detailed description of this file is available.
*'''Prompt Engineering Resources''': This text file contains a list of URLs to various resources related to prompt engineering. These resources likely include educational materials, guides, and possibly tools that assist in prompt engineering with LLMs like ChatGPT.
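Two of the tactics from the OpenAI prompt-engineering guide listed above — using delimiters to separate instructions from input, and giving the model time to "think" step by step — can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and wording are assumptions, not taken from Prompty's actual instructions, and sending the prompt to a model is left out:

```python
def build_prompt(task: str, user_input: str) -> str:
    """Wrap the user's text in triple-quote delimiters and
    ask for step-by-step reasoning before the final answer."""
    return (
        f"{task}\n"
        "Work through the problem step by step before giving a final answer.\n"
        f'Text: """{user_input}"""'
    )

prompt = build_prompt(
    "Optimize the following question so a language model answers it reliably.",
    "What is 235 x 896?",
)
print(prompt)
```

Delimiters keep untrusted input clearly separated from the instructions, and the step-by-step request tends to improve accuracy on multi-step problems like the conversation starters above.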


==Actions==