GPT-3


==OpenAI GPT-3 Playground==
OpenAI made its API service for GPT-3 available on November 18, 2021. Developers can build applications based on the language model without needing to sign up for a waitlist. A simple web interface, the GPT-3 Playground, has also been made available. <ref name="”13”"> GPT-3 Demo. OpenAI GPT-3 Playground. GPT-3 Demo. https://gpt3demo.com/apps/openai-gpt-3-playground</ref> <ref name="”14”"> Wu, G (2022). How to use GPT-3 in OpenAI Playground. MSN. https://www.msn.com/en-us/news/technology/how-to-use-gpt-3-in-openai-playground/ar-AA13RZw4</ref>
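
As a rough illustration of how a developer might call the GPT-3 API outside the Playground, the following sketch uses the legacy openai Python package. The model name, request parameters, and environment-variable key handling are illustrative assumptions rather than details taken from the cited sources.

<syntaxhighlight lang="python">
# Minimal sketch of a GPT-3 completion request with the legacy
# openai Python package (pip install openai).
# Assumes an API key is exported as the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",   # one of the GPT-3 model names
    prompt="Explain what a language model is in one sentence.",
    max_tokens=64,              # cap the length of the generated completion
    temperature=0.7,            # higher values give more varied output
)

print(response["choices"][0]["text"].strip())
</syntaxhighlight>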




==Pricing==
The GPT-3 service is charged per token, a token being a fragment of a word. An $18 free credit is available during the first three months of use. In September 2022, OpenAI reduced the prices for the GPT-3 API service; the current pricing plans can be checked on OpenAI's website. <ref name="”9”"></ref>
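
Because billing is per token, the cost of a call scales with the combined length of the prompt and the completion. The short sketch below illustrates the arithmetic with a purely hypothetical per-token rate; actual rates differ by model and are listed on OpenAI's pricing page.

<syntaxhighlight lang="python">
# Illustration of per-token billing arithmetic.
# The rate below is hypothetical; real per-token prices depend on the
# model and are listed on OpenAI's pricing page.
PRICE_PER_1K_TOKENS = 0.02  # hypothetical rate in USD per 1,000 tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of a single API call in USD."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# A request with a 500-token prompt and a 200-token completion:
print(f"${request_cost(500, 200):.4f}")  # -> $0.0140
</syntaxhighlight>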


==Limitations==
Dale (2021) and Brown et al. (2020) have noted several limitations of OpenAI's language model system in text synthesis and NLP tasks. <ref name="”5”"></ref> <ref name="”11”"> Brown, TB, Mann, B, Ryder, N, Subbiah, M, Kaplan, J, Dhariwal, P, Neelakantan, A, Shyam, P, Sastry, G, Askell, A, Agarwal, S, Herbert-Voss, A, Krueger, G, Henighan, T, Child, R, Ramesh, A, Ziegler, DM, Wu, J, Winter, C, Hesse, C, Chen, M, Sigler, E, Litwin, M, Gray, S, Chess, B, Clark, J, Berner, C, McCandlish, S, Radford, A, Sutskever, I and Amodei, D (2020). Language models are few-shot learners. arXiv:2005.14165v4</ref> Dale (2021) points out that "its outputs may lack semantic coherence, resulting in text that is gibberish and increasingly nonsensical as the output grows longer". The system may also absorb biases present in its training data, and its outputs "may correspond to assertions that are not consonant with the truth." <ref name="”5”"></ref> Brown et al. (2020) likewise note a loss of coherence over sufficiently long passages, with contradictions and non-sequitur sentences. <ref name="”11”"></ref>