Chunks are strings that start with data: followed by a JSON object. The first chunk looks like this:
<pre>
'data: {"id":"chatcmpl-xxxx","object":"chat.completion.chunk","created":1688198627,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}'
'data: {"id":"chatcmpl-xxxx","object":"chat.completion.chunk","created":1688198627,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}'
</pre>
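
One way to consume these chunks is to strip the data: prefix from each line, stop at the [DONE] sentinel that closes the stream, and collect the content pieces from each delta. The sketch below is only an illustration; the iter_deltas helper is hypothetical and assumes you already have the raw chunk strings from an SSE client.
<pre>
import json

def iter_deltas(chunk_lines):
    """Yield the content pieces from raw 'data: ...' chunk strings."""
    for line in chunk_lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue                        # ignore blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":             # the API signals the end of the stream this way
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        # the first chunk carries the role and an empty content string;
        # later chunks each carry a small piece of the completion text
        yield delta.get("content", "")

# e.g. text = "".join(iter_deltas(chunk_lines))
</pre>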


One thing we lose with streaming is the usage field, so if you need to know how many tokens the request used, you'll have to count them yourself.<ref name="1">https://gpt.pomb.us/</ref>
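
For example, OpenAI's tiktoken library can count tokens client-side. The sketch below only encodes the raw text; an exact prompt count would also have to account for the few tokens of per-message formatting overhead, so treat it as an approximation.
<pre>
import tiktoken

# count tokens yourself, since streamed responses omit the usage field
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

prompt_text = "Say hello."                       # what you sent
completion_text = "Hello! How can I help you?"   # accumulated from the streamed deltas

prompt_tokens = len(enc.encode(prompt_text))
completion_tokens = len(enc.encode(completion_text))
print(prompt_tokens, completion_tokens)
</pre>
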
===temperature===
===top_p===


==Response Fields==