==Token Limits==
The token limit for a request depends on the [[model]] used.
===OpenAI API Token Limit===
The OpenAI API has a maximum of 4,097 tokens shared between the prompt and its completion. If a prompt consists of 4,000 tokens, the completion can have at most 97 tokens. This limitation is a technical constraint, but there are strategies to work within it, such as shortening prompts or dividing text into smaller sections, as sketched below.
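One way to divide text into smaller sections is to count tokens with a tokenizer such as the tiktoken library and split the input so that each piece fits within the prompt budget. The following is a minimal sketch, not a fixed recipe: the model name and the 3,500-token chunk size are illustrative assumptions chosen to leave room for the completion.

<syntaxhighlight lang="python">
# Minimal sketch: split a long text into chunks that fit under the token limit.
# The model name and chunk size below are illustrative assumptions.
import tiktoken

MODEL = "text-davinci-003"   # example model with a ~4,097-token limit
MAX_PROMPT_TOKENS = 3500     # leave headroom for the completion

def split_into_chunks(text, max_tokens=MAX_PROMPT_TOKENS, model=MODEL):
    """Split text into pieces whose token counts fit the prompt budget."""
    encoding = tiktoken.encoding_for_model(model)
    tokens = encoding.encode(text)
    chunks = []
    for start in range(0, len(tokens), max_tokens):
        chunk_tokens = tokens[start:start + max_tokens]
        chunks.append(encoding.decode(chunk_tokens))
    return chunks

# Usage: each chunk can then be sent as its own prompt, keeping
# prompt + completion under the overall token ceiling.
long_text = "Some long document text to be processed. " * 2000  # placeholder input
encoding = tiktoken.encoding_for_model(MODEL)
for chunk in split_into_chunks(long_text):
    print(len(encoding.encode(chunk)))  # each count stays at or below 3,500
</syntaxhighlight>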
==Token Pricing==