openai-api · chatgpt-api

Chat completions /v1/chat/completions result is very different from the ChatGPT result


I found that the /v1/chat/completions API result is very different from the web page result.

This is the API request and response for the question "What is the birthday of George Washington":

    curl --location 'https://api.openai.com/v1/chat/completions' \
    --header 'Authorization: Bearer TOKEN' \
    --header 'Content-Type: application/json' \
    --data '{
        "model": "gpt-4",
        "messages": [
            {
                "role": "user",
                "content": "What is the birthday of George Washington"
            }
        ]
    }'
    "choices": [
            {
                "message": {
                    "role": "assistant",
                    "content": "George Washington was born on February 22, 1732."
                },
                "finish_reason": "stop",
                "index": 0
            }
        ]

And this is the result on the web page (see the screenshot in the original question). You can see it is much longer.


Solution

  • Unfortunately, ChatGPT-4 is not willing to spill the beans either. While it is possible to tweak the temperature via the API and find a good balance, I'd be curious as well what the default temperature on the web actually is. (A sketch of setting the temperature explicitly via the API follows after the quoted answer below.)

    Question for ChatGPT-4 via Web: What is the default temperature when using ChatGPT via web instead of the API?

    ChatGPT-4 answer: The default temperature when using ChatGPT via web interface might not be explicitly stated. However, when using OpenAI's API, the default temperature is typically set to 0.7. This value provides a good balance between creativity and coherence. You can adjust the temperature to control the randomness of the generated text: a lower temperature (e.g., 0.2) makes the output more focused and deterministic, while a higher temperature (e.g., 1.0) makes it more random and creative. Keep in mind that the web interface and the API may have different default values or behaviors.
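    If you want to experiment with this yourself, the temperature can be passed explicitly in the request body of /v1/chat/completions. Below is a minimal sketch that reuses the request from the question; the value 0.7 is only an example (it is not confirmed to be what the web interface uses), and TOKEN is a placeholder for your API key.

        # Same request as in the question, with an explicit temperature value.
        # Lower values (e.g. 0.2) give more deterministic output, higher values (e.g. 1.0) more varied output.
        curl --location 'https://api.openai.com/v1/chat/completions' \
        --header 'Authorization: Bearer TOKEN' \
        --header 'Content-Type: application/json' \
        --data '{
            "model": "gpt-4",
            "temperature": 0.7,
            "messages": [
                {
                    "role": "user",
                    "content": "What is the birthday of George Washington"
                }
            ]
        }'

    Rerunning the same prompt at a few different temperature values is a simple way to see how much of the difference between the API and the web page comes from sampling settings alone.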