In my Node.js app, I have set up a POST handler that looks like this:
exports.complete = async (req, res, next) => {
  const { prompt } = req.body

  // Send the headers right away so the client can start reading chunks
  res.writeHead(200, {
    'Content-Type': 'text/plain',
    'Transfer-Encoding': 'chunked'
  })

  const result = await openai.createCompletion(
    {
      model: 'text-davinci-003',
      prompt: prompt,
      temperature: 0.6,
      max_tokens: 700,
      stream: true
    },
    { responseType: 'stream' }
  )

  result.data.on('data', data => {
    // Each chunk may contain several SSE "data: ..." lines
    const lines = data
      .toString()
      .split('\n')
      .filter(line => line.trim() !== '')

    for (const line of lines) {
      const message = line.replace(/^data: /, '')
      if (message === '[DONE]') {
        res.end()
        return // stop processing; nothing can be written after end()
      }
      try {
        const parsed = JSON.parse(message)
        res.write(parsed.choices[0].text)
      } catch (error) {
        // console.error('Could not JSON parse stream message', message, error)
      }
    }
  })
}
As the OpenAI Node SDK doesn't natively support streaming responses, I pieced this code together from a few sources.
To some extent it works, but when I call this endpoint from Postman and from the command line (using curl), instead of receiving the response in chunks, I get a single final response only after the entire completion call finishes.
I am not sure what I am doing wrong here. Note: the code above is part of a Firebase Function in which I have set up Express. I don't think that affects anything, but I'm mentioning it just in case.
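For reference, the surrounding setup looks roughly like this (the require path and controller name are illustrative; the api export matches the /api segment in the curl URL below):

const functions = require('firebase-functions')
const express = require('express')
const completionController = require('./controllers/completion') // illustrative path; exports the handler above

const app = express()
app.use(express.json()) // needed so req.body is parsed from the JSON payload
app.post('/completion/complete', completionController.complete)

exports.api = functions.https.onRequest(app)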
Edit 1: The curl request
Here's my curl request:
curl --location 'http://localhost:5001/testapp-7aca9/us-central1/api/completion/complete' \
--header 'Content-Type: application/json' \
--data '{
"message": "Say two random lines"
}'
You can refer to the solution posted here: What is the correct way to send a long string by an HTTP stream in ExpressJS/NestJS?
It might be caused by 2 reasons:
1. In curl, you will need to add the -N (--no-buffer) flag to disable buffering, so you receive each chunk of the response directly as it arrives.
2. The response needs an X-Content-Type-Options header of nosniff; without it, the client may buffer the body while sniffing its content type before displaying anything.
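A minimal sketch of both fixes. In the handler, the nosniff header is added to the existing writeHead call:

res.writeHead(200, {
  'Content-Type': 'text/plain',
  'Transfer-Encoding': 'chunked',
  'X-Content-Type-Options': 'nosniff' // prevents the client from buffering while it sniffs the type
})

And on the command line, -N disables curl's output buffering so each chunk prints as it arrives:

curl -N --location 'http://localhost:5001/testapp-7aca9/us-central1/api/completion/complete' \
--header 'Content-Type: application/json' \
--data '{
  "prompt": "Say two random lines"
}'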