Tags: node.js, express, sockets, events, openai-api

OpenAI Completion Stream with Node.js and Express.js


I'm trying to build a ChatGPT website clone, and now I need to implement the streaming completion effect that shows the result word by word. My server is a TypeScript Node.js app that uses the Express.js framework.

Here's the route:

import express, { Request, Response } from 'express';
import cors from 'cors';
import { Configuration, OpenAIApi } from 'openai';

// ...

app.post('/api/admin/testStream', async (req: Request, res: Response) => {
    const { password } = req.body;

    try {
        if (password !== process.env.ADMIN_PASSWORD) {
            res.send({ message: 'Incorrect password' });
            return;
        }
        const completion = await openai.createCompletion({
            model: 'text-davinci-003',
            prompt: 'Say this is a test',
            stream: true,
        }, { responseType: 'stream' });

        completion.data.on('data', (chunk: any) => {
            console.log(chunk.toString());
        });

        res.send({ message: 'Stream started' });
    } catch (err) {
        console.log(err);
        res.send(err);
    }
});

// ...

Right now, it gives me an error saying

Property 'on' does not exist on type 'CreateCompletionResponse'.ts(2339)

even though I set { responseType: 'stream' }.

How can I solve this problem and send the response chunk-per-chunk to the frontend? (I'm using Socket.IO.)


Solution

  • Finally solved it, thanks to the help of @uzluisf! Here's what I did:

    import express, { Request, Response } from 'express';
    import cors from 'cors';
    import { Configuration, OpenAIApi } from 'openai';
    import http, { IncomingMessage } from 'http';
    
    // ...
    
    app.post('/api/admin/testStream', async (req: Request, res: Response) => {
        const { password } = req.body;
    
        try {
            if (password !== process.env.ADMIN_PASSWORD) {
                res.send({ message: 'Incorrect password' });
                return;
            }
    
            const completion = await openai.createChatCompletion({
                model: 'gpt-3.5-turbo',
                messages: [{ role: 'user', content: 'When was America founded?' }],
                stream: true,
            }, { responseType: 'stream' });
    
            const stream = completion.data as unknown as IncomingMessage;
    
            stream.on('data', (chunk: Buffer) => {
                const payloads = chunk.toString().split("\n\n");
                for (const payload of payloads) {
                    if (payload.includes('[DONE]')) return;
                    if (payload.startsWith("data:")) {
                        // Parse inside the try block so malformed payloads are caught
                        try {
                            const data = JSON.parse(payload.replace("data: ", ""));
                            const text: string | undefined = data.choices[0].delta?.content;
                            if (text) {
                                console.log(text);
                            }
                        } catch (error) {
                            console.log(`Error with JSON.parse and ${payload}.\n${error}`);
                        }
                    }
                }
            });
    
            stream.on('end', () => {
                setTimeout(() => {
                    console.log('\nStream done');
                    res.send({ message: 'Stream done' });
                }, 10);
            });
    
            stream.on('error', (err: Error) => {
                console.log(err);
                res.send(err);
            });
        } catch (err) {
            console.log(err);
            res.send(err);
        }
    });
    
    // ...
    

    For more info, visit https://github.com/openai/openai-node/issues/18

    I also managed to send the message chunks to the frontend using Socket.IO events!
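
    The answer mentions forwarding the chunks over Socket.IO but doesn't show that code. Here's a minimal sketch of one way it could be done: the SSE-parsing logic from the handler above is pulled into a helper, and each text delta is handed to an emit callback, so the Socket.IO wiring reduces to something like `forwardDeltas(stream, (d) => io.emit('chat-chunk', d))`. Note that the `io` instance, the `chat-chunk` event name, and the helper names are assumptions for illustration, not part of the original answer.

    ```typescript
    // Sketch only: forward parsed OpenAI stream deltas to an emit callback.
    // With Socket.IO this could be wired as:
    //   forwardDeltas(stream, (d) => io.emit('chat-chunk', d));
    // ('chat-chunk' and the `io` instance are assumed names, not from the answer.)
    import { Readable } from 'stream';

    // Parse one raw SSE chunk into the text deltas it contains.
    function extractDeltas(raw: string): string[] {
        const deltas: string[] = [];
        for (const payload of raw.split('\n\n')) {
            if (payload.includes('[DONE]')) break;
            if (!payload.startsWith('data:')) continue;
            try {
                const data = JSON.parse(payload.replace('data: ', ''));
                const text: string | undefined = data.choices[0].delta?.content;
                if (text) deltas.push(text);
            } catch {
                // A payload may be split across network chunks; a robust
                // version would buffer partial JSON instead of dropping it.
            }
        }
        return deltas;
    }

    // Pipe every delta from the stream into `emit` until the stream ends.
    function forwardDeltas(stream: Readable, emit: (delta: string) => void): Promise<void> {
        return new Promise((resolve, reject) => {
            stream.on('data', (chunk: Buffer) => {
                for (const delta of extractDeltas(chunk.toString())) emit(delta);
            });
            stream.on('end', () => resolve());
            stream.on('error', reject);
        });
    }
    ```

    Separating the parsing from the transport also makes the delta extraction easy to unit-test with a fake stream, without hitting the OpenAI API at all.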


    Links to example code: