I'm attempting to set things up so I can text GPT-3.5, and I cannot figure out why I keep getting various errors on this one line.
When I use openai.chat.completions.create, as the latest documentation suggests, I get the following error:
Cannot read properties of undefined (reading 'completions')
If I use the prior format of invoking completions with openai.createChatCompletion, I get the following error instead:
TypeError: openai.createChatCompletion is not a function
I've read every piece of documentation I can, and tried even older ways of invoking completions, but cannot get anything to work.
I've already tried getting a new API key and doing npm update, and still get the same issue. Here's my full code, I must be missing something:
const openai = require('openai');
const accountSid = process.env.TWILIO_ACCOUNT_SID;
const authToken = process.env.TWILIO_AUTH_TOKEN;
const client = require('twilio')(accountSid, authToken);
openai.apiKey = process.env.OPENAI_AUTH;
exports.handler = async (event, context) => {
  try {
    const buff = Buffer.from(event.body, "base64");
    const formEncodedParams = buff.toString("utf-8");
    const urlSearchParams = new URLSearchParams(formEncodedParams);
    const body = urlSearchParams.get("Body");
    const from = urlSearchParams.get("From");
    const completion = await openai.chat.completions.create({
      engine: "gpt-3.5-turbo", // GPT-3.5 Turbo
      messages: [
        { role: "system", content: body },
      ],
      max_tokens: 100, // You can adjust this as needed
    });
    await sendMessageBack(completion.data.choices[0].message.content, from);
    return {
      statusCode: 200,
      body: JSON.stringify('Message sent successfully'),
    };
  } catch (error) {
    console.error(error);
    return {
      statusCode: 500,
      body: JSON.stringify('Error sending message'),
    };
  }
};
async function sendMessageBack(msg, to) {
  try {
    await client.messages.create({
      body: msg,
      to: to,
      from: process.env.TWILIO_PHONE_NUM,
    });
    console.log('Message sent:', msg);
  } catch (e) {
    console.error('Error sending message:', e);
  }
}
I have tried updating the openai package, new and old styles of invoking completions, and re-reading the documentation, but AWS Lambda keeps throwing errors on that same line.
EDIT: The solutions below led to the following rewrite. Including it here in case it's helpful to anyone else:
import OpenAI from 'openai';
import twilio from 'twilio';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_AUTH,
});

const accountSid = process.env.TWILIO_ACCOUNT_SID;
const authToken = process.env.TWILIO_AUTH_TOKEN;
const client = twilio(accountSid, authToken);
export const handler = async (event, context) => {
  try {
    const buff = Buffer.from(event.body, "base64");
    const formEncodedParams = buff.toString("utf-8");
    const urlSearchParams = new URLSearchParams(formEncodedParams);
    const msgBody = urlSearchParams.get("Body");
    const from = urlSearchParams.get("From");
    const chatCompletion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [{ role: "system", content: msgBody }],
      max_tokens: 100,
    });
    await sendMessageBack(chatCompletion.choices[0].message.content, from);
    return {
      statusCode: 200,
      body: JSON.stringify('Message sent successfully'),
    };
  } catch (error) {
    console.error(error);
    return {
      statusCode: 500,
      body: JSON.stringify('Error sending message'),
    };
  }
};
async function sendMessageBack(msg, to) {
  try {
    await client.messages.create({
      body: msg,
      to: to,
      from: process.env.TWILIO_PHONE_NUM,
    });
    console.log('Message sent:', msg);
  } catch (e) {
    console.error('Error sending message:', e);
  }
}
Your code is throwing Cannot read properties of undefined (reading 'completions') because openai.chat was never defined: require('openai') gives you the module itself, not a client, so you're reading a property that doesn't exist on it. That makes sense because libraries like this usually provide a base class that constructs everything internally, and the openai package is one of them. So you must instantiate its OpenAI class first, like this (using openai@4.2.0):
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your api key', // defaults to process.env["OPENAI_API_KEY"]
});
rather than:
const openai = require('openai');
openai.apiKey = process.env.OPENAI_AUTH;
The latter just imports the OpenAI module without instantiating it, so the assignment merely sets apiKey as a property on the module object. The package's base class knows how to wire this up internally, which also makes the SDK easier to maintain.
Also, I'd recommend changing engine: "gpt-3.5-turbo" to model: "gpt-3.5-turbo" in your chat.completions.create call, as model is the current way of specifying it.
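One related v4 change worth flagging, since your original code reads completion.data.choices: v4 responses are no longer wrapped in a data property, so the access becomes completion.choices[0].message.content (as in your EDIT). A minimal sketch of the difference, using a hard-coded object that mimics only the fields your handler reads, not the full API response:

```javascript
// In v3, the SDK wrapped the API payload in `.data`; in v4 the payload
// is returned directly. This object mimics the v4 shape for the fields
// the handler uses (choices -> message -> content).
const v4Response = {
  choices: [{ message: { role: "assistant", content: "Hi!" } }],
};

// v3 access: completion.data.choices[0].message.content
// v4 access: completion.choices[0].message.content
const text = v4Response.choices[0].message.content;
console.log(text); // "Hi!"
```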