I am trying to use loadQAChain
with a custom prompt. The code to make the chain looks like this:
import { OpenAI } from 'langchain/llms/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';
import { LLMChain, loadQAChain, ChatVectorDBQAChain } from 'langchain/chains';
import { PromptTemplate } from 'langchain/prompts';

const CONDENSE_PROMPT =
  PromptTemplate.fromTemplate(`Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:`);

const QA_PROMPT =
  PromptTemplate.fromTemplate(`You are a helpful AI assistant. Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say you don't know. DO NOT try to make up an answer.
If the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.
{context}
Question: {question}
Helpful answer in markdown:`);

export const makeChain = (vectorstore: PineconeStore) => {
  const questionGenerator = new LLMChain({
    llm: new OpenAI({ temperature: 0 }),
    prompt: CONDENSE_PROMPT,
  });

  const docChain = loadQAChain(
    // change modelName to gpt-4 if you have access to it
    new OpenAI({ temperature: 0, modelName: 'gpt-3.5-turbo' }),
    {
      prompt: QA_PROMPT,
    }
  );

  return new ChatVectorDBQAChain({
    vectorstore,
    combineDocumentsChain: docChain,
    questionGeneratorChain: questionGenerator,
    returnSourceDocuments: true,
    k: 4, // number of source documents to return. Change this figure as required.
  });
};
I am getting the following error in Next.js whenever I call makeChain
from the API Route.
error Error: Invalid _type: undefined
at loadQAChain (webpack-internal:///(sc_server)/./node_modules/langchain/dist/chains/question_answering/load.js:31:11)
at makeChain (webpack-internal:///(sc_server)/./lib/makechain.ts:32:83)
at POST (webpack-internal:///(sc_server)/./app/api/chat/route.tsx:40:80)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (webpack-internal:///(sc_server)/./node_modules/next/dist/server/future/route-modules/app-route/module.js:265:37)
The error only happens when I pass { prompt: QA_PROMPT } to loadQAChain.
My package.json
looks like this:
{
  "private": true,
  "scripts": {
    "dev": "prisma generate && next dev",
    "server": "python manage.py runserver",
    "build": "prisma generate && prisma db push && next build",
    "start": "next start",
    "lint": "next lint",
    "ingest": "tsx -r dotenv/config scripts/ingest-data.ts"
  },
  "dependencies": {
    "@microsoft/fetch-event-source": "^2.0.1",
    "@pinecone-database/pinecone": "^0.1.6",
    "@prisma/client": "^4.14.0",
    "@radix-ui/react-accordion": "^1.1.2",
    "@types/node": "^18.11.9",
    "@types/react": "^18.0.25",
    "bcrypt": "^5.1.0",
    "clsx": "^1.2.1",
    "dotenv": "^16.3.1",
    "fs": "^0.0.1-security",
    "langchain": "^0.0.82",
    "lucide": "^0.246.0",
    "lucide-react": "^0.246.0",
    "next": "^13.4.2",
    "next-auth": "^4.22.1",
    "pdf-parse": "^1.1.1",
    "pinecone": "^0.1.0",
    "radix": "^0.0.0",
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "react-hot-toast": "^2.4.1",
    "react-markdown": "^8.0.7",
    "sanitize-filename": "^1.6.3",
    "sanitize-html": "^2.10.0",
    "sass": "^1.63.4",
    "tailwind-merge": "^1.13.2"
  },
  "devDependencies": {
    "@types/bcrypt": "^5.0.0",
    "autoprefixer": "^10.4.4",
    "eslint": "8.11.0",
    "eslint-config-next": "^13.0.5",
    "postcss": "^8.4.12",
    "prisma": "^4.14.0",
    "tailwindcss": "^3.0.23",
    "typescript": "^4.6.2"
  }
}
My tsconfig.json
looks like this:
{
  "compilerOptions": {
    "target": "es5",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "skipLibCheck": true,
    "baseUrl": ".",
    "paths": {
      "@/components/*": ["components/*"],
      "@/pages/*": ["pages/*"],
      "@/app/*": ["app/*"],
      "@/lib/*": ["lib/*"],
      "@/styles/*": ["styles/*"],
      "@/types/*": ["types/*"]
    },
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "noEmit": true,
    "esModuleInterop": true,
    "module": "esnext",
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "jsx": "preserve",
    "incremental": true,
    "plugins": [
      {
        "name": "next"
      }
    ]
  },
  "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
  "exclude": ["node_modules"]
}
My OpenAI API key and Pinecone environment are configured properly, and I was able to run a database ingestion using the current environment. Any ideas?
I am not very familiar with LangChain, so it is fairly challenging to explain. I tried using different chains, but I could not get them to work. I also tried removing the template and the error disappeared, but of course, that is not helpful.
This works with the latest version of LangChain; I'm not sure about 0.0.82, so update to 0.0.95. A couple of pointers: ChatVectorDBQAChain is deprecated. Use ConversationalRetrievalQAChain instead, like so:
new ConversationalRetrievalQAChain({
  retriever: vectorstore.asRetriever(NUM_SOURCE_DOCS),
  combineDocumentsChain: docChain,
  questionGeneratorChain: questionGenerator,
  returnSourceDocuments: true,
})
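Once built this way, the chain is invoked with question and chat_history inputs. A minimal usage sketch; sanitizedQuestion and history are placeholders for whatever your API route receives, and the exact shape of history depends on how you track the conversation:

const chain = makeChain(vectorstore);
const response = await chain.call({
  question: sanitizedQuestion,  // the user's latest message
  chat_history: history || [],  // prior conversation turns, empty on the first call
});
// response.text holds the answer; response.sourceDocuments is populated
// because returnSourceDocuments is true
console.log(response.text, response.sourceDocuments);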
In loadQAChain there is now a mandatory check on the chain type, which is why you get the Invalid _type: undefined error. You need to explicitly specify the chain type, like so:
const docChain = loadQAChain(
  new OpenAIChat({
    temperature: 0,
    modelName: 'gpt-3.5-turbo',
    streaming: Boolean(onTokenStream),
    callbacks: [
      {
        handleLLMNewToken(token) {
          if (onTokenStream) {
            onTokenStream(token);
          }
        },
      },
    ],
  }),
  {
    type: 'stuff',
    prompt: PromptTemplate.fromTemplate(QAPrompt),
  }
);
Here, onTokenStream is your token-stream callback handler function.
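Putting both pointers together, the makeChain from the question could look roughly like this on langchain ~0.0.95. This is a sketch, not a drop-in: CONDENSE_PROMPT and QA_PROMPT are the same templates defined in the question, NUM_SOURCE_DOCS and the optional onTokenStream parameter are illustrative names, and if I recall correctly OpenAIChat is exported from the same llms/openai entry point in that version:

import { OpenAI, OpenAIChat } from 'langchain/llms/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';
import { LLMChain, loadQAChain, ConversationalRetrievalQAChain } from 'langchain/chains';

const NUM_SOURCE_DOCS = 4; // replaces the old k: 4 option

// CONDENSE_PROMPT and QA_PROMPT are the same PromptTemplates defined in the question
export const makeChain = (
  vectorstore: PineconeStore,
  onTokenStream?: (token: string) => void,
) => {
  const questionGenerator = new LLMChain({
    llm: new OpenAI({ temperature: 0 }),
    prompt: CONDENSE_PROMPT,
  });

  const docChain = loadQAChain(
    new OpenAIChat({
      temperature: 0,
      modelName: 'gpt-3.5-turbo',
      streaming: Boolean(onTokenStream),
      callbacks: [
        {
          handleLLMNewToken(token: string) {
            onTokenStream?.(token);
          },
        },
      ],
    }),
    {
      type: 'stuff', // the now-mandatory chain type
      prompt: QA_PROMPT,
    }
  );

  return new ConversationalRetrievalQAChain({
    retriever: vectorstore.asRetriever(NUM_SOURCE_DOCS),
    combineDocumentsChain: docChain,
    questionGeneratorChain: questionGenerator,
    returnSourceDocuments: true,
  });
};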
Hope it helps!