Build a Chatbot

Overview

We'll go over an example of how to design and implement an LLM-powered chatbot. This chatbot will be able to have a conversation and remember previous interactions.

Note that the chatbot we build here will only use the language model to have a conversation. There are several other related concepts you may be looking for:

  • Conversational RAG: Enable a chatbot experience over an external source of data
  • Agents: Build a chatbot that can take actions

This tutorial covers basics that will be helpful for those two more advanced topics, but feel free to skip straight to them should you choose.

Concepts

Here are a few of the high-level components we'll be working with:

  • Chat Models. The chatbot interface is based around messages rather than raw text, and therefore is best suited to Chat Models rather than text LLMs.
  • Prompt Templates, which simplify the process of assembling prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.
  • Chat History, which allows a chatbot to "remember" past interactions and take them into account when responding to followup questions.
  • Debugging and tracing your application using LangSmith.

We'll cover how to fit the above components together to create a powerful conversational chatbot.
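
As a rough preview, a prompt template that combines a default system message, prior chat history, and the latest user input might look like the following sketch (assuming the @langchain/core package; the chat_history and input variable names are illustrative):

import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

// Combine a default system message, prior chat history, and user input
// into a single prompt that can be sent to a chat model.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);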

Setup

Jupyter Notebook

This guide (and most of the other guides in the documentation) uses Jupyter notebooks and assumes the reader does as well. Jupyter notebooks are a great fit for learning to work with LLM systems because things often go wrong (unexpected output, an API being down, etc.), and stepping through guides in an interactive environment is a great way to understand them better.

This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See the Jupyter documentation for installation instructions.

Installation

To install LangChain run:

npm i langchain

For more details, see our Installation guide.

LangSmith

Many of the applications you build with LangChain will contain multiple steps with multiple LLM calls. As these applications become more complex, it is crucial to be able to inspect exactly what is going on inside your chain or agent. The best way to do this is with LangSmith.

After you sign up for LangSmith, make sure to set your environment variables to start logging traces:

export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."

Or, if in a notebook, you can set them with:
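
For example, assuming a Node.js-backed notebook kernel where process.env is writable:

// Set tracing variables before importing any LangChain modules.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "...";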

Quickstart

First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!

Pick your chat model:

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});
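
With the model instantiated, you can invoke it directly with a list of messages. A minimal sketch, assuming @langchain/core is installed alongside the model package:

import { HumanMessage } from "@langchain/core/messages";

// Pass a list of messages; the model responds with an AIMessage.
const response = await model.invoke([
  new HumanMessage({ content: "Hi! I'm Bob" }),
]);
console.log(response.content);

Note that the bare model has no built-in memory of prior turns; remembering past interactions is exactly what the chat history component adds later in this tutorial.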