
How to Build a Chatbot with LangChain 2026: Step-by-Step Tutorial

Imagine having a super helpful friend who can answer questions, tell stories, or even help you plan things. This friend is a chatbot! Chatbots are computer programs designed to talk with people, just like you would talk to another person. They are becoming super smart and useful in many places, from customer service to learning new things.

In this guide, you will learn how to build a really clever chatbot using something called LangChain, with everything geared toward 2026. LangChain is like a magic toolkit that makes it much easier to create these smart talking programs. We will walk you through everything, step-by-step, even if you are just starting out. Get ready to build your very own digital assistant!

Getting Ready: Your Chatbot’s Workshop (Environment Setup)

Before we start building anything cool, you need a proper workspace on your computer. Think of it like preparing your desk before starting a fun art project. Having a clean and organized setup helps everything run smoothly. You don’t want your tools getting mixed up with other projects.

Your computer needs to be ready to understand the instructions we give it. This involves making sure you have the right version of Python installed. Python is a popular programming language, like a special language your computer understands. It’s the main language we will use to build your chatbot.

What You Will Need

First, you will need a computer (Windows, Mac, or Linux will all work just fine). You also need a good internet connection to download some important parts. Think of it like needing good ingredients for a recipe. We’ll also use a special program called a text editor, which is like a fancy notebook for writing code.

You might already have Python installed, but it’s good to check. We recommend using Python version 3.9 or newer for the best experience with LangChain in 2026. You can visit the official Python website to download the latest version if you don’t have it. Just search for “download Python” on your favorite search engine.

Setting Up Python

To install Python, go to python.org/downloads. Look for the newest stable version and follow the installation instructions for your computer. Make sure to tick the box that says “Add Python to PATH” during installation, especially if you are on Windows. This step helps your computer find Python easily from anywhere.

After installing, open your computer’s command prompt (or terminal on Mac/Linux). Type python --version and press Enter. If you see a number like Python 3.10.x or Python 3.11.x, then you are all set. If not, don’t worry, just retrace your steps or search for specific installation guides for your operating system.

Creating a Special Place for Your Project (Virtual Environment)

Now, let’s create a special, clean box for your chatbot project. This is called a “virtual environment.” It’s like having a separate toolbox for each project, so tools for one project don’t mess with another. This keeps things tidy and prevents conflicts.

Open your command prompt or terminal again. First, decide where you want to keep your project on your computer, like a folder named my_langchain_chatbot. Use the cd command to go into that folder. For example, cd C:\Users\YourName\Documents\my_langchain_chatbot.

Once inside your project folder, type python -m venv venv and press Enter. This command creates your virtual environment, usually named venv. It might take a moment, but you will see a new folder appear. This venv folder contains a private copy of Python and space for our project’s tools.

Finally, you need to “activate” this special environment. On Windows, type .\venv\Scripts\activate and press Enter. On Mac or Linux, type source venv/bin/activate. You will know it worked because you’ll see (venv) appear at the start of your command line. Now your chatbot workshop is ready!
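
Here is a quick recap of those environment commands in one place (the folder names are just examples, so adjust them to your own setup):

# Create and enter your project folder
mkdir my_langchain_chatbot
cd my_langchain_chatbot

# Create the virtual environment
python -m venv venv

# Activate it (Windows)
.\venv\Scripts\activate

# Activate it (Mac/Linux)
source venv/bin/activate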

Gathering Your Tools (Installing Dependencies)

With your workspace ready, it’s time to bring in the specific tools we need. In programming, these tools are called “dependencies” or “packages.” Think of them as special ingredients or gadgets that your chatbot recipe requires to work. We need LangChain itself, plus a way for our chatbot to access a powerful “brain” to think.

What Are Dependencies?

Dependencies are pieces of code written by other people that we can use in our own programs. Instead of building everything from scratch, we stand on the shoulders of giants. This saves a lot of time and effort. For example, if you want to make a cake, you don’t invent flour, you just buy it!

For our chatbot, the most important dependency will be LangChain. But we’ll also need a way to talk to a “Large Language Model” (LLM), which is the super-smart AI part that generates text. We’ll likely use a library that connects to services like OpenAI or similar providers. Each of these is a dependency.

Installing LangChain

With your virtual environment activated (remember you should see (venv) in your command line), installing LangChain is super easy. You use a tool called pip, which is Python’s package installer. Think of pip as a magical delivery service for Python tools.

Type pip install langchain and press Enter. Pip will then go and fetch all the necessary parts of LangChain and put them into your venv folder. This process usually takes a few seconds or a minute, depending on your internet speed. You will see messages showing the progress of the installation.

LangChain is constantly being updated with new features and improvements. In 2026, it will likely have even more advanced ways to chain together different AI actions. Staying updated is key to leveraging its full power. You can always check the official LangChain documentation for the latest installation steps if anything changes.

Installing Other Helpful Tools

Besides LangChain, our chatbot needs a powerful language model to do the actual thinking and talking. For this tutorial, we will use a common choice, an OpenAI model, but many other models work just as well. To use OpenAI’s models with LangChain, you need to install the dedicated integration package, langchain-openai.

Type pip install langchain-openai and press Enter. This package provides the ChatOpenAI class we will use later and pulls in OpenAI’s own openai library, letting your Python code easily send questions to OpenAI’s powerful computers and get answers back. Remember, to use OpenAI’s services, you will also need an API key. An API key is like a secret password that allows your program to use their services and links to your account. You can get one from the OpenAI website after signing up.

You might also want to install python-dotenv, which helps keep your secret API keys out of your code. This is good practice. To do this, type pip install python-dotenv. We will show you how to use it later to protect your secret keys. Keeping your keys safe is very important, just like protecting your personal passwords.

Here’s a quick summary of the installation commands:

# Make sure your virtual environment is activated!
# (venv)
pip install langchain
pip install langchain-openai
pip install python-dotenv

Once all these are installed, your toolbox is complete! You have the framework (LangChain) and the connection to a brain (the langchain-openai integration). Now you’re ready to start telling your chatbot what to do. This setup is the foundation for everything that follows.

Meeting Your Chatbot’s Brain (Creating Chat Models)

Every smart chatbot needs a brain, and in our case, this brain is a “chat model” or “Large Language Model” (LLM). This is the part that truly understands what you say and figures out how to respond. LangChain makes it easy to connect to these powerful brains, even different types of brains.

What is a Chat Model?

Imagine a vast library filled with almost all the text ever written on the internet. A chat model is like a super-intelligent librarian who has read and understood every single book in that library. When you ask it a question, it uses all that knowledge to give you a helpful answer. It doesn’t just look up words; it understands meaning and context.

These models are trained on huge amounts of text data, allowing them to generate human-like text, translate languages, write different kinds of creative content, and answer your questions in an informative way. In 2026, these models are even more advanced, faster, and understand nuances better than ever before. LangChain is designed to work seamlessly with many of them.

Choosing a Model

There are many chat models available, each with its strengths. Some popular ones include:

  • OpenAI’s GPT models (like GPT-4, GPT-5): Known for their strong general understanding and creative abilities. They are often a great starting point for many projects.
  • Anthropic’s Claude models: Another very capable family of models, often praised for their safety features.
  • Google’s Gemini models: Google’s powerful suite of models, offering multimodal capabilities (understanding text, images, etc.).
  • Open-source models (like Llama, Mistral): These models can be run on your own computer or servers, offering more control but sometimes requiring more technical setup.

For this step-by-step guide, we will use an OpenAI model because it’s widely accessible and powerful. However, the LangChain code is very flexible, so you can easily swap it out for another model later if you wish. This flexibility is one of LangChain’s superpowers.

How to Connect LangChain to This Brain

First, make sure you have your OpenAI API key ready. It’s best practice to store this in a .env file so it’s not directly in your code. Create a file named .env in your project folder and add your key like this:

OPENAI_API_KEY="your_secret_openai_api_key_here"

Remember to replace "your_secret_openai_api_key_here" with your actual key. Also, it is very important never to share this file or upload it to public code sites like GitHub. It’s your secret password.
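
One common safeguard is a .gitignore file in your project folder, which tells Git to skip these files entirely. A minimal one might look like this:

# .gitignore - keeps secrets and the virtual environment out of version control
.env
venv/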

Now, let’s connect LangChain to our chosen brain. Create a new Python file, for example, chatbot_brain.py. Inside this file, you will write the following code:

# chatbot_brain.py

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Load environment variables from .env file
load_dotenv()

# Get your API key from the environment
openai_api_key = os.getenv("OPENAI_API_KEY")

# Check if the API key is loaded
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY not found. Please set it in your .env file.")

# Create a Chat Model instance
# We specify the model name (e.g., "gpt-4-turbo" for 2026 capabilities)
# and provide the API key.
chat_model = ChatOpenAI(
    openai_api_key=openai_api_key,
    model="gpt-4-turbo-2024-04-09", # Or a newer model like "gpt-5-turbo" if available in 2026
    temperature=0.7 # This controls how creative or random the AI's responses are. 0.0 is very focused, 1.0 is very creative.
)

print("Chat model created successfully!")

# Let's test it with a simple question!
messages = [
    SystemMessage(content="You are a helpful assistant that explains things simply."),
    HumanMessage(content="Explain photosynthesis to a 10 year old.")
]

response = chat_model.invoke(messages)
print("\n--- Model's Response ---")
print(response.content)

# Another example
messages_french = [
    SystemMessage(content="You are a helpful translator."),
    HumanMessage(content="Translate 'Hello, how are you?' to French.")
]
response_french = chat_model.invoke(messages_french)
print("\n--- French Translation ---")
print(response_french.content)

In this code:

  1. load_dotenv() helps us grab the API key from our .env file without showing it directly in our code.
  2. ChatOpenAI is the specific LangChain tool that connects to OpenAI’s chat models.
  3. We tell it which model to use (like gpt-4-turbo-2024-04-09 which is a good representative of advanced models by 2026, or a hypothetical gpt-5-turbo). You can find the latest model names on the OpenAI documentation.
  4. temperature controls how creative the model is. A lower number means more predictable answers; a higher number means more imaginative ones.
  5. invoke(messages) sends our question to the model. We wrap our question in HumanMessage to show it’s from a person, and SystemMessage can give the AI overall instructions.

This is your chatbot’s first breath of intelligence! You have successfully connected to a powerful AI brain, which sets up the core intelligence everything else builds on. You can now save and run this file using python chatbot_brain.py in your activated virtual environment. You will see your chatbot’s responses printed on the screen.

Building the Conversation Path (Building Conversation Chains)

Now that our chatbot has a brain, we need to teach it how to have a proper conversation. Just connecting to the brain isn’t enough; we need to guide the flow of information. This is where “chains” in LangChain come into play. Chains are like recipes that tell your chatbot what steps to follow to answer a question.

What is a Chain?

Think of a chain as a series of connected actions. Each link in the chain does something specific, and together they achieve a bigger goal. For a chatbot, a chain might involve:

  1. Getting a user’s question.
  2. Sending that question to the AI brain.
  3. Getting the answer back.
  4. Maybe even doing something extra, like remembering what was said before.

LangChain helps you easily link these steps together, building complex conversation flows from simple parts. It’s one of LangChain’s most powerful features, allowing for modular and flexible design.

Simple Chains: Input -> Model -> Output

Let’s start with the simplest chain: taking input from a user, sending it to our chat model, and getting an output. LangChain has a special syntax for this called LCEL (LangChain Expression Language), which makes building these chains very easy and readable.

We can combine our chat model with a “prompt template.” A prompt template is like a fill-in-the-blank card for your AI. It guides the AI on how to interpret the user’s question and what kind of answer to give.

Let’s create a new file, conversation_chain.py, and put this code inside:

# conversation_chain.py

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY not found. Please set it in your .env file.")

chat_model = ChatOpenAI(
    openai_api_key=openai_api_key,
    model="gpt-4-turbo-2024-04-09",
    temperature=0.7
)

# 1. Define a Prompt Template
# This template tells the AI what kind of assistant it is and where to put the user's question.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly assistant that provides concise and helpful answers."),
    ("user", "{question}") # The user's input will go here
])

# 2. Define an Output Parser
# This helps us get a clean string out of the AI's response.
output_parser = StrOutputParser()

# 3. Build the Chain using LCEL
# We combine the prompt, the model, and the output parser using the pipe | symbol.
# This means: prompt -> chat_model -> output_parser
simple_chain = prompt | chat_model | output_parser

print("Simple chain created!")

# Now, let's use our chain!
print("\n--- Using the Simple Chain ---")
user_input_1 = "What is the capital of France?"
response_1 = simple_chain.invoke({"question": user_input_1})
print(f"You asked: {user_input_1}")
print(f"Chatbot says: {response_1}")

print("\n--- Another example ---")
user_input_2 = "Tell me a fun fact about space."
response_2 = simple_chain.invoke({"question": user_input_2})
print(f"You asked: {user_input_2}")
print(f"Chatbot says: {response_2}")

In this example, the ChatPromptTemplate makes sure our chat_model gets the user’s question wrapped in helpful instructions. The StrOutputParser then takes the model’s complex answer and just gives us the simple text. The | symbol is special in LangChain LCEL; it connects the steps, creating our chain. Run this with python conversation_chain.py.

Memory: How Your Chatbot Remembers What You Said Before

A real conversation isn’t just one question and one answer. It flows, and people remember what was said earlier. For a chatbot, this ability to remember is called “memory.” Without memory, your chatbot would forget your previous questions and answers in a chat, making it seem very unintelligent.

LangChain provides several ways to add memory to your chains. One common type is ConversationBufferMemory, which simply stores all previous messages. This allows the AI to consider the whole chat history when generating a new response. Adding memory is a key part of building a chatbot that feels natural and helpful.

Let’s modify our conversation_chain.py to include memory. In this example we will manage the history ourselves, but if you later use LangChain’s prebuilt memory and integration classes, you may also need the langchain-community package:

pip install langchain-community

Now, let’s update our Python file:

# conversation_chain_with_memory.py

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
from langchain_core.messages import AIMessage, HumanMessage # Needed for memory

load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY not found. Please set it in your .env file.")

chat_model = ChatOpenAI(
    openai_api_key=openai_api_key,
    model="gpt-4-turbo-2024-04-09",
    temperature=0.7
)

# Define a more advanced prompt template for a conversational agent
# MessagesPlaceholder tells the prompt where to put the chat history.
# We also want the agent to be aware of the *current* input.
prompt_template_with_memory = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly and helpful assistant. Keep your answers concise."),
    MessagesPlaceholder(variable_name="chat_history"), # This is where memory goes!
    ("user", "{input}") # The current user input
])

# Build the conversational chain.
# In production, LangChain's `RunnableWithMessageHistory` automates history management,
# but to introduce the concept clearly we will pass the history into `invoke` manually.
# The chain below simply _accepts_ a history; we maintain that history ourselves.

# The chain that processes input and history
chain_with_memory = prompt_template_with_memory | chat_model | StrOutputParser()

# Let's manually manage chat history for this example to show the concept clearly.
# In a real application, LangChain's `RunnableWithMessageHistory` would automate this.
chat_history = []

def run_chat_with_memory(user_input):
    global chat_history # To modify the global chat_history list

    # Prepare input for the chain, including current chat history
    response = chain_with_memory.invoke({
        "input": user_input,
        "chat_history": chat_history
    })

    # After getting a response, update the chat history
    chat_history.append(HumanMessage(content=user_input))
    chat_history.append(AIMessage(content=response))

    return response

print("Conversational chain with memory placeholder created!")

print("\n--- Chatting with Memory ---")
print("User: Hello!")
resp1 = run_chat_with_memory("Hello!")
print(f"Chatbot: {resp1}")

print("\nUser: What is the capital of France?")
resp2 = run_chat_with_memory("What is the capital of France?")
print(f"Chatbot: {resp2}")

print("\nUser: And what is its population?") # Referencing "its" from the previous turn
resp3 = run_chat_with_memory("And what is its population?")
print(f"Chatbot: {resp3}")

print("\nUser: Can you tell me another fun fact about it?")
resp4 = run_chat_with_memory("Can you tell me another fun fact about it?")
print(f"Chatbot: {resp4}")

# We can also inspect the chat_history to see what's stored
print("\n--- Current Chat History ---")
for msg in chat_history:
    print(f"{msg.type}: {msg.content}")

In this memory example, we set up a prompt_template_with_memory that includes MessagesPlaceholder(variable_name="chat_history"). This placeholder tells the AI brain to consider the conversation history. We then manually manage chat_history for demonstration. In a real-world application, LangChain’s RunnableWithMessageHistory is the more advanced and recommended way to manage this automatically, connecting it with ConversationBufferMemory or other memory types. However, this manual example clearly shows how the AI receives the past conversation.
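
For reference, here is a minimal sketch of what that automated approach could look like, reusing chain_with_memory from above (the store dictionary and session IDs here are illustrative):

# A sketch of automated history management with RunnableWithMessageHistory.
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

store = {}  # session_id -> InMemoryChatMessageHistory

def get_session_history(session_id):
    # Create a fresh history the first time a session ID appears
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chatbot = RunnableWithMessageHistory(
    chain_with_memory,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)

# Each session_id gets its own automatically-updated history:
reply = chatbot.invoke(
    {"input": "What is the capital of France?"},
    config={"configurable": {"session_id": "user-1"}},
)
print(reply)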

By building these chains, you’re designing the specific actions your chatbot takes. This flexibility is a big part of what makes LangChain so powerful. You can create very simple chatbots or extremely complex ones with many steps. For more advanced memory types and chain configurations, you can always refer to the LangChain documentation on memory.

Teaching Your Chatbot to Listen (Adding User Input Handling)

A chatbot is useless if it can’t understand what you, the user, are trying to say. This step is about making sure your chatbot can receive and process your words properly. It’s like giving your chatbot ears and teaching it how to pay attention.

How Your Chatbot Gets What You Type

When you type something into a chat window, that text needs to be sent to your chatbot’s program. In our Python programs, we’ve been simulating this by directly passing strings like "What is the capital of France?" to our chains. But in a real application, this input would come from a user interface, like a web page or an app.

The simplest way to get input from a user in a Python script is using the input() function. This function pauses your program and waits for you to type something and press Enter. Then, whatever you typed becomes a string that your program can use. This is a great way to test your chatbot interactively from your command line.

Let’s modify our conversation_chain_with_memory.py to allow continuous user input:

# interactive_chatbot.py

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
from langchain_core.messages import AIMessage, HumanMessage

load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY not found. Please set it in your .env file.")

chat_model = ChatOpenAI(
    openai_api_key=openai_api_key,
    model="gpt-4-turbo-2024-04-09",
    temperature=0.7
)

# Define the conversational prompt template
prompt_template_with_memory = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly and helpful assistant. Keep your answers concise and polite."),
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}")
])

chain_with_memory = prompt_template_with_memory | chat_model | StrOutputParser()

# Initialize chat history
chat_history = []

print("Welcome to your interactive LangChain chatbot! Type 'quit' or 'exit' to end the chat.")
print("---")

while True:
    user_input = input("You: ")
    if user_input.lower() in ["quit", "exit"]:
        print("Chatbot: Goodbye!")
        break

    # Process user input and get response
    response_content = chain_with_memory.invoke({
        "input": user_input,
        "chat_history": chat_history
    })

    print(f"Chatbot: {response_content}")

    # Update chat history
    chat_history.append(HumanMessage(content=user_input))
    chat_history.append(AIMessage(content=response_content))

    print("---")

Now, when you run python interactive_chatbot.py, you will see a You: prompt. You can type your questions, and the chatbot will respond, remembering what you said before! This provides a much more natural way to interact with your chatbot and test its capabilities. This interactive loop is the core of any chatbot you eventually put in front of real users.

Making Sure the Input is Clean and Ready for the Brain

Sometimes, what a user types isn’t perfectly clean. There might be extra spaces, weird characters, or even attempts to “trick” the chatbot. “Input preprocessing” is the step where you clean up the user’s message before sending it to the AI brain.

Common preprocessing steps include:

  • Stripping whitespace: Removing extra spaces from the beginning or end of a sentence.
  • Lowercasing: Converting all text to lowercase, so “Hello” and “hello” are treated the same.
  • Removing special characters: Taking out symbols that might confuse the AI, unless they are important (like question marks).
  • Checking for profanity: You might want to filter out rude words.

For most basic chatbots using advanced LLMs, extensive preprocessing isn’t always strictly necessary because LLMs are very good at understanding natural, even messy, human language. However, for specific applications or to prevent certain types of misuse, it can be very useful.

Let’s add a simple preprocessing step to our interactive_chatbot.py example:

# interactive_chatbot_with_preprocessing.py

# ... (previous imports and setup remain the same) ...

# Inside the while True loop, before invoking the chain:
    user_input = input("You: ")
    if user_input.lower() in ["quit", "exit"]:
        print("Chatbot: Goodbye!")
        break

    # --- New: Simple Input Preprocessing ---
    cleaned_input = user_input.strip() # Remove leading/trailing whitespace
    # You could add more here, like checking for empty input:
    if not cleaned_input:
        print("Chatbot: Please type something!")
        continue # Skip to the next loop iteration

    # You could also consider a simple profanity filter if needed.
    # For example:
    # if "badword" in cleaned_input.lower():
    #     print("Chatbot: Please use polite language.")
    #     continue

    # Process the CLEANED user input and get response
    response_content = chain_with_memory.invoke({
        "input": cleaned_input, # Use the cleaned input
        "chat_history": chat_history
    })

    print(f"Chatbot: {response_content}")

    # Update chat history with the original user input, or cleaned if you prefer
    chat_history.append(HumanMessage(content=user_input)) # Storing original for context
    chat_history.append(AIMessage(content=response_content))

    print("---")

By adding cleaned_input = user_input.strip(), we ensure that any accidental spaces at the beginning or end of the user’s input are removed. This helps keep the input consistent. While advanced LLMs are robust, good input handling improves reliability and gives you a place to enforce specific chatbot behaviors.

Different Ways Users Can Talk to Your Bot

So far, we’ve imagined users typing text. But chatbots can interact in many other ways:

  • Voice: Users speak to the bot, and “speech-to-text” technology converts their voice to text.
  • Buttons/Quick Replies: In some chat interfaces, users can click on predefined buttons instead of typing.
  • Images/Files: Advanced chatbots (especially in 2026) can often process images or documents that users send.

For a text-based chatbot, you are primarily focused on handling typed input. If you want to add voice, you would integrate a speech-to-text library (like Google’s Speech-to-Text API or a local one like Vosk) before your LangChain chain. The output of the speech-to-text system would then become the user_input for your chain, greatly expanding how users can reach your bot; a rough sketch follows. For more information, check out this article on speech recognition in Python.
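
To make that concrete, here is a rough, illustrative sketch using the third-party SpeechRecognition package (pip install SpeechRecognition pyaudio); any speech-to-text service could take its place:

# voice_input.py - illustrative only; swap in whichever speech-to-text tool you prefer
import speech_recognition as sr

recognizer = sr.Recognizer()

def listen_for_input():
    """Capture one utterance from the microphone and return it as text."""
    with sr.Microphone() as source:
        print("Listening...")
        audio = recognizer.listen(source)
    # recognize_google sends the audio to Google's free web speech API
    return recognizer.recognize_google(audio)

# The returned text is then passed to the chain exactly like typed input:
# response = chain_with_memory.invoke({"input": listen_for_input(), "chat_history": chat_history})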

Making Your Chatbot Talk Back (Implementing Response Generation)

After your chatbot listens, understands, and thinks, it needs to form a reply. This is called “response generation,” and it’s where the AI model truly shines. We want our chatbot’s answers to be helpful, clear, and relevant to the conversation.

How the Chatbot Creates Its Answers

Our chat model (the LLM like GPT-4) is designed to predict the next best word in a sequence. When you give it a prompt and a chat history, it uses its vast knowledge to generate text that logically follows the conversation. It’s not just picking pre-written answers; it’s creating new sentences on the fly.

LangChain helps us structure this process, ensuring the model gets all the information it needs (like the system instructions, the current question, and past chat history) to craft a good reply. The StrOutputParser() we used earlier then simply extracts this generated text for us to display. This seamless integration is what makes response generation so straightforward.

Using Prompts to Guide the Chatbot’s Replies

The most powerful tool you have to control your chatbot’s responses is the “prompt.” A prompt is the instruction or context you give to the AI before it generates a response. Think of it as telling an actor what role to play and what lines they need to deliver.

In our ChatPromptTemplate from earlier, we used a “system message”: ("system", "You are a friendly and helpful assistant. Keep your answers concise and polite."). This system message is crucial. It sets the tone and guidelines for the AI.

Here are ways you can use prompts effectively:

  • Define Personality: “You are a cheerful customer support agent.”
  • Set Constraints: “Answer only with facts, do not make anything up.”
  • Specify Format: “Provide your answer as a list of bullet points.”
  • Give Examples (Few-shot prompting): Provide a few example question-answer pairs to show the AI the desired style or content (see the sketch just below).
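
Few-shot prompting is easy to sketch with ChatPromptTemplate: you simply include example exchanges before the real question. The example answers here are invented for illustration:

# A minimal few-shot sketch: the example Q&A pairs teach the model the desired style.
from langchain_core.prompts import ChatPromptTemplate

few_shot_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly assistant. Answer in one short, cheerful sentence."),
    # Example pairs showing the desired style:
    ("user", "What is the sun?"),
    ("ai", "It's a giant ball of glowing gas that lights up our whole solar system!"),
    ("user", "What is rain?"),
    ("ai", "It's water drops falling from clouds when they get too heavy to float!"),
    # The real question goes last:
    ("user", "{question}")
])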

Let’s modify our prompt in interactive_chatbot.py to give our chatbot a slightly different personality.

# interactive_chatbot_with_refined_prompt.py

# ... (previous imports and setup remain the same) ...

# Define a more specific and refined prompt template
prompt_template_with_memory = ChatPromptTemplate.from_messages([
    ("system", "You are 'Professor Spark', a wise and enthusiastic science teacher for 10-year-olds. Your goal is to explain complex science topics in an easy-to-understand, fun, and encouraging way. Always use analogies and simple words. End your responses with a fun fact or a challenge question!"),
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}")
])

chain_with_memory = prompt_template_with_memory | chat_model | StrOutputParser()

# ... (rest of the interactive loop remains the same) ...

Now, when you run this chatbot, it will try its best to act like “Professor Spark.” This shows how much control you have over the AI’s behavior just by changing a few sentences in the prompt. Experiment with different system messages to see how your chatbot’s personality and response style change. Crafting good prompts is an art and a science, and it’s essential for building a chatbot that meets specific needs.

Making Answers Sound Natural and Helpful

Beyond just generating text, we want the answers to feel natural and truly helpful. Here are some tips for achieving that:

  • Clarity and Conciseness: AI models can sometimes be verbose. Guide them to be concise in your prompt.
  • Avoid Repetition: If the AI repeats itself, refine the prompt or adjust the temperature (a lower temperature makes it less random and less likely to repeat).
  • Correct Tone: Does it sound too formal or too casual? Adjust the system message to specify the desired tone.
  • Handle Uncertainty Gracefully: If the bot doesn’t know an answer, it should say so politely, rather than making something up. We will cover this more in handling edge cases.

For instance, if your chatbot is too direct, you might add to the system message: “Always be polite and empathetic.” If it gives too much information, add: “Keep your answers to a maximum of two sentences.” These small tweaks in the prompt can make a huge difference in your chatbot’s user experience.

Bringing Your Chatbot to Life (Connecting to Frontend)

So far, our chatbot lives in your computer’s command line. While great for testing, most people want to talk to a chatbot through a website or a mobile app. This is called the “frontend” – the part of the program that users actually see and interact with. Connecting your Python chatbot to a frontend means making it accessible to everyone.

Where People Will Talk to Your Chatbot

Imagine a customer service website, an educational portal, or even a fun app. These are all places where your chatbot could live. The frontend is the visual interface that takes user input (like typing a message) and displays the chatbot’s response. It’s the face of your chatbot project.

For a web-based chatbot, the frontend is typically built using web technologies like HTML, CSS, and JavaScript. For mobile apps, you might use languages like Swift (for iPhones) or Kotlin (for Android phones), or cross-platform tools like React Native or Flutter.

Simple Ways to Connect (Like a Web Page)

For beginners, connecting a Python backend (where your LangChain chatbot lives) to a simple web frontend can be done using web frameworks. These frameworks help you create web pages and handle requests from a browser.

Two popular Python web frameworks are:

  • Flask: A lightweight and easy-to-use framework for building web applications.
  • Streamlit: A fantastic tool specifically designed for easily creating interactive web apps for data science and machine learning projects, often used for quick prototypes.

Let’s use Flask to create a very simple web interface for our chatbot. This will involve:

  1. Creating a Flask web server.
  2. Making a simple HTML page with an input box and a display area for messages.
  3. Writing a little JavaScript to send user messages to the Flask server and receive replies.

First, install Flask:

pip install Flask

Now, let’s create two files: app.py (for the Flask server) and templates/index.html (for the web page). Create a new folder named templates in your project directory, and put index.html inside it.

app.py:

# app.py

import os
from dotenv import load_dotenv
from flask import Flask, render_template, request, jsonify
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
from langchain_core.messages import AIMessage, HumanMessage

app = Flask(__name__)

# --- LangChain Chatbot Setup (same as before) ---
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY not found. Please set it in your .env file.")

chat_model = ChatOpenAI(
    openai_api_key=openai_api_key,
    model="gpt-4-turbo-2024-04-09",
    temperature=0.7
)

prompt_template_with_memory = ChatPromptTemplate.from_messages([
    ("system", "You are 'Professor Spark', a wise and enthusiastic science teacher for 10-year-olds. Your goal is to explain complex science topics in an easy-to-understand, fun, and encouraging way. Always use analogies and simple words. End your responses with a fun fact or a challenge question!"),
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}")
])

chain_with_memory = prompt_template_with_memory | chat_model | StrOutputParser()

# In a real app, you'd manage session-specific chat histories.
# For this simple example, we'll keep a global history (not suitable for multiple users).
# For multiple users, you'd use Flask session, a database, or a more advanced state management.
global_chat_history = [] 

@app.route('/')
def index():
    """Renders the main chat interface."""
    return render_template('index.html')

@app.route('/chat', methods=['POST'])
def chat():
    """Handles chat messages from the frontend."""
    global global_chat_history
    user_message = request.json.get('message')

    if not user_message:
        return jsonify({"response": "Please enter a message."}), 400

    # Basic preprocessing
    cleaned_input = user_message.strip()
    if not cleaned_input:
        return jsonify({"response": "Please type something!"}), 200 # Politely ask for real input

    # Invoke the LangChain with current chat history
    response_content = chain_with_memory.invoke({
        "input": cleaned_input,
        "chat_history": global_chat_history
    })

    # Update chat history for the next turn
    global_chat_history.append(HumanMessage(content=user_message))
    global_chat_history.append(AIMessage(content=response_content))

    return jsonify({"response": response_content})

if __name__ == '__main__':
    app.run(debug=True) # debug=True means the server will reload when you make changes

templates/index.html:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>LangChain Chatbot</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; background-color: #f4f4f4; }
        .chat-container { max-width: 600px; margin: 0 auto; background-color: #fff; border-radius: 8px; box-shadow: 0 2px 10px rgba(0,0,0,0.1); display: flex; flex-direction: column; height: 70vh; }
        .chat-history { flex-grow: 1; padding: 20px; overflow-y: auto; border-bottom: 1px solid #eee; }
        .chat-input { display: flex; padding: 15px; border-top: 1px solid #eee; }
        .chat-input input[type="text"] { flex-grow: 1; padding: 10px; border: 1px solid #ddd; border-radius: 20px; margin-right: 10px; font-size: 16px; }
        .chat-input button { background-color: #007bff; color: white; border: none; padding: 10px 20px; border-radius: 20px; cursor: pointer; font-size: 16px; }
        .chat-input button:hover { background-color: #0056b3; }
        .message { margin-bottom: 10px; line-height: 1.5; }
        .message.user { text-align: right; color: #333; }
        .message.bot { text-align: left; color: #007bff; }
        .message span { display: inline-block; padding: 8px 12px; border-radius: 15px; max-width: 80%; }
        .message.user span { background-color: #e6f3ff; }
        .message.bot span { background-color: #f0f0f0; }
    </style>
</head>
<body>
    <div class="chat-container">
        <div class="chat-history" id="chat-history">
            <!-- Messages will appear here -->
            <div class="message bot"><span>Hello! I am Professor Spark. Ask me anything about science!</span></div>
        </div>
        <div class="chat-input">
            <input type="text" id="user-message" placeholder="Type your message here..." onkeyup="handleKeyPress(event)">
            <button onclick="sendMessage()">Send</button>
        </div>
    </div>

    <script>
        async function sendMessage() {
            const userMessageInput = document.getElementById('user-message');
            const message = userMessageInput.value.trim();

            if (message === '') {
                return;
            }

            // Display user message
            appendMessage('user', message);
            userMessageInput.value = ''; // Clear input

            // Send message to Flask backend
            try {
                const response = await fetch('/chat', {
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/json'
                    },
                    body: JSON.stringify({ message: message })
                });
                const data = await response.json();
                appendMessage('bot', data.response);
            } catch (error) {
                console.error('Error sending message:', error);
                appendMessage('bot', 'Oops! Something went wrong. Please try again.');
            }
        }

        function appendMessage(sender, text) {
            const chatHistory = document.getElementById('chat-history');
            const messageDiv = document.createElement('div');
            messageDiv.classList.add('message', sender);
            
            const textSpan = document.createElement('span');
            textSpan.textContent = text;
            messageDiv.appendChild(textSpan);

            chatHistory.appendChild(messageDiv);
            chatHistory.scrollTop = chatHistory.scrollHeight; // Scroll to bottom
        }

        function handleKeyPress(event) {
            if (event.key === 'Enter') {
                sendMessage();
            }
        }
    </script>
</body>
</html>

To run this:

  1. Save app.py in your project folder.
  2. Create a folder named templates in the same project folder.
  3. Save index.html inside the templates folder.
  4. Make sure your venv is activated.
  5. Run python app.py.
  6. Open your web browser and go to http://127.0.0.1:5000/.

You will now see a simple chat interface! Type a message, click Send, and your LangChain chatbot will reply directly in the web page. This is a practical, working example of bringing your chatbot to a visual interface. Remember, the global_chat_history is a simplification for a single user in this example. For a multi-user application, you would need to manage chat history per user session, perhaps using Flask’s session object or a database; a sketch of one approach follows.
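
Here is one illustrative way to keep a separate history per user, replacing the /chat route above. It assumes the frontend also sends a session_id field with each message (the HTML above does not do this yet):

# A sketch of per-session memory: one history list per session ID.
session_histories = {}  # session_id -> list of HumanMessage/AIMessage objects

@app.route('/chat', methods=['POST'])
def chat():
    # 'session_id' is a hypothetical field the frontend would need to send
    session_id = request.json.get('session_id', 'default')
    user_message = (request.json.get('message') or '').strip()
    if not user_message:
        return jsonify({"response": "Please type something!"}), 200

    history = session_histories.setdefault(session_id, [])
    response_content = chain_with_memory.invoke({
        "input": user_message,
        "chat_history": history
    })
    history.append(HumanMessage(content=user_message))
    history.append(AIMessage(content=response_content))
    return jsonify({"response": response_content})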

Checking Your Chatbot’s Smarts (Testing Conversation Flow)

You’ve built a chatbot, connected it to a brain, given it memory, and even put a face on it! But how do you know if it’s actually smart and helpful? This is where testing comes in. Testing is like trying out your new toy to make sure all its buttons work and it does what it’s supposed to do.

Why Testing is Super Important

Testing helps you find problems before your users do. It ensures your chatbot understands different types of questions, remembers past conversations, and gives appropriate answers. Without testing, your chatbot might make mistakes, give irrelevant information, or even crash. Regular testing is crucial to building a chatbot that is reliable.

Imagine building a car without testing the brakes! It would be very dangerous. Similarly, a chatbot that hasn’t been tested thoroughly can frustrate users and provide bad experiences. It helps you improve your chatbot over time.

How to Talk to Your Chatbot and See If It Works

The interactive console script (interactive_chatbot.py) and the Flask web interface we just built are both great ways to manually test your chatbot. Just by talking to it, you can get a feel for its capabilities.

Here are some things to try when testing:

  • Simple Questions: “What is the capital of Japan?”
  • Follow-up Questions (using memory): “What is its population?” or “Tell me a fun fact about it.”
  • Complex Questions: “Can you explain the theory of relativity in simple terms for a child?”
  • Edge Cases: Questions that might be tricky or ambiguous (we’ll cover these next).
  • Different Phrasing: Ask the same question in several different ways to see if the bot still understands.
  • Off-topic Questions: See how it responds to things outside its intended scope.

As you test, keep a notebook (digital or physical) and jot down:

  • The question you asked.
  • The chatbot’s exact response.
  • Whether the response was good, bad, or okay.
  • Why it was good or bad (e.g., “forgot previous context,” “gave too much detail,” “was irrelevant”).

This feedback helps you improve your prompts, adjust your model’s temperature, or even add new features to your chain.

Looking for Common Mistakes

When testing, watch out for these common issues:

  • Hallucinations: The chatbot makes up facts or information that isn’t true. This can be a major problem, especially for factual queries.
  • Repetitive Answers: The chatbot keeps saying the same thing or similar phrases.
  • Forgetting Context: It doesn’t remember what was said just a few turns ago, even with memory enabled.
  • Irrelevant Responses: The answer has nothing to do with the question.
  • Offensive Language: Though less common with modern safety-trained LLMs, always check for inappropriate or biased responses.
  • Getting Stuck: The chatbot enters a loop or gives a generic “I don’t understand” without trying to clarify.
  • Slow Responses: The chatbot takes too long to reply, which can frustrate users.

If you find these mistakes, don’t worry! It’s normal during development. You can often fix them by:

  • Refining your system prompt: Make instructions clearer, add constraints.
  • Adjusting temperature: Lower for more factual, higher for more creative.
  • Checking memory configuration: Ensure chat_history is being passed correctly.
  • Reviewing the model: Sometimes a different model might perform better for your specific task.
  • Adding safety layers: Extra code to filter harmful content.

For more complex applications, you might even write automated tests using Python’s unittest or pytest frameworks. These tests can send predefined questions to your chatbot and automatically check if the responses match what you expect. This is a more advanced technique but very valuable for maintaining a high-quality chatbot over time; a small sketch follows.
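
As an illustration, here is one way such a test could look with pytest. It swaps the real model for LangChain’s built-in FakeListChatModel so the test runs instantly, deterministically, and without an API key:

# test_chain.py - run with: pytest test_chain.py
from langchain_core.language_models import FakeListChatModel
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

def test_chain_returns_expected_response():
    # The fake model returns canned responses in order, instead of calling an API
    fake_model = FakeListChatModel(responses=["Paris is the capital of France."])
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("user", "{question}")
    ])
    chain = prompt | fake_model | StrOutputParser()
    result = chain.invoke({"question": "What is the capital of France?"})
    assert "Paris" in result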

What If? Dealing with Tricky Situations (Handling Edge Cases)

Even the smartest chatbot can get confused sometimes. “Edge cases” are those unusual or unexpected inputs from users that can make your chatbot stumble. Handling them gracefully is what separates a good chatbot from a frustrating one. It’s about teaching your chatbot to be robust, even when facing weird questions.

When Your Chatbot Might Get Confused

Imagine someone asking your “Professor Spark” chatbot: “How many spoons are in a rainbow?” This is a nonsensical question, and a good chatbot shouldn’t try to answer it literally. Other confusing inputs might include:

  • Ambiguous Questions: “Tell me about ‘it’.” (If “it” hasn’t been defined).
  • Out-of-Scope Questions: Asking a science chatbot for movie recommendations.
  • Very Long/Complex Inputs: A user types a whole paragraph of unrelated thoughts.
  • Nonsensical/Gibberish Input: Just random letters or symbols.
  • Requests for Personal Information: “What’s your name? Where do you live?”

Without special handling, your chatbot might generate a silly answer, a generic “I don’t know,” or even an error. Preparing for these situations is key to a chatbot that feels intelligent and user-friendly.

How to Help Your Chatbot Understand Strange Questions

You can build special “guardrails” around your LangChain to deal with edge cases. These can be done through:

  • Prompt Engineering: The easiest and often most effective method.
  • Pre-processing/Post-processing: Adding Python code before or after the LangChain chain.
  • Specialized Chains: Using LangChain’s more advanced features like “routing” or “tool-use.”

1. Prompt Engineering for Edge Cases:

You can add instructions to your system prompt to guide the chatbot on how to handle difficult questions.

Let’s refine Professor Spark’s prompt in app.py:

# app.py (updated system prompt)

# ... (previous setup) ...

prompt_template_with_memory = ChatPromptTemplate.from_messages([
    ("system", """You are 'Professor Spark', a wise and enthusiastic science teacher for 10-year-olds.
    Your goal is to explain complex science topics in an easy-to-understand, fun, and encouraging way.
    Always use analogies and simple words. End your responses with a fun fact or a challenge question!

    Important Rules:
    - If the question is not related to science or doesn't make sense, politely say you can only discuss science topics.
    - If the question is too vague, ask for clarification.
    - Never make up facts. If you don't know the answer, say so.
    """), # Added rules for edge cases
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}")
])

# ... (rest of the Flask app code) ...

Now, if you ask “Professor Spark” about a movie, it should politely redirect you to science topics. This is a simple yet powerful way to manage common edge cases.

2. Pre-processing for Specific Inputs:

You can add Python code before calling your LangChain chain to detect specific problematic inputs.

# app.py (in the /chat route, after the empty-input check)

    # ... (previous checks for empty input) ...

    # Check for direct queries about itself (chatbot identity)
    if any(keyword in cleaned_input.lower() for keyword in ["who are you", "what is your name", "what do you do"]):
        return jsonify({"response": "I am Professor Spark, a friendly science teacher designed to help you learn!"}), 200

    # Add more specific checks here if needed, e.g., for known offensive phrases.
    # If a dangerous input is detected, you could respond with a safety message.

    # ... (rest of the LangChain invocation) ...

This pre-processing can provide quick, hardcoded answers for very common questions that don’t need the full power of the LLM, or to block certain types of input.

3. Advanced LangChain Routing (for 2026):

For more complex edge cases, especially if your chatbot needs to do different things based on the input, LangChain’s “routing” capabilities become very useful. You can have a small AI model (a “router”) first decide what kind of question the user is asking.

For example:

  • If it’s a science question, send it to “Professor Spark.”
  • If it’s a greeting, send it to a “greeting chain” that just says “Hello!”
  • If it’s off-topic, send it to an “off-topic handler” that politely redirects.

This involves building multiple chains and using a RunnableBranch or similar LangChain construct. It’s a more advanced topic but highly relevant for building a robust chatbot; a minimal sketch follows. For detailed examples, refer to the LangChain documentation on routing.
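
Here is a deliberately simple sketch of the idea. Keyword checks stand in for the router (in practice a small LLM classifier could make the decision), and the greeting and off-topic handlers are just plain functions:

# A minimal routing sketch with RunnableBranch; plain functions act as tiny "chains".
from langchain_core.runnables import RunnableBranch

def greeting_chain(x):
    return "Hello! Ask me a science question to get started."

def off_topic_chain(x):
    return "I can only chat about science, but I'd love to help with that!"

# science_chain would be the full "Professor Spark" chain from earlier
science_chain = chain_with_memory

router = RunnableBranch(
    (lambda x: x["input"].lower().startswith(("hi", "hello", "hey")), greeting_chain),
    (lambda x: "movie" in x["input"].lower(), off_topic_chain),
    science_chain,  # default branch for everything else
)

print(router.invoke({"input": "Hello!", "chat_history": []}))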

Launching Your First Chatbot (Deployment)

You’ve built and tested your chatbot, and it’s doing great! Now, you probably want to share it with friends, family, or even the whole internet. This step is about taking your chatbot from your computer to a place where others can access it. This process is called “deployment.”

How to Put Your Chatbot Online for Others to Use

When you run python app.py on your computer, your Flask web server is only visible to you (at http://127.0.0.1:5000/). To make it available to others, you need to host it on a public server. There are many “cloud providers” that offer this service.

Popular choices include:

  • Heroku: Easy to start with for smaller projects, though their free tier has changed.
  • Render: A modern alternative often praised for its developer experience and free tiers for web services.
  • Google Cloud Platform (GCP), Amazon Web Services (AWS), Microsoft Azure: Powerful, but can be more complex and costly for beginners.
  • Vercel / Netlify: Excellent for static frontends, but can also host serverless functions which might wrap your chatbot logic.

For a simple Flask app like ours, a service like Render or a basic Virtual Private Server (VPS) would be a good starting point. You would typically need to:

  1. Prepare your code: Make sure all dependencies are listed in a requirements.txt file (pip freeze > requirements.txt); see the example after this list.
  2. Create a server entry point: For Flask, app.py usually serves this purpose. You might use a production-ready web server like Gunicorn (pip install gunicorn).
  3. Choose a cloud provider: Sign up and create a new web service.
  4. Connect your code: Link your cloud account to your project’s code repository (like GitHub).
  5. Configure deployment: Tell the cloud provider how to run your Python app (e.g., gunicorn app:app).
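
For this project, a hand-written requirements.txt would look roughly like the following (pip freeze will pin exact versions for you, which is usually what you want in production):

# requirements.txt (unpinned for readability; pip freeze adds exact versions)
flask
gunicorn
langchain
langchain-openai
langchain-community
python-dotenv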

Let’s assume you’re deploying to Render for this example, as it’s user-friendly. You would:

  1. Push your project (all files: app.py, templates/index.html, .env, requirements.txt) to a GitHub repository.
  2. Go to render.com, sign up, and create a “New Web Service.”
  3. Connect to your GitHub repo.
  4. Configure the build command (e.g., pip install -r requirements.txt) and start command (e.g., gunicorn app:app).
  5. Set your OPENAI_API_KEY as a secret environment variable in Render’s dashboard (never commit .env to GitHub!).

Render will then automatically build and deploy your chatbot. Once it’s deployed, Render will give you a public URL where anyone can access your chatbot. This is a big milestone: your chatbot is now out in the world!

Keeping Your Chatbot Updated and Safe

Launching your chatbot isn’t the end; it’s just the beginning! Here are important things to consider for ongoing maintenance:

  • Monitoring: Keep an eye on your chatbot’s performance. Is it crashing? Are users complaining about bad answers? Cloud providers often offer tools to monitor your application’s health.
  • Updates: LangChain and the underlying AI models are constantly improving. Keep your langchain, langchain-openai, and other libraries updated by periodically running pip install --upgrade <package_name>. Regularly review the LangChain documentation for new features and best practices.
  • Security:
    • API Keys: Never hardcode API keys in your code or commit them to public repositories. Always use environment variables (like with .env and os.getenv()).
    • Rate Limits: Be aware of the usage limits of AI models. If your chatbot gets too popular, you might hit these limits, leading to errors or higher costs. Plan for scaling.
    • Input Validation: Continue to refine input handling to prevent “prompt injection” (where users try to trick the AI into doing something it shouldn’t); a simple sketch follows this list.
  • Costs: Using AI models incurs costs. Monitor your API usage on platforms like OpenAI to avoid unexpected bills. Start with smaller, cheaper models if possible, and scale up as needed.
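
As a first, illustrative layer of defense (real prompt-injection protection is an open problem and needs more than this), you might cap input length and reject obvious override attempts:

# A simple, illustrative input guard; treat it as a first layer only.
SUSPICIOUS_PHRASES = ["ignore previous instructions", "ignore all instructions"]

def validate_input(user_input, max_length=500):
    """Return a cleaned input string, or None if the input should be rejected."""
    cleaned = user_input.strip()
    if not cleaned or len(cleaned) > max_length:
        return None
    if any(phrase in cleaned.lower() for phrase in SUSPICIOUS_PHRASES):
        return None
    return cleaned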

By following these steps, you not only launch your first chatbot but also ensure it remains a reliable and helpful assistant. Congratulations on taking your chatbot from an idea to a live application!

Looking Ahead: The Future of Chatbots in 2026 and Beyond

The world of chatbots and AI is moving incredibly fast. What was cutting-edge last year is standard today. In 2026, LangChain and the underlying AI models will likely offer even more amazing capabilities. You’ve learned the fundamental steps, but here’s a peek at what might be next.

You might see more “multimodal” chatbots that can not only understand text but also see images, hear sounds, and even generate videos. LangChain is constantly evolving to integrate these new types of AI, allowing you to build chatbots that interact with the world in richer ways. Think of a chatbot that can analyze a photo you take and answer questions about it.

The ability for chatbots to use “tools” will also become even more sophisticated. Your chatbot could automatically search the web for the latest information, book appointments, or even control smart home devices, all through natural conversation. LangChain is at the forefront of enabling these “agentic” capabilities, which will make the chatbots you build even more powerful and versatile.

Finally, the focus on “personalization” will grow. Chatbots will learn more about individual users over time, remembering preferences and tailoring interactions to be uniquely helpful for each person. This could mean highly specialized assistants that truly understand your needs and context. The foundational skills you’ve learned here will be invaluable for exploring these exciting future possibilities.

Conclusion

You’ve made an incredible journey, from setting up your computer to deploying your very own LangChain chatbot! You learned how to prepare your workspace, install the necessary tools, connect to a powerful AI brain, and guide its conversation using chains and memory. You also learned how to make your chatbot listen, talk back, and even how to connect it to a web page so others can use it.

Remember that building chatbots is a fun and continuous learning process. The world of AI is always changing, and there’s always something new to learn. Don’t be afraid to experiment with different prompts, explore new LangChain features, or try connecting to different AI models. The steps you’ve mastered today are the building blocks for countless creative projects.

Keep practicing, keep building, and soon you’ll be creating even more amazing conversational AI experiences. Your first chatbot is just the beginning of what you can achieve! Happy chatting!
