Building a Multi-Agent Chatbot with LangGraph: A Collaborative AI Approach
Introduction
In the rapidly progressing world of Generative AI, large language models (LLMs) have shown broad capability across nearly every aspect of human life, from healthcare to finance: they enable faster decision-making, automate complex processes, and provide intelligent insights, ultimately redefining how humans interact with technology and access information.
Despite their broad knowledge, these models still have limitations in autonomy, specialization, and decision-making. In AI development, we often picture a single powerful model handling every task: answering questions, making recommendations, or generating content. However, as AI applications grow more complex, this “one-size-fits-all” approach becomes inefficient. Instead, we need multi-agent systems, where different specialized AI agents collaborate to solve tasks more effectively.
Now what are AI agents? An AI agent is an autonomous system that perceives its environment, processes information, and takes actions to achieve specific goals. Unlike traditional AI models that simply respond to user queries, agents are designed to make decisions, interact with tools or other agents, and execute complex tasks.
What Are Multi-Agent Systems?
A multi-agent system (MAS) consists of multiple AI agents that communicate, collaborate, or compete to achieve a goal. These agents can be independent or interdependent, working together in structured workflows. In the context of LLM-based AI applications, multi-agent systems improve:
- Task Specialization – Each agent focuses on a specific function, leading to more accurate and efficient responses.
- Parallel Processing – Multiple agents can operate simultaneously, reducing response times.
- Dynamic Adaptation – Agents can communicate and decide which one is best suited to handle a task.
- Scalability – As new tasks emerge, new agents can be introduced without redesigning the entire system.
Imagine planning a trip: You need an AI to find flights, another to create an itinerary, and maybe even one to suggest hotels or restaurants. If a single model tries to handle everything, it may struggle with task specialization, structured outputs, and decision-making. Instead, multi-agent systems break down complex problems into smaller, more manageable components, allowing each agent to focus on a specific role while working together as a team.
How does LangGraph fit into this?
Building a multi-agent system from scratch is complex—it requires orchestration, state management, and routing logic. LangGraph, an extension of LangChain, simplifies this by providing a framework for defining structured agent workflows using StateGraph.
LangGraph allows you to:
- Define multiple AI agents, each with specialized responsibilities.
- Create decision-making flows, so user inputs dynamically route to the right agent.
- Maintain state and conversation memory, ensuring agents work coherently.
- Design interruptible and flexible systems, allowing human intervention when needed.
Prerequisites
- Familiarity with Python, LLMs, and prompt engineering.
- An OpenAI API key with access to GPT-4o.
- A SerpAPI API key.
Installing Required Libraries
Before we begin, create a Python virtual environment and install the following required libraries.
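A minimal setup sketch follows; the exact package set is an assumption based on the imports used later in this post (`google-search-results` is SerpAPI's official Python client), so adjust it to your own project.

```shell
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate

# Core packages used throughout this tutorial
pip install langgraph langchain langchain-openai google-search-results

# API keys expected by the code below
export OPENAI_API_KEY="your-openai-key"
export SERPAPI_API_KEY="your-serpapi-key"
```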
Overview of the Multi-Agent Chatbot
We will create a trip planning chatbot that can help users plan an itinerary for their trip and suggest the best available flights to them. In our trip planner, we use LangGraph to orchestrate a multi-agent workflow, ensuring the right agent handles the right task at the right time. Instead of a single AI attempting to answer everything, our system intelligently routes user queries to specialized agents, making the system more scalable, efficient, and modular. This means that if we want to add functionality like car rentals or hotel bookings later, we can simply add agents specialized for those tasks.
Key components
- Chatbot : This agent supervises all the other agents and is the main assistant the user interacts with. It gathers the user's requirements and is responsible for collecting information from the user to dispatch to the appropriate agents.
- Itinerary agent : This agent specializes in creating structured travel itineraries based on user preferences. It generates a day-by-day plan, including activities, accommodations, meal suggestions, and transportation details.
- Flight agent : This agent extracts all relevant information from the chat, uses an external API (SerpAPI) to fetch real-time flight information, and presents the best available flights to the user.
This multi-agent framework follows a supervisor architecture. Here is a diagram to explain the flow better –
Here is how this chatbot's workflow proceeds –
1. The user starts the conversation, and the Chatbot determines the nature of the request.
2. If the user needs an itinerary, the Itinerary Agent generates a personalized plan.
3. If the user asks for flight details, the Flight Agent extracts and processes the information.
4. If any clarifications are required, the conversation is routed to the Human Node for human feedback, ensuring a smooth user experience.
5. Once all details are collected and confirmed, the chatbot presents the final structured itinerary and flight details.
Let’s dive into the implementation of this chatbot.
Diving into the Code
1. Initialize an LLM
We first initialize an LLM that will be used by our chatbot and agents. For this we will use OpenAI’s GPT-4o; in our testing it had the best performance on agentic tasks.
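A minimal sketch of the initialization, assuming `OPENAI_API_KEY` is set in your environment; `temperature=0` is our choice here to keep routing and extraction behaviour predictable.

```python
from langchain_openai import ChatOpenAI

# Reads OPENAI_API_KEY from the environment.
# temperature=0 makes routing and structured extraction more deterministic.
llm = ChatOpenAI(model="gpt-4o", temperature=0)
```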
2. Creating the agents
First we create a state class that the agents will use to keep track of the conversation. When defining a graph, the first step is to define its State. The State includes the graph’s schema and the reducer functions that handle state updates. The schema of the State is the input schema for all Nodes and Edges in the graph, and can be either a TypedDict or a Pydantic model. All Nodes emit updates to the State, which are then applied using the specified reducer function. We use MessagesState to keep track of messages in our state.
- Itinerary Agent
Now we create the itinerary agent. This agent directly calls the LLM with the system prompt for itinerary planning and user-provided travel details. It returns a single response from the LLM that contains the generated itinerary.
- Flight Agent
Next we create a flight information agent. This is a ReAct-style agent (Reason + Act), meaning it can decide whether it needs to use a tool to answer the user’s query about flights.
First we set up a tool for this agent to use, which can extract relevant information from user query and structure it in a way that we can use an external API (SerpAPI) to retrieve real-time flight information.
This process involves creating a system prompt explaining which parameters to extract for the API; defining a class that inherits from TypedDict for the LLM’s structured output, so we get consistent JSON parameters; initializing the LLM with this structured output; and creating a function that invokes the LLM, extracts the parameters, and calls the flight API. We apply the tool decorator to this function so it can be used by a ReAct agent.
Now, unlike the itinerary agent, this is a ReAct agent with the ability to execute tool calls. For this we will use LangGraph’s pre-built ReAct agent function to initialize our agent. This function takes in an LLM, the tool, and a prompt to guide the response.
3. Creating the Supervising Chatbot
This is the brain of our multi-agent system. It is not only the user’s point of interaction, but it also decides whether to route the conversation to the itinerary agent, the flight agent, the human user, or to “finish” the conversation.
It takes the current conversation plus the chatbot instructions and requests a structured response from the LLM using the Router schema. This schema ensures that the LLM always responds with both a user-facing message and the next agent to route to.
This chatbot is set up as a Node of a graph. Nodes represent units of work and are typically regular Python functions. You will also notice that we return a Command. The Command object gives us control over an agent’s state as well as routing, and it helps pass information and messages between agents.
4. Creating the Agent Nodes
Now, as above, we will create nodes using our previously created agents. Like the chatbot, each node returns a Command object that directs the message back to the chatbot node for validation and response generation.
5. Setting up Human Input
We will set up another node called human_interrupt. This will be used to temporarily return control to a human user for clarification or additional input. After receiving the user input, it routes back to the chatbot node with the updated messages.
In practice, we will have to set up these user inputs in a different way based on our frontend.
6. Building the StateGraph
With all our nodes defined, we will now use a StateGraph to combine these elements into a multi-agent system. A StateGraph object defines the structure of our chatbot as a “state machine”. We’ll add nodes to represent the agents and functions our chatbot can call, and edges to specify how the bot should transition between them.
We define the chatbot as the starting point of our conversation.
7. Running the Chatbot
We now run this chatbot. We start with a simple user input, which makes the flow of conversation feel natural. We only print the output of the chatbot, flight agent, and itinerary agent, as we do not want the user to see the inner routing and raw API responses.
Here is an example conversation with the chatbot helping a user plan a 5-day trip to Paris.
Conclusion
The above structure forms a robust, multi-agent chatbot using LangGraph. Each agent focuses on a specialized task—planning itineraries, fetching flight information, or simply passing control back to a human. The chatbot_node acts as the conversation orchestrator, making high-level decisions on where to route user requests. By leveraging LangGraph’s StateGraph, we maintain a clean separation of responsibilities and have a clear, visualizable workflow for development and debugging.
This modular design makes the system easy to extend:
- Adding a Hotel Agent? Just introduce a new node and route to it from chatbot_node.
- Human-in-the-loop checks? The human_interrupt node already demonstrates how to re-inject human input.
We have now seen how, by using multi-agent architectures, we move beyond the limitations of a single LLM and create AI ecosystems where specialized agents collaborate to solve tasks efficiently. Agents don’t just generate responses—they perceive, reason, take actions, and interact with external tools and other agents. This agentic approach makes AI more dynamic, structured, and capable of handling real-world complexities.