End to End Intuitive Guide to Building AI Agents Using LangGraph
Power of AI Agents
AI agents are transforming technology as we know it, from streaming applications to smart devices, by automating repetitive and complex workflows with intelligent AI automation tools. AI agents are no longer being developed as novelties; they are becoming critical components of digital productivity, workflow orchestration, and intelligent decision making. For developers, data scientists, and even users who never touch code, knowing how to build and use an agent-based system is more important now than ever.
LangGraph is lowering the barrier to creating agents powered by AI. Where creating agents once required deep knowledge of machine learning and advanced programming, LangGraph now provides a simple, modular AI framework. It allows beginners and professionals alike to design, build, customize, and run AI agents of varying complexity without writing complicated code.
This guide is a practical introduction to LangGraph, from understanding state-driven AI design to setting up your environment, integrating external tools, and building functional AI workflows. By the end of this guide, you will have hands-on skills and the confidence to tackle real AI projects, improve efficiency, streamline processes, and build intelligent applications that can scale.
What is an AI agent?
- AI agents are autonomous software systems that perceive their environment, make independent decisions, and take goal-directed actions without constant human intervention
- Unlike traditional software that follows fixed rules, AI agents use artificial intelligence techniques like machine learning and natural language processing to adapt their behavior in real-time based on changing conditions
- They combine reasoning capabilities with memory and learning functions, enabling them to plan multi-step workflows, collaborate with other agents, and continuously improve their performance over time
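The perceive-decide-act loop these bullets describe can be sketched in a few lines of plain Python. Everything below (the environment dict, the thermostat-style rules) is invented purely for illustration:

```python
# Toy sketch of the agent loop: perceive -> decide -> act, with memory.
# The "environment" and decision rules here are made up for illustration only.

def perceive(environment):
    """Read the current observation from the environment."""
    return environment["temperature"]

def decide(observation, memory):
    """Choose a goal-directed action based on the observation and past context."""
    memory.append(observation)  # remember what we saw
    return "cool" if observation > 25 else "idle"

def act(action, environment):
    """Apply the chosen action back to the environment."""
    if action == "cool":
        environment["temperature"] -= 5
    return environment

memory = []
env = {"temperature": 32}
for _ in range(3):  # a real agent runs this loop continuously
    obs = perceive(env)
    action = decide(obs, memory)
    env = act(action, env)

print(env["temperature"], memory)  # the agent cooled the room, then idled
```

A real agent replaces `decide` with an LLM call or learned policy, but the loop structure stays the same.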
Some Real World Examples
- Customer service automation: Bank of America’s Erica resolves 80% of customer queries autonomously, handling account management, fraud detection, and personalized financial guidance
- Healthcare and manufacturing: Hospitals deploy AI agents to monitor ICU patients and predict sepsis early, while factories use them for predictive maintenance, reducing downtime by automatically scheduling repairs during low-production hours
- Supply chain and logistics optimization: Walmart uses AI agents across 1.5 million U.S. associates to automate shift planning and inventory management, while companies like AES reduced energy audit costs by 99% and cut processing time from 14 days to one hour
Current Impact & Future
- Economic impact and competitive advantage: McKinsey projects that AI agents could deliver up to $4.4 trillion annually in economic value by 2030, with 85% of enterprises already using them to drive efficiency and innovation
- Workforce transformation and productivity: Companies report up to 90% reduction in operational costs for routine tasks, allowing human workers to focus on creative and strategic activities rather than repetitive processes
- Market growth and business necessity: The AI agent market is projected to grow from $5.1 billion in 2024 to $47.1 billion by 2030, making them essential for businesses to remain competitive in an increasingly automated digital landscape
Understanding LangGraph
- LangGraph, built on LangChain, is a framework for creating AI agents that have memory and can work together. It organizes tasks using a graph-based setup.
- It combines basic components with ready-to-use parts, making it easy to add both short-term and long-term memory, include human input, stream information in real time, and set everything up using LangGraph Platform and Studio visual IDE.
- This framework supports various agent control methods, like single-agent, multi-agent, step-by-step, and group workflows. It includes error handling, automatic state saving, and the option to pause and resume at any point.
Main Concepts
- Nodes: Nodes are working units that carry out actions such as calling LLMs, running functions, or deciding next steps. Each node receives the current state, does its job, and sends an updated state to the next node.
- Edges: Edges connect nodes and manage the flow of execution. They can be direct or conditional, routing based on specified logic. They also validate data before it moves on.
- State: State is a shared data structure that acts as the workflow’s memory. It moves through the graph, carrying information between nodes, which enables automatic error recovery, maintains context across sessions, and allows all graph components to work together.
Building Your First AI Agent
Below is a guide to preparing your first LangGraph-based agent architecture. We start with a very basic version so that we can understand the nuts and bolts of LangGraph. We use Groq, which provides free API usage up to certain limits. The simple example below asks a Groq-hosted Llama model for the current weather in a given city; since the model has no live weather data, it will not return anything material. The flow chart shows this basic agent's nodes and edges.
# ==============================
# 🔹 Imports and Environment Setup
# ==============================
import os
from langgraph.graph import StateGraph, END  # LangGraph's graph-building API
from langchain_groq import ChatGroq  # Groq's LLM integration for LangChain

# Set Groq API key as environment variable (replace with your own key)
os.environ["GROQ_API_KEY"] = "your-groq-api-key-here"

# ==============================
# 🔹 LLM Initialization
# ==============================
# Initialize the Groq Chat model with:
# - model: Groq's LLaMA 70B distilled version
# - temperature: 0 (deterministic outputs)
# - max_tokens: capped at 100 for concise responses
llm = ChatGroq(
    model="deepseek-r1-distill-llama-70b",
    temperature=0,
    max_tokens=100
)

# ==============================
# 🔹 Node Definitions
# ==============================
# (1) Input Node
# Simulates user input by injecting a city name ("San Francisco") into the agent's state.
def input_node(state):
    state["city"] = "San Francisco"  # In real-world use, this would be dynamic (e.g., user query)
    return state

# (2) Groq Weather Node
# Queries the Groq LLM to generate a weather report for the given city.
def groq_weather_node(state):
    city = state.get("city", "unknown location")
    # Construct prompt for the LLM
    prompt = f"What is the weather like today in {city}?"
    # Query Groq's LLM
    response = llm.invoke([{"role": "user", "content": prompt}])
    # Extract the weather description from the response
    state["weather_report"] = response.content
    return state

# (3) Output Node
# Prints the AI-generated weather report to the console.
def output_node(state):
    print("AI-generated Weather Report:", state.get("weather_report"))
    return state

# ==============================
# 🔹 Graph Construction
# ==============================
# Build the LangGraph pipeline:
# input → groq_weather → output
graph = StateGraph(dict)  # our state is a plain dictionary
graph.add_node("input", input_node)
graph.add_node("groq_weather", groq_weather_node)
graph.add_node("output", output_node)

# Define flow between nodes
graph.set_entry_point("input")
graph.add_edge("input", "groq_weather")
graph.add_edge("groq_weather", "output")
graph.add_edge("output", END)

# Compile into an executable graph
app = graph.compile()

# ==============================
# 🔹 Execution
# ==============================
# Run the graph starting from the entry point
if __name__ == "__main__":
    print("Running Weather Info Agent with Groq LLM...")
    app.invoke({})
    print("Agent run complete.")
Customize Your AI Agent
We will now expand the AI agent to (a) actually fetch the current weather for a given city and (b) recommend clothing based on the response.
# ------------------------------------------------
# Weather Agent with Clothing Recommendation Layer
# ------------------------------------------------
import os
from langgraph.graph import StateGraph, END
from langchain_groq import ChatGroq  # Groq's LLM integration

# Set Groq API Key (replace with your real key or use an env var)
os.environ["GROQ_API_KEY"] = "your-api-key"

# Initialize Groq LLM
llm = ChatGroq(
    model="compound-beta",
    temperature=0,
    max_tokens=150
)

# -------------------------------
# Define State (shared between nodes)
# -------------------------------
def initial_state(city: str):
    return {"city": city, "weather_report": None, "clothing_advice": None}

# -------------------------------
# Define Node Functions
# -------------------------------
# 1. Input Node
def input_node(state):
    # City already passed in initial_state
    print(f"Fetching weather for {state['city']}...")
    return state

# 2. Weather Query Node
def groq_weather_node(state):
    city = state.get("city", "unknown location")
    prompt = f"What is the weather like today in {city}?"
    response = llm.invoke([{"role": "user", "content": prompt}])
    state["weather_report"] = response.content
    return state

# 3. Clothing Recommendation Node
def clothing_recommendation_node(state):
    weather = state.get("weather_report", "unknown weather")
    prompt = f"The weather report is: '{weather}'. Based on this, recommend suitable clothes to wear."
    response = llm.invoke([{"role": "user", "content": prompt}])
    state["clothing_advice"] = response.content
    return state

# 4. Output Node
def output_node(state):
    print("\n--- Final Output ---")
    print("Weather Report:", state.get("weather_report"))
    print("Clothing Advice:", state.get("clothing_advice"))
    return state

# -------------------------------
# Build Graph with LangGraph
# -------------------------------
workflow = StateGraph(dict)  # our state is a dictionary

# Add nodes
workflow.add_node("input", input_node)
workflow.add_node("groq_weather", groq_weather_node)
workflow.add_node("clothing", clothing_recommendation_node)
workflow.add_node("output", output_node)

# Define flow
workflow.set_entry_point("input")
workflow.add_edge("input", "groq_weather")
workflow.add_edge("groq_weather", "clothing")
workflow.add_edge("clothing", "output")
workflow.add_edge("output", END)

# Compile into executable graph
app = workflow.compile()

# -------------------------------
# Run Agent
# -------------------------------
if __name__ == "__main__":
    user_city = input("Enter city name: ")  # <-- user provides city
    print("Running Weather + Clothing Advice Agent with Groq LLM...")
    final_state = app.invoke(initial_state(user_city))
    print("Agent run complete.")
[[file:Ai_agent_flow.jpg|650px]]
Sample Output
<pre>
Enter city name: Delhi
Running Weather + Clothing Advice Agent with Groq LLM...
Fetching weather for Delhi...
--- Final Output ---
Weather Report: The weather like today in Delhi is warm and humid with a chance of rain. The maximum temperature is expected to be around 30-35°C (86-95°F), while the minimum temperature is around 25-27°C (77-81°F). There is a possibility of one or two spells of very light to light rain, with a high humidity level of around 80-90%. The wind speed is expected to be around 5-10 km/h, and the atmospheric pressure is around 1002-1005 mbar. Overall, it is expected to be a warm and humid day with a chance of rain in Delhi today.
Clothing Advice: Based on the weather report for Delhi, which indicates a warm and humid day with a chance of rain, the suitable clothes to wear would be:
**Lightweight, light-colored, and loose-fitting cotton clothes** that allow for good airflow and help keep you cool. Additionally, consider wearing **moisture-wicking fabrics** that can help manage sweat and keep you dry in the high humidity.
Given the possibility of **one or two spells of very light to light rain**, it would be a good idea to carry or wear something **waterproof or water-resistant**, such as an **umbrella** or a **lightweight rain jacket**. However, since the rain is expected to be very light to light, you may not need to wear heavy or bulky rain gear.
The **warm temperatures**, ranging from 25-27°C (77-81°F) to 30-35°C (86-95°F), and the **high humidity level of around 80-90%** suggest that you should prioritize breathable and moisture-wicking clothing to stay comfortable throughout the day.
The **low wind speed of around 5-10 km/h** and the **atmospheric pressure of around 1002-1005 mbar** do not have a significant impact on clothing choices, but they do contribute to the overall warm and humid weather conditions.
Overall, the recommended clothing for a warm and humid day in Delhi with a chance of light rain would be:
* Lightweight, light-colored, and loose-fitting cotton clothes
* Moisture-wicking fabrics
* Umbrella or lightweight rain jacket (optional)
By wearing these types of clothes, you can stay cool, dry, and comfortable despite the warm and humid weather, and be prepared for any potential light rain showers.
Agent run complete.
</pre>
Testing and Debugging
Out-of-the-Box Debugging Tools in LangGraph
State Inspection (Print / Log State): Each node receives a state dict and updates it. Use print() or structured logging (e.g., Python's logging module or the rich pretty printer) inside each node to see which keys are being read and written:
def weather_node(state):
    print("DEBUG before weather node:", state)
    state["weather_report"] = call_weather_api(state["city"])
    print("DEBUG after weather node:", state)
    return state
Graph Visualization: LangGraph can render the compiled graph (for example, via app.get_graph().draw_mermaid()). This is useful for checking that your edges, branches, and loops are wired together correctly.
Dry Run Mode: Run the graph with mocked nodes (that is, nodes that just return dummy outputs like "Sunny, 25°C" instead of calling APIs/LLMs) to make sure the nodes are correctly wired together. This separates the flow logic from the unpredictability of the model.
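As a concrete, purely illustrative dry-run sketch, the stubs below stand in for the weather and clothing nodes from the example above and return canned values, so only the flow logic is exercised:

```python
# Dry-run sketch: stub nodes return canned values instead of calling APIs/LLMs.
# State keys match the weather example; the canned strings are invented.

def stub_weather_node(state):
    state["weather_report"] = "Sunny, 25°C"  # dummy output, no API call
    return state

def stub_clothing_node(state):
    state["clothing_advice"] = "Wear light clothes"  # dummy output, no LLM call
    return state

# Chain the stubs directly (or register them via workflow.add_node in place
# of the real nodes) to verify the state flows end to end.
state = {"city": "Delhi"}
for node in (stub_weather_node, stub_clothing_node):
    state = node(state)

print(state)
```

If the final dict has the expected keys, the wiring is correct and any remaining bugs live in the real node implementations.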
Tracing and Telemetry: Use LangSmith (LangChain's debugging/tracing platform) or OpenTelemetry to collect detailed step-by-step traces. This helps you understand where the state diverges from your expectations.
Breakpoints: You can add conditional guards:
if "weather_report" not in state:
raise ValueError("Weather report missing!")
This way, if something goes wrong later on, it won't fail silently.
Testing Agent Behaviors
Unit Testing Nodes: Each node is basically a Python function, so test it as you normally would:
def test_weather_node():
state = {"city": "London"}
out = weather_node(state)
assert "weather_report" in out
Snapshot Testing State Evolution: Feed a single input into the graph and check that the final state after the run meets your expectations. With a real LLM the output is nondeterministic, so pin the expected value by mocking the model first:
final_state = graph.invoke({"city": "Tokyo"})
assert final_state["clothing_advice"] == "Wear light clothes"
Mocking LLMs: Rather than hitting the Groq/LLM API from every test case, mock responses using Python's unittest.mock or a fake function. This also gives you deterministic tests.
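A minimal sketch of this mocking approach, assuming the node reads llm.invoke(...).content as in the examples above (the canned reply is invented):

```python
# Mock the LLM so the node test is deterministic and runs offline.
from unittest.mock import MagicMock

llm = MagicMock()
llm.invoke.return_value = MagicMock(content="Rainy, 18°C")  # canned reply

def groq_weather_node(state):  # same shape as the real node in this guide
    prompt = f"What is the weather like today in {state['city']}?"
    state["weather_report"] = llm.invoke([{"role": "user", "content": prompt}]).content
    return state

out = groq_weather_node({"city": "London"})
assert out["weather_report"] == "Rainy, 18°C"  # no network call was made
print("mocked node test passed")
```

The same MagicMock can be injected into a compiled graph by registering the node function that closes over it.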
Edge Case Testing: Try missing or invalid inputs:
- city=None
- the weather API returns an error
- the LLM returns a nonsensical response
Check that the agent handles these gracefully (retry, fall back, or surface a clear error message).
Multi-Path Behavior Testing: If your graph has branches (e.g., if it is raining → provide umbrella advice, else → provide sunglasses advice), run test cases for each branch to ensure the routing logic works as intended.
Load/Stress Testing: If your graph is invoked concurrently, simulate that load in your tests. Use asyncio or pytest-xdist to verify that your graphs scale and that the way you manage state is thread-safe.
Common Traps with LangGraph
Ambiguous State Definition Trap
New developers frequently fail to define the state clearly, which leads to missing, duplicate, or inconsistent keys and to silent errors. Example: if the initial state is {"weather_report": None} and we forget to include "city", the weather node fails.
Fix: Write a TypedDict or schema at the start of development, and check that the state's keys match it after every node.
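A sketch of this fix, using the state keys from the weather example in this guide; the guard at the end is one cheap way to catch schema drift (the helper name is illustrative):

```python
# Declare the state shape up front so missing keys surface early
# instead of failing silently mid-graph.
from typing import Optional, TypedDict

class AgentState(TypedDict):
    city: str
    weather_report: Optional[str]
    clothing_advice: Optional[str]

def make_initial_state(city: str) -> AgentState:
    # Every key exists from the start, even if its value is not known yet
    return {"city": city, "weather_report": None, "clothing_advice": None}

state = make_initial_state("Delhi")
# A quick guard after each node catches schema drift early:
assert set(state) == set(AgentState.__annotations__), "state keys drifted from schema"
print(sorted(state))
```

Passing `AgentState` to `StateGraph(AgentState)` also gives you editor autocompletion and type checking on every node.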
Not Testing Incrementally Trap
New developers try to connect all the nodes at once without testing them individually. Example: you call the LLM API directly in your node without any local testing first (frustrating because of all the debugging you have to do later). Fix: In each development cycle, test the node in isolation (e.g., use mock functions that return "Sunny" instead of the real API response).
Not Using Logging Trap
New developers don't log their state transitions, so they can't see where the state is failing. Example: your weather node used to output correctly, but your clothing node fails silently and you can't see the cause. Fix: Use print() or Python's logging module to watch the state evolve, and always log the state after each node.
Treating Nodes as Black Boxes Trap
New developers assume nodes are complex and untouchable. Example: your node just wraps an LLM call, but you never inspect the prompt or the response. Fix: Remember that a node is nothing more than a simple Python function. Add debug print statements and run it with test inputs.
Conclusion
LangGraph lets beginners build AI agents without the "spaghetti logic" that often results from chaining prompts or makeshift orchestration. Because it is graph-based, it is clear how data flows and how decisions are made. It is designed for Python natives too, so you can keep your familiar, battle-tested testing and debugging styles. There are many ways to approach LangGraph, but if you build components and layer them together, you will find it not only user-friendly but also robust enough to carry into production-level projects. Most importantly, don't be afraid to experiment: each round of experimentation adds to your intuition, you will discover things worth sharing with the community, and LangGraph is designed for exactly that kind of hands-on learning.
References
- LangGraph Documentation — https://langchain-ai.github.io/langgraph — official docs on building agent workflows, nodes, and state machines.
- LangChain Documentation — https://python.langchain.com — foundation framework for LLM applications; useful context before diving into LangGraph.
- Groq LLM API (GroqCloud) — https://groq.com — high-speed inference API used for the weather/recommendation tools inside LangGraph.
Read the full article here: https://ai.plainenglish.io/end-to-end-intuitive-guide-to-building-ai-agents-using-langgraph-d9271811cf5b

