🌐
Langchain
reference.langchain.com › python › langgraph › graph › state › StateGraph
StateGraph | langgraph | LangChain Reference
from langchain_core.runnables import RunnableConfig
from typing_extensions import Annotated, TypedDict
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph
from langgraph.runtime import Runtime

def reducer(a: list, b: int | None) -> list:
    if b is not None:
        return a + [b]
    return a

class State(TypedDict):
    x: Annotated[list, reducer]

class Context(TypedDict):
    r: float

graph = StateGraph(state_schema=State, context_schema=Context)

def node(state: State, runtime: Runtime[Context]) -> dict:
    r = runtime.context.get("r", 1.0)
    x = state["x"][-1]
    next_value = x * r * (1 - x)
    return {"x": next_value}

graph.add_node("A", node)
graph.set_entry_point("A")
graph.set_finish_point("A")
compiled = graph.compile()

step1 = compiled.invoke({"x": 0.5}, context={"r": 3.0})
# {'x': [0.5, 0.75]}

The schema class that defines the input to the graph.
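The reducer-driven update in the snippet above can be sketched in plain Python, without LangGraph, to show how {'x': [0.5, 0.75]} comes about. The names `reducer` and `node` mirror the snippet; `invoke_once` is a hypothetical stand-in for `compiled.invoke`, not LangGraph API:

```python
# Plain-Python sketch (assumption: one node, one step) of how a
# channel reducer merges a node's return value into existing state.

def reducer(a: list, b):
    # Append the node's scalar output to the accumulated history.
    if b is not None:
        return a + [b]
    return a

def node(state: dict, context: dict) -> dict:
    # Logistic-map update, as in the snippet: x * r * (1 - x)
    r = context.get("r", 1.0)
    x = state["x"][-1]
    return {"x": x * r * (1 - x)}

def invoke_once(state: dict, context: dict) -> dict:
    # Hypothetical stand-in for compiled.invoke: run the node, then
    # merge its output into state via the channel's reducer.
    update = node(state, context)
    return {"x": reducer(state["x"], update["x"])}

result = invoke_once({"x": [0.5]}, {"r": 3.0})
print(result)  # {'x': [0.5, 0.75]}
```

The real `invoke` additionally runs the initial input {"x": 0.5} through the reducer to seed the list; the sketch starts from the seeded state for brevity.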
🌐
Langchain
docs.langchain.com › oss › python › langgraph › graph-api
Graph API overview - Docs by LangChain
from dataclasses import dataclass
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.runtime import Runtime

class State(TypedDict):
    input: str
    results: str

@dataclass
class Context:
    user_id: str

builder = StateGraph(State)

def plain_node(state: State):
    return state

def node_with_runtime(state: State, runtime: Runtime[Context]):
    print("In node: ", runtime.context.user_id)
    return {"results": f"Hello, {state['input']}!"}

def node_with_execution_info(state: State, runtime: Runtime):
    print("In node with thread_id: ", runtime.execution_info.thread_id)
    return {"results": f"Hello, {state['input']}!"}

builder.add_node("plain_node", plain_node)
builder.add_node("node_with_runtime", node_with_runtime)
builder.add_node("node_with_execution_info", node_with_execution_info)
...
Discussions

Question related to Graphs
Hi there! The issue in your graph is that it is accepting two arguments, when it should just be accepting a single state argument. You will need to modify your state in some way, perhaps by adding a question key like so:

from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

class AgentState(TypedDict):
    question: str
    messages: list[str]

def agent(state):
    res = llm.invoke(f"""You are given an interaction with a user so far and the final question. Use these to decide if the user is interested in small talk or whether they want to know something specifically pension-related.
If it's related to small talk, return "Small Talk"
If it's related to pensions, return "Pension"
Only return either of these values and nothing else

Here is the question:
{state['question']}

Here is the full conversation:
{state['messages']}
""")
    return {"messages": state['messages'] + [res.content]}

workflow = StateGraph(AgentState)
workflow.add_node("agent", agent)
workflow.add_edge(START, "agent")
workflow.add_edge("agent", END)
graph = workflow.compile()

graph.invoke({"question": "How are you?", "messages": ["How are you?"]})

If you want to explore more about how memory can persist, check out these guides: long term memory and persistence. In addition, be sure to read the conceptual guides, linked here, to get a better understanding of how LangGraph is designed with memory in mind.
🌐 r/LangChain
3
1
October 2, 2024
unhashable type: 'dict' - Help Needed
Hi - glad to hear you are playing with LangGraph! The reason this isn't working is that your state is defined as a dictionary, which is unhashable. You can work around this by using the following code:

from typing_extensions import TypedDict
from langgraph.graph import StateGraph

class AgentState(TypedDict):
    a: list
    b: list
    c: list
    d: list

# nodes defined in between
...

graph = StateGraph(AgentState)

In addition, you can add different reducers to each of your state keys in order to determine how they get updated. You can read about that here. The low-level conceptual guide in general is helpful for understanding the basics of LangGraph, so I encourage you to read it if possible. I hope this helps and that you continue working with and enjoying LangGraph!
🌐 r/LangChain
10
1
August 15, 2024
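The per-key reducer idea mentioned in that answer can be sketched in plain Python. `merge_state` and the `REDUCERS` map are illustrative names, not LangGraph API; they only show the merge semantics a reducer gives each state key:

```python
import operator

# Sketch: each state key gets its own reducer that decides how a
# node's partial update is merged into the existing state.
REDUCERS = {
    "a": operator.add,            # lists: concatenate history
    "b": lambda old, new: new,    # last write wins
}

def merge_state(state: dict, update: dict) -> dict:
    merged = dict(state)
    for key, value in update.items():
        # Keys without a registered reducer default to overwrite.
        reduce_fn = REDUCERS.get(key, lambda old, new: new)
        merged[key] = reduce_fn(merged[key], value)
    return merged

state = {"a": [1], "b": [0]}
state = merge_state(state, {"a": [2], "b": [9]})
print(state)  # {'a': [1, 2], 'b': [9]}
```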
How to Manage State in LangGraph for Multiple Users?
Hm, maybe I'm not understanding, but typically we handle this with something like: https://langchain-ai.github.io/langgraph/how-tos/persistence/ Basically, each thread is a separate conversation. So if you have a chatbot and there are multiple users, you'd just create a separate thread for each user, and that state will be saved and loaded independently. Unless by "multiple users" you meant multiple users in the same chat?
🌐 r/LangChain
8
8
June 27, 2024
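The thread-per-user idea in that answer can be sketched without LangGraph: a checkpointer is, conceptually, a mapping from a thread_id to saved state. `MemoryCheckpointer` and `chat` below are illustrative stand-ins, not the real InMemorySaver API:

```python
# Sketch: state saved and loaded independently per thread_id,
# mirroring how a LangGraph checkpointer isolates conversations.
class MemoryCheckpointer:
    def __init__(self):
        self._store = {}

    def load(self, thread_id: str) -> dict:
        # Unknown threads start from an empty conversation.
        return self._store.get(thread_id, {"messages": []})

    def save(self, thread_id: str, state: dict) -> None:
        self._store[thread_id] = state

def chat(cp: MemoryCheckpointer, thread_id: str, text: str) -> dict:
    # Load this user's state, append the message, save it back.
    state = cp.load(thread_id)
    state = {"messages": state["messages"] + [text]}
    cp.save(thread_id, state)
    return state

cp = MemoryCheckpointer()
chat(cp, "user-1", "hi")
chat(cp, "user-2", "hello")
chat(cp, "user-1", "how are you?")
print(cp.load("user-1"))  # {'messages': ['hi', 'how are you?']}
print(cp.load("user-2"))  # {'messages': ['hello']}
```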
Graph vs StateGraph
Read the docs! Just kidding, I don't think they exist, or I haven't been able to find them. StateGraph merges the state for you from node to node and enforces types; I use it for most of my work. Graph doesn't have these features, so you can skip them or implement them yourself.
🌐 r/LangGraph
3
1
May 20, 2025
🌐
Medium
medium.com › ai-agents › langgraph-for-beginners-part-4-stategraph-794004555369
LangGraph for Beginners, Part 4: StateGraph. | by Santosh Rout | AI Agents | Medium
October 26, 2024 - We can't define a schema using a Graph instance. In order to define a state schema, we will have to use a StateGraph instance. In this tutorial, we will learn ... We will use a Colab notebook for this.

# Install LangGraph and langchain packages
!pip install --quiet -U langgraph

# Import required packages
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict
🌐
PyPI
pypi.org › project › langgraph › 0.0.23
langgraph · PyPI
There are also a few methods we've added to make it easy to use common, prebuilt graphs and components. ... This is a simple helper class to help with calling tools. It is parameterized by a list of tools: ... It then exposes a runnable interface. It can be used to call tools: you can pass in an AgentAction and it will look up the relevant tool and call it with the appropriate input. from langgraph.prebuilt import chat_agent_executor
      » pip install langgraph
    
Published   Feb 04, 2024
Version   0.0.23
🌐
LangChain
langchain-ai.github.io › langgraphjs › reference › classes › langgraph.StateGraph.html
StateGraph | LangGraph.js API Reference
Overrides Graph<N, S, U, StateGraphNodeSpec<S, U>, ToStateDefinition<C>>.constructor · Defined in libs/langgraph-core/dist/graph/state.d.ts:164
🌐
Medium
medium.com › @diwakarkumar_18755 › understanding-langgraphs-stategraph-a-simple-guide-020f70fc0038
Understanding LangGraph’s StateGraph: A Simple Guide | by Diwakar Kumar | Medium
March 24, 2025 - The following example demonstrates a simple StateGraph where: The graph starts by adding two numbers. The sum is then multiplied by 2. The final result is displayed.

from pydantic import BaseModel
from langgraph.graph import StateGraph, START, END

# Define the state using BaseModel
class MathState(BaseModel):
    num1: float
    num2: float
    sum_result: float = 0
    final_result: float = 0

# Define node functions
async def add_numbers(state: MathState) -> MathState:
    state.sum_result = state.num1 + state.num2
    return state

async def multiply_result(state: MathState) -> MathState:
    state.final_result = state.sum_result * 2
    return state
...
🌐
Reddit
reddit.com › r/langchain › question related to graphs
r/LangChain on Reddit: Question related to Graphs
October 2, 2024 -

Hi Guys,

I'm dabbling around in Langgraph and running into an issue at this point.

I am trying to make a first node in my graph that should decide if someone is just small talking or actually asking a RAG-specific question. It should make that decision based on the question and memory. I've tried to implement this and it works if I do this only on the question, but I'd like to do it also based on memory.

Here is my implementation:

from typing import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph import Graph

class AgentState(TypedDict):
    messages: list[str]

workflow = StateGraph(AgentState)

def agent(question, memory):
    res = llm.invoke(f"""You are given an interaction with a user so far and the final question. Use these to decide if the user is interested in small talk or that it want to know something specifically pension related.
If it's related to small talk, return "Small Talk"
If it's related to pensions, return "Pension"
Only return either of these values and nothing else

Here is the question:
{question}

Here is the full conversation:
{memory}
""")
    return res.content

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

workflow = Graph()
workflow.add_node("agent", agent)
workflow.add_edge(START, "agent")
workflow.add_edge("agent", END)
graph = workflow.compile()
graph.invoke('How are you?', 'Nothing')

this returns:
AttributeError: 'str' object has no attribute 'items'

is there an issue with what i defined in my class?

Any help would be sweet! Thanks in advance

Top answer
Hi there! The issue in your graph is that it is accepting two arguments, when it should just be accepting a single state argument. You will need to modify your state in some way, perhaps by adding a question key like so:

from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

class AgentState(TypedDict):
    question: str
    messages: list[str]

def agent(state):
    res = llm.invoke(f"""You are given an interaction with a user so far and the final question. Use these to decide if the user is interested in small talk or whether they want to know something specifically pension-related.
If it's related to small talk, return "Small Talk"
If it's related to pensions, return "Pension"
Only return either of these values and nothing else

Here is the question:
{state['question']}

Here is the full conversation:
{state['messages']}
""")
    return {"messages": state['messages'] + [res.content]}

workflow = StateGraph(AgentState)
workflow.add_node("agent", agent)
workflow.add_edge(START, "agent")
workflow.add_edge("agent", END)
graph = workflow.compile()

graph.invoke({"question": "How are you?", "messages": ["How are you?"]})

If you want to explore more about how memory can persist, check out these guides: long term memory and persistence. In addition, be sure to read the conceptual guides, linked here, to get a better understanding of how LangGraph is designed with memory in mind.
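Since the classifier returns either "Small Talk" or "Pension", the natural next step is to branch on that value. Here is a plain-Python sketch of the routing decision; the `route` function and node names are illustrative, and in LangGraph this would typically be wired up as a conditional edge:

```python
# Sketch: route on the classifier's last message, the way a
# conditional edge would pick the next node. Node names are illustrative.
def route(state: dict) -> str:
    label = state["messages"][-1]
    if label == "Small Talk":
        return "small_talk_node"
    return "rag_node"

print(route({"messages": ["How are you?", "Small Talk"]}))      # small_talk_node
print(route({"messages": ["What is my pension?", "Pension"]}))  # rag_node
```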
🌐
LangChain
langchain.com › blog › langgraph
LangGraph
January 17, 2024 - We've recreated the canonical LangChain AgentExecutor with LangGraph. This will allow you to use existing LangChain agents while more easily modifying the internals of the AgentExecutor. The state of this graph by default contains concepts that should be familiar to you if you've used LangChain agents: input, chat_history, intermediate_steps (and agent_outcome to represent the most recent agent outcome)

from typing import TypedDict, Annotated, List, Union
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
import operator

class AgentState(TypedDict):
    input: str
    chat_history: list[BaseMessage]
    agent_outcome: Union[AgentAction, AgentFinish, None]
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
🌐
Medium
medium.com › @gitmaxd › understanding-state-in-langgraph-a-comprehensive-guide-191462220997
Understanding State in LangGraph: A Beginners Guide 🚀 | by Rick Garcia | Medium
August 17, 2024 -

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, END

# BasicState is defined earlier in the guide; a minimal version:
class BasicState(TypedDict):
    count: int

def create_simple_graph():
    workflow = StateGraph(BasicState)

    def increment_node(state: BasicState):
        return {"count": state["count"] + 1}

    workflow.add_node("increment", increment_node)
    workflow.set_entry_point("increment")
    workflow.add_edge("increment", END)
    return workflow.compile()
🌐
Baihezi
baihezi.com › mirrors › langgraph › reference › graphs › index.html
Graphs - LangGraph
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

msgs1 = [HumanMessage(content="Hello", id="1")]
msgs2 = [AIMessage(content="Hi there!", id="2")]
add_messages(msgs1, msgs2)
# [HumanMessage(content="Hello", id="1"), AIMessage(content="Hi there!", id="2")]

msgs1 = [HumanMessage(content="Hello", id="1")]
msgs2 = [HumanMessage(content="Hello again", id="1")]
add_messages(msgs1, msgs2)
# [HumanMessage(content="Hello again", id="1")]

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph

class State(TypedDict):
    messages: Annotated[list, add_messages]

builder = StateGraph(State)
builder.add_node("chatbot", lambda state: {"messages": [("assistant", "Hello")]})
builder.set_entry_point("chatbot")
builder.set_finish_point("chatbot")
graph = builder.compile()

graph.invoke({})
# {'messages': [AIMessage(content='Hello', id='f657fb65-b6af-4790-a5b5-1d266a2ed26e')]}
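The append-vs-replace behavior shown above can be sketched in plain Python. `merge_by_id` is an illustrative approximation of what add_messages does with message IDs (new IDs append; an existing ID overwrites the original in place), not the real implementation:

```python
# Sketch: append new messages; a message whose id already exists
# replaces the original in place (approximating add_messages).
def merge_by_id(left: list[dict], right: list[dict]) -> list[dict]:
    merged = list(left)
    index = {m["id"]: i for i, m in enumerate(merged)}
    for msg in right:
        if msg["id"] in index:
            merged[index[msg["id"]]] = msg   # same id: overwrite in place
        else:
            index[msg["id"]] = len(merged)   # new id: append
            merged.append(msg)
    return merged

a = [{"id": "1", "content": "Hello"}]
print(merge_by_id(a, [{"id": "2", "content": "Hi there!"}]))
# [{'id': '1', 'content': 'Hello'}, {'id': '2', 'content': 'Hi there!'}]
print(merge_by_id(a, [{"id": "1", "content": "Hello again"}]))
# [{'id': '1', 'content': 'Hello again'}]
```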
🌐
Kitemetric
kitemetric.com › blogs › visualizing-langgraph-workflows-with-get-graph
LangGraph Visualization: Mastering StateGraph | Kite Metric
Managing these chat state graphs can be complex, but LangGraph's StateGraph simplifies graph building and visualization. This post demonstrates building a custom state graph, defining nodes and edges, and visualizing it using various formats. We'll cover Mermaid PNG, ASCII, and Mermaid code representations. ... This section outlines building a basic state graph with custom nodes and transitions.

from IPython.display import Image
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    "State to manage all chats"

class MyNode:
    "Custom Node"

# Initialize the graph builder with a state
builder = StateGraph(State)

# Add a custom node
builder.add_node("myNode", MyNode)

# Define the edges for transitions
builder.add_edge(START, "myNode")
builder.add_edge("myNode", END)

# Compile the graph
chat_graph = builder.compile()
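What a Mermaid rendering of this graph contains can be sketched by generating the diagram text from an edge list. `to_mermaid` is an illustrative helper written for this sketch, not LangGraph's draw_mermaid API, and the exact text the real method emits differs:

```python
# Sketch: build Mermaid flowchart text from an edge list, roughly the
# kind of output a graph's Mermaid renderer produces.
def to_mermaid(edges: list[tuple[str, str]]) -> str:
    lines = ["graph TD;"]
    for src, dst in edges:
        lines.append(f"    {src} --> {dst};")
    return "\n".join(lines)

diagram = to_mermaid([("START", "myNode"), ("myNode", "END")])
print(diagram)
# graph TD;
#     START --> myNode;
#     myNode --> END;
```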
🌐
Langfuse
langfuse.com › docs › integrations › langchain › example-python-langgraph
Open Source Observability for LangGraph - Langfuse
A StateGraph object defines our chatbot's structure as a state machine. We will add nodes to represent the LLM and functions the chatbot can call, and edges to specify how the bot transitions between these functions.

from typing import Annotated
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

class State(TypedDict):
    # Messages have the type "list".
    ...
🌐
Codecademy
codecademy.com › article › building-ai-workflow-with-langgraph
LangGraph Tutorial: Complete Guide to Building AI Workflows | Codecademy
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, END

# Define the structure of our state
class GraphState(TypedDict):
    """
    Represents the state of our graph.

    Attributes:
        message: The welcome message.
        question: The follow-up question.
    """
    message: str
    question: str
🌐
Prem AI
blog.premai.io › langgraph-deep-dive-state-machines-tools-and-human-in-the-loop
LangGraph Deep Dive: State Machines, Tools, and Human-in-the-Loop
March 16, 2026 -

from langgraph.graph import StateGraph, START, END
from typing import TypedDict

# 1. Define state schema
class AgentState(TypedDict):
    messages: list

# 2. Define a node (just a function)
def greet(state: AgentState) -> dict:
    return {"messages": state["messages"] + ["Hello!"]}

# 3.