Each node emits an update to the state when it finishes. In the return statement, include each field of AgentState that you want to update:

def agent_node(state, agent, name):
    # Debug: print the state the node receives
    print("\n***************\n")
    print(state)
    print("\n***************\n")
    result = agent.invoke(state)
    return {
        "messages": [
            HumanMessage(
                content=result["output"],
                name=name,
                additional_kwargs={"intermediate_steps": result["intermediate_steps"]},
            )
        ],
        "resolution": True,
    }

Check out this link https://langchain-ai.github.io/langgraph/concepts/low_level/#state

Answer from hector on Stack Overflow
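The keys returned by agent_node above imply a state schema with a messages field that accumulates via a reducer and a resolution flag that is overwritten. As a rough sketch of how such a schema and its merge behavior work — the class name, the resolution field, and the plain-list operator.add reducer are assumptions for illustration, not LangGraph's actual definitions:

```python
import operator
from typing import Annotated, TypedDict

# Hypothetical schema matching the keys agent_node returns. LangGraph reads
# the reducer out of each field's Annotated metadata; fields without a
# reducer are simply overwritten by each node's partial update.
class AgentState(TypedDict):
    messages: Annotated[list, operator.add]  # updates are appended
    resolution: bool                         # updates overwrite

def apply_update(state: dict, update: dict) -> dict:
    """Merge a node's partial update the way a reducer-aware graph would."""
    merged = dict(state)
    for key, value in update.items():
        if key == "messages":        # reducer present: append
            merged[key] = merged.get(key, []) + value
        else:                        # no reducer: last write wins
            merged[key] = value
    return merged

state = {"messages": [{"role": "user", "content": "hi"}], "resolution": False}
update = {"messages": [{"role": "assistant", "content": "done"}], "resolution": True}
new_state = apply_update(state, update)
```

Returning only the fields you changed is what makes partial updates like the one in agent_node work: the graph merges them into the full state for you.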
Graph API overview - Docs by LangChain
MessagesState is defined with a single messages key which is a list of AnyMessage objects and uses the add_messages reducer. Typically, there is more state to track than just messages, so we see people subclass this state and add more fields, ...
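That subclass-and-extend pattern can be sketched as follows. The real MessagesState and add_messages live in langgraph.graph; this stand-in mimics their shape with plain typing so it runs anywhere, and the extra fields are purely illustrative:

```python
from typing import Annotated, TypedDict

# Stand-in for LangGraph's add_messages reducer (the real one also handles
# message IDs and in-place updates); here it simply appends.
def add_messages(left: list, right: list) -> list:
    return left + right

# Shape of langgraph.graph.MessagesState: one reducer-annotated messages key.
class MessagesState(TypedDict):
    messages: Annotated[list, add_messages]

# Subclass to track more than just messages (illustrative fields).
class ResearchState(MessagesState):
    topic: str
    draft_sections: list

# TypedDict inheritance merges the annotations, so nodes can return
# partial updates for any of the three channels.
```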
Discussions

langchain - Langgraph: Add a new state in graph - Stack Overflow
I have been tinkering with the langgraph supervisor multi-agent example: https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/agent_supervisor.ipynb. Let's say I have the below ...
Getting messages from within a tool in LangGraph
This is the pattern you should be following: https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/#editing In a nutshell: you stop the graph when user input is required, add that user input to the state and resume running the graph where it left off.
r/LangChain
October 25, 2024
Langgraph state messages token limit
Does this help? https://langchain-ai.github.io/langgraph/how-tos/memory/manage-conversation-history/
r/LangChain
September 2, 2024
python - How to sub-class LangGraph's MessageState or use Pydantic for channel separation - Stack Overflow
I am trying to create a Hierarchical LLM Agent workflow using LangGraph. The workflow is intended to be set up so that the research_team conducts the research and the writing_team writes the report. The ...
langgraph/libs/langgraph/langgraph/graph/message.py at main · langchain-ai/langgraph
class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
Author: langchain-ai
Understanding State in LangGraph: A Beginners Guide 🚀 | by Rick Garcia | Medium
August 17, 2024 - In this article, we’ll cover the concept of state in LangGraph at an introductory level, exploring its implementation, manipulation, and practical applications.
LangGraph: Build Stateful AI Agents in Python – Real Python
November 15, 2024 - Each node in your agent graph will append its output to messages in MessagesState. MessagesState comes with some nice features that make creating agents easier, and you’ll see these in a moment. Also, notice that you’ve imported the ToolNode class from langgraph.prebuilt.
StateGraph and MessagesState | langchain-ai/langchain-academy | DeepWiki
November 6, 2025 - This document explains the fundamental components of LangGraph: the StateGraph class for defining workflow orchestration and the MessagesState schema for managing conversation history.
r/LangChain on Reddit: Getting messages from within a tool in LangGraph
October 25, 2024

Hello,

I have a graph with subgraphs; in one subgraph I call the tools inside a node. Inside the tool itself I take input from the user, after printing what they should enter, and I also invoke the LLM.

  1. What's the usual way of prompting the user for input? I'm a bit confused here. Let's say in production, does the print statement get shown to the user? As far as I know it's the list of messages.

  2. How can I access the state from within a tool in order to update the list of messages? I'm not using a ToolNode.

The first question might seem stupid, but I really don't know. I've been stuck for a while thinking through these. No clear thoughts yet.

Thanks!

Top answer
1 of 2
This is the pattern you should be following: https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/#editing In a nutshell: you stop the graph when user input is required, add that user input to the state and resume running the graph where it left off.
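LangGraph implements that stop/resume pattern with interrupts plus a checkpointer; the control flow itself can be illustrated with a plain-Python generator (this is only an analogy, not the LangGraph API):

```python
def run_graph(state: dict):
    """Toy 'graph' that pauses when it needs user input, then resumes."""
    state["messages"].append("assistant: what city are you in?")
    user_reply = yield state            # pause; caller supplies the input
    state["messages"].append(f"user: {user_reply}")
    state["messages"].append(f"assistant: booked a table in {user_reply}")
    yield state                         # finished

graph = run_graph({"messages": []})
paused = next(graph)                    # runs until user input is required
final_state = graph.send("Paris")       # add the input to state and resume
```

In LangGraph the pause is durable: the checkpointer persists the state, so the graph can resume in a different process once the user replies.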
2 of 2
No such thing as a stupid question; it's good that you're thinking about how the components in the stack need to interact. You're correct: you typically don't want print() statements in production. The standard approach in LangChain is to handle user interaction through message objects in the conversation history, callbacks, and custom handlers.

Think about your architecture and design and work up from there, starting with a structured messaging system. A common approach:

  1. Define user prompts: use a system of messages or prompts that can be sent to the user interface. This is often handled by a frontend application that communicates with your backend logic.

  2. Collect input: the user interface collects the input and sends it back to your application, where you can process it further.

To manage state and update the list of messages within a tool:

  1. State management: use a state management system to keep track of user inputs and conversation history. This can be a simple data structure like a dictionary or a more complex state machine.

  2. Update messages: when a tool is invoked, update the state by appending new messages, and ensure that every tool has access to this shared state.

Without code or further context your question is mostly about design and theory at this stage, but I'm happy to take a look at any snippets or code bases. I would start by defining your requirements and then choose the appropriate approach based on them.
r/LangChain on Reddit: Langgraph state messages token limit
September 2, 2024

Hello, for anyone using LangGraph: I am struggling with the state. On the state I am using messages: Annotated[Sequence[BaseMessage], operator.add] to save the messages and pass the state to every node. Due to RAG it sometimes exceeds the token limit of the LLM. Any idea how I can control the token limit?
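A common remedy is to trim the history before every LLM call so the accumulated messages stay under a budget (LangChain also ships a trim_messages helper for this). A minimal sketch, using word count as a crude stand-in for a real tokenizer such as tiktoken:

```python
def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer."""
    return len(text.split())

def trim_history(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages whose combined size fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):      # walk newest-first
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))         # restore chronological order

history = [
    "user: tell me about LangGraph state",
    "assistant: state is a shared dict passed between nodes",
    "user: and reducers?",
    "assistant: reducers control how updates are merged",
]
trimmed = trim_history(history, max_tokens=12)  # keeps the last two messages
```

Trimming inside the node, right before the model call, keeps the full history available in the stored state for later turns.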

LangGraph Tutorial: Advanced State Management with Extended Fields - Unit 1.2 Exercise 1 - AI Product Engineer
class State(TypedDict):
    """Enhanced state container with context management capabilities.

    This implementation demonstrates advanced state management by tracking:
    1. Message history with proper LangGraph annotations
    2. Conversation summaries for context retention
    3. ...
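The conversation-summary idea in that snippet can be sketched as a compaction step that folds older messages into a summary field and keeps only a recent tail. The keep_last threshold and the summarizer below are illustrative; in practice the summary would come from an LLM call:

```python
def summarize(messages: list[str]) -> str:
    """Placeholder summarizer; a real one would call an LLM."""
    return f"(summary of {len(messages)} earlier messages)"

def compact_state(state: dict, keep_last: int = 2) -> dict:
    """Fold everything but the newest messages into state['summary']."""
    messages = state["messages"]
    if len(messages) <= keep_last:
        return state
    older, recent = messages[:-keep_last], messages[-keep_last:]
    return {"summary": summarize(older), "messages": recent}

state = {"summary": "", "messages": ["m1", "m2", "m3", "m4", "m5"]}
compacted = compact_state(state)  # summary covers m1..m3, messages keep m4, m5
```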
python - How to sub-class LangGraph's MessageState or use Pydantic for channel separation - Stack Overflow
class SupervisorInput(MessagesState):
    """User request."""
    main_topic: Annotated[str, ..., "The main topic of the request"]
    section_topic: Annotated[Optional[str], "Sub-section topic of the main topic"]
    section_content: Annotated[Optional[str], "Sub-section topic content"]

def make_supervisor_node(llm: BaseChatModel, system_prompt: str | None, members: List[str]) -> str:
    options = ["FINISH"] + members
    if system_prompt is None:
        system_prompt = (
            "You are a supervisor tasked with managing a conversation between the"
            f" following teams: {members}. Given the user request,"
            " respond with the team to act next. ...
Asking Humans for Help: Customizing State in LangGraph | LangChain OpenTutorial
This tutorial demonstrates how to extend a chatbot using LangGraph by adding a "human" node, allowing the system to optionally ask humans for help. It introduces state customization with an "ask_human" flag and shows how to handle interruptions and manual state updates.
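A flag like ask_human is typically consumed by a conditional edge that decides which node runs next. The routing function can be sketched like this (the node names are illustrative):

```python
def route_after_chatbot(state: dict) -> str:
    """Conditional edge: choose the next node from the ask_human flag."""
    if state.get("ask_human"):
        return "human"   # interrupt here and wait for a person
    return "tools"       # otherwise continue with automated tools

next_node = route_after_chatbot({"messages": [], "ask_human": True})
```

In LangGraph this function would be passed to add_conditional_edges on the chatbot node, mapping the returned labels to actual nodes.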