Each node emits a state update when it finishes. In the return statement you can include any field of AgentState that you need to update:
from langchain_core.messages import HumanMessage

def agent_node(state, agent, name):
    # Debug: show the incoming state
    print("\n***************\n")
    print(state)
    print("\n***************\n")
    result = agent.invoke(state)
    return {
        "messages": [
            HumanMessage(
                content=result["output"],
                name=name,
                additional_kwargs={"intermediate_steps": result["intermediate_steps"]},
            )
        ],
        "resolution": True,
    }
Check out this link https://langchain-ai.github.io/langgraph/concepts/low_level/#state
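For the "resolution": True update above to work, resolution has to exist as a channel in the state schema. A minimal plain-Python sketch of such a schema (the Message class is a hypothetical stand-in for langchain_core.messages.BaseMessage, so this runs without LangChain installed):

```python
import operator
from typing import Annotated, Sequence, TypedDict

# Hypothetical stand-in for langchain_core.messages.BaseMessage,
# used here only so the sketch is self-contained.
class Message:
    def __init__(self, content: str, name: str = ""):
        self.content = content
        self.name = name

class AgentState(TypedDict):
    # Annotated with a reducer: each node's update is *appended*
    # to the existing list rather than overwriting it.
    messages: Annotated[Sequence[Message], operator.add]
    # Plain channel: each update simply replaces the previous value.
    resolution: bool

# Simulate how the messages reducer combines the old value with a
# node's update: operator.add on two lists concatenates them.
old = [Message("hi")]
update = [Message("done", name="agent")]
merged = operator.add(old, update)
```

A field without a reducer (like resolution) is last-write-wins, which is why the node can just return "resolution": True.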
Answer from hector on Stack Overflow, "Langgraph: Add a new state in graph"
Getting messages from within a tool in LangGraph
Langgraph state messages token limit
Hello,
I have a graph with subgraphs. In one subgraph I call the tools inside a node. Inside the tool itself I take input from the user after printing a prompt telling them what to enter, and I also invoke the LLM.
- What's the usual way of prompting the user for input? I'm a bit confused here. Say we're in production: does the print statement get shown to the user? As far as I know, the user only sees the list of messages.
- How can I access the state from within a tool in order to update the list of messages? I'm not using a ToolNode.
The first question might seem stupid, but I really don't know. I've been stuck for a while thinking through these. No clear thoughts yet.
Thanks!
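One common answer to both questions is to avoid print()/input() inside the tool entirely: the tool receives the state explicitly and returns a state update, so the question surfaces in the message list (which is what the user actually sees) and the surrounding node does the merging. LangGraph's docs describe dedicated mechanisms for this (interrupt-based human-in-the-loop, and injecting graph state into tools), but the underlying pattern can be sketched in plain Python with no LangGraph imports; all names below are hypothetical:

```python
# Sketch of the pattern: the tool gets the state as an argument and
# returns an *update* dict instead of printing to stdout.

def ask_user_tool(state: dict, question: str) -> dict:
    # Don't print: surface the question as a message so the UI layer,
    # which renders the message list, can show it and collect the reply.
    return {"messages": [{"role": "assistant", "content": question}]}

def tool_node(state: dict) -> dict:
    update = ask_user_tool(state, "Which account should I look up?")
    # Merge the update the way an append-style messages reducer would.
    return {"messages": state["messages"] + update["messages"]}

state = {"messages": [{"role": "user", "content": "hi"}]}
new_state = tool_node(state)
```

In a real deployment the graph pauses after emitting such a message and resumes once the user's reply is appended to the state, rather than blocking on input() inside a node.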
Hello, for anyone using LangGraph: I am struggling with the state. In the state I am using messages: Annotated[Sequence[BaseMessage], operator.add] to save the messages and pass the state to every node. Because of RAG, the history sometimes exceeds the token limit of the LLM. Any idea how I can control the token count?
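One approach is to trim the history to a token budget just before invoking the LLM, keeping only the most recent messages. The sketch below uses a crude whitespace word count as a token estimate; in practice you would use your model's tokenizer (e.g. tiktoken), and langchain_core also provides a trim_messages helper for this purpose. All names here are hypothetical:

```python
def rough_tokens(text: str) -> int:
    # Crude estimate: one "token" per whitespace-separated word.
    # Swap in a real tokenizer for production use.
    return len(text.split())

def trim_history(messages, max_tokens: int):
    """Keep the most recent messages whose combined size fits the budget."""
    kept, total = [], 0
    # Walk backwards from the newest message, stop once the budget is full.
    for msg in reversed(messages):
        cost = rough_tokens(msg["content"])
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = [
    {"role": "user", "content": "one two three four"},   # 4 "tokens"
    {"role": "assistant", "content": "five six"},        # 2
    {"role": "user", "content": "seven eight nine"},     # 3
]
trimmed = trim_history(history, max_tokens=5)
# Only the two most recent messages fit the 5-token budget.
```

Because your messages channel uses an appending reducer, do the trimming inside the node right before the LLM call (on a local copy) rather than trying to shrink the channel itself; with operator.add, returning a shorter list would append, not replace.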