Langchain
docs.langchain.com › oss › python › langchain › agents
Agents - Docs by LangChain
March 29, 2026 - Without it, the agent won't know how to invoke the dynamically added tool. To learn more about tools, see Tools. To customize how tool errors are handled, use the @wrap_tool_call decorator to create middleware: from langchain.agents import create_agent from langchain.agents.middleware import wrap_tool_call from langchain.messages import ToolMessage @wrap_tool_call def handle_tool_errors(request, handler): """Handle tool execution errors with custom messages.""" try: return handler(request) except Exception as e: # Return a custom error message to the model return ToolMessage( content=f"Tool error: Please check your input and try again. …
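The snippet above is cut off, and the `wrap_tool_call` middleware API belongs to the LangChain library itself. As a library-free sketch of the same pattern, the idea is a wrapper that intercepts every tool call and converts exceptions into a plain message the model can read; the `ToolRequest` class and `divide` tool below are illustrative stand-ins, not LangChain's actual types.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolRequest:
    # Hypothetical stand-in for the request object the middleware receives
    tool_name: str
    args: dict

def wrap_tool_call(middleware):
    """Return a runner that routes every tool call through `middleware`."""
    def run(request: ToolRequest, handler: Callable[[ToolRequest], str]) -> str:
        return middleware(request, handler)
    return run

def handle_tool_errors(request, handler):
    try:
        return handler(request)
    except Exception:
        # Surface a custom error message instead of crashing the agent loop
        return f"Tool error in {request.tool_name}: please check your input and try again."

runner = wrap_tool_call(handle_tool_errors)

def divide(req: ToolRequest) -> str:
    return str(req.args["x"] / req.args["y"])

print(runner(ToolRequest("divide", {"x": 6, "y": 2}), divide))  # "3.0"
print(runner(ToolRequest("divide", {"x": 1, "y": 0}), divide))  # custom error message
```

The key design point carries over: the middleware receives both the request and the original handler, so it can retry, rewrite arguments, or (as here) swallow the exception and hand the model something actionable.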
Aurelio
aurelio.ai › learn › langchain-agent-executor
LangChain Agent Executor Deep Dive | Aurelio AI
We've worked through each step of our agent code, but it doesn't run without us running every step. We must write a class to handle all the logic we just worked through. ... from langchain_core.messages import BaseMessage, HumanMessage, AIMessage class CustomAgentExecutor: chat_history: list[BaseMessage] def __init__(self, max_iterations: int = 3): self.chat_history = [] self.max_iterations = max_iterations self.agent: RunnableSerializable = ( { "input": lambda x: x["input"], "chat_history": lambda x: x["chat_history"], "agent_scratchpad": lambda x: x.get("agent_scratchpad", []) } | prompt | l…
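The snippet above is truncated mid-pipeline, but the loop it builds toward can be sketched without the library: call the agent, execute any requested tool, append the observation to a scratchpad, and stop on a final answer or after `max_iterations`. The `fake_agent` below is a scripted stand-in for the `prompt | llm` pipeline, not LangChain code.

```python
def fake_agent(state):
    """Stand-in agent: request the add tool once, then give a final answer."""
    if not state["agent_scratchpad"]:
        return {"tool": "add", "args": {"x": 2, "y": 3}}
    return {"final_answer": f"The result is {state['agent_scratchpad'][-1]}"}

TOOLS = {"add": lambda x, y: x + y}

class CustomAgentExecutor:
    def __init__(self, agent, max_iterations: int = 3):
        self.agent = agent
        self.max_iterations = max_iterations
        self.chat_history: list = []

    def invoke(self, user_input: str) -> str:
        scratchpad: list = []
        for _ in range(self.max_iterations):
            step = self.agent({
                "input": user_input,
                "chat_history": self.chat_history,
                "agent_scratchpad": scratchpad,
            })
            if "final_answer" in step:
                self.chat_history += [("human", user_input),
                                      ("ai", step["final_answer"])]
                return step["final_answer"]
            # Execute the requested tool and record the observation
            scratchpad.append(TOOLS[step["tool"]](**step["args"]))
        return "Agent stopped: max iterations reached."

executor = CustomAgentExecutor(fake_agent)
print(executor.invoke("What is 2 + 3?"))  # "The result is 5"
```

Note the same state shape as the snippet's lambdas: `input`, `chat_history`, and `agent_scratchpad` are what the agent sees on every turn, and the scratchpad is what accumulates tool observations between turns.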
How to incorporate system message for agent.invoke() function
As long as you create your LLM using the chat interface (ChatOpenAI() instead of OpenAI()), you can choose to pass a message (list of dictionaries with role and content) instead of just a text query. More on reddit.com
Langchain agent responding when it should just be thinking.
Based on the prompt you provided, it makes sense that the message is in the intermediate steps. This is because you are asking the LLM to confirm and then call the tools, which might be happening in the same AI message. Maybe try adjusting the prompt to force the LLM so that once the user has answered all of the questions, it calls the tools and then responds with "it sounds like you are ready to cross the bridge…" plus the available slots from the tool. Not sure this makes sense, but please let me know and I can elaborate. More on reddit.com
How does this LangChain agent correctly identify the tool to use?
It's based on the tool description I believe. I don't think at the "tool selection" stage there is a built in function to cross reference the index in the financial statement tools, so it is evaluating the query against the tool description, which only describes the tools as having financial data, not data pertaining to the board. You could try and expand the example by putting a more detailed description in the tool description param and see how the tool selection differs. More on reddit.com
Why is my chain.invoke({}) command giving the full model response instead of just AIMessage(content=' ')
Just read the source for LangChain. If you don't like the output, override the functions 🤷🏻‍♂️ More on reddit.com
Videos
53:20
LangChain Full Crash Course - AI Agents in Python - YouTube
34:54
LangChain Agent Executor Deep Dive | Walkthrough for 2025 - YouTube
18:36
Langchain AI Agents Tool Calling | Deep Dive with Examples - YouTube
24:20
Build a voice agent with LangChain - YouTube
21:29
LangChain Agents in 2025 | Full Tutorial for v0.3 - YouTube
30:44
LangChain V1 Tutorial: Build AI Agents Step-by-Step - YouTube
Medium
nakamasato.medium.com › langchain-how-an-agent-works-7dce1569933d
LangChain: How an Agent works. Deep dive into Agent and AgentExecutor | by Masato Naka | Medium
March 28, 2024 - Consequently, it employs the Agent to obtain the next action, executes the returned action iteratively, and continues this process until a conclusive answer is generated for the given input. Let's delve into a simple example to illustrate the process! from langchain_openai import OpenAI import langchain from langchain import hub from langchain.agents import AgentExecutor, create_react_agent, Tool from langchain_community.utilities import GoogleSearchAPIWrapper # use google as a tool google = GoogleSearchAPIWrapper() def top5_results(query): return google.results(query, 5) TOOL_GOOGLE = Tool( name="google-search", description="Search Google for recent results.", func=top5_results, ) tools = [TOOL_GOOGLE] # prompt prompt = hub.pull("hwchase17/react-chat") # llm llm = OpenAI(temperature=0) # agent agent = create_react_agent( llm=llm, tools=tools, prompt=prompt, )
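The "obtain the next action, execute it, repeat" loop the article describes is the ReAct pattern that AgentExecutor runs. It can be sketched without the library: the LLM emits `Action:` / `Action Input:` lines, the executor runs the named tool and feeds the observation back, and the loop ends when `Final Answer:` appears. The scripted `fake_llm` below stands in for `OpenAI(temperature=0)`, and the tool is a stub rather than a real Google search.

```python
import re

def fake_llm(prompt: str) -> str:
    """Scripted stand-in for the LLM: search first, then answer."""
    if "Observation:" not in prompt:
        return ("Thought: I should search.\n"
                "Action: google-search\n"
                "Action Input: LangChain agents")
    return ("Thought: I have what I need.\n"
            "Final Answer: LangChain agents call tools in a loop.")

def top5_results(query: str) -> str:
    return f"(pretend search results for {query!r})"

tools = {"google-search": top5_results}

def run_react(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        out = fake_llm(prompt)
        if "Final Answer:" in out:
            return out.split("Final Answer:")[1].strip()
        # Parse the requested action, run the tool, feed the observation back
        action = re.search(r"Action: (.+)", out).group(1).strip()
        action_input = re.search(r"Action Input: (.+)", out).group(1).strip()
        observation = tools[action](action_input)
        prompt += f"\n{out}\nObservation: {observation}"
    return "stopped"

print(run_react("How do LangChain agents work?"))
```

This is why `hub.pull("hwchase17/react-chat")` matters in the real code: that prompt is what teaches the model to emit the `Thought` / `Action` / `Final Answer` format the executor parses.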
DataCamp
datacamp.com › tutorial › building-langchain-agents-to-automate-tasks-in-python
Building LangChain Agents to Automate Tasks in Python | DataCamp
August 28, 2024 - Build powerful multi-agent systems by applying emerging agentic design patterns in the LangGraph framework. ... Before we get into anything, let's set up our environment for the tutorial. ... Testing that everything is working correctly by querying GPT-3.5 (the default language model) of OpenAI: from langchain_openai import OpenAI llm = OpenAI(openai_api_key=api_key) question = "Is Messi the best footballer of all time?" output = llm.invoke(question) print(output[:75])
Medium
medium.com › @shravankoninti › agent-tools-basic-code-using-langchain-50e13eb07d92
Agent & Tools โ Basic Code using LangChain | by Shravan Kumar | Medium
August 25, 2024 - I have written one more article on Overview on AI Agents and different types of Agentic Frameworks to work with and it gives us a good flavour of agentic applications. ... from dotenv import load_dotenv from langchain import hub from langchain.agents import ( AgentExecutor, create_react_agent, ) from langchain_core.tools import Tool from langchain_openai import ChatOpenAI # Load environment variables from .env file load_dotenv()
Aurelio
aurelio.ai › learn › langchain-agents-intro
Introduction to LangChain Agents | Aurelio AI
Our LLM/agent will read this and use it to decide when and how to use the tool. Clear parameter names that ideally tell the LLM what each parameter is. If the parameter names aren't clear, we ensure the docstring explains what the parameter is for and how to use it. Both parameter and return type annotations. ... from langchain_core.tools import tool @tool def add(x: float, y: float) -> float: """Add 'x' and 'y'.""" return x + y @tool def multiply(x: float, y: float) -> float: """Multiply 'x' and 'y'.""" return x * y @tool def exponentiate(x: float, y: float) -> float: """Raise 'x' to the power of 'y'.""" return x ** y @tool def subtract(x: float, y: float) -> float: """Subtract 'x' from 'y'.""" return y - x
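The reason the docstring and type annotations matter is that a decorator like `@tool` can derive the schema the LLM sees entirely from the function signature. The `tool_schema` helper below is a library-free sketch of that idea; it mimics the mechanism, not LangChain's actual schema format.

```python
import inspect
from typing import get_type_hints

def tool_schema(fn):
    """Derive a schema-like dict from a function's docstring and annotations."""
    hints = get_type_hints(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {k: t.__name__ for k, t in hints.items() if k != "return"},
        "returns": hints.get("return", type(None)).__name__,
    }

def exponentiate(x: float, y: float) -> float:
    """Raise 'x' to the power of 'y'."""
    return x ** y

print(tool_schema(exponentiate))
# {'name': 'exponentiate', 'description': "Raise 'x' to the power of 'y'.",
#  'parameters': {'x': 'float', 'y': 'float'}, 'returns': 'float'}
```

Everything the model will use to pick and call the tool comes from the signature, which is why a missing docstring or an unannotated parameter degrades tool selection: there is simply nothing for the schema to carry.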
Reddit
reddit.com › r/langchain › how to incorporate system message for agent.invoke() function
r/LangChain on Reddit: How to incorporate system message for agent.invoke() function
February 20, 2025 - Hi folks, I have a question about agent invoke. In all the examples I have seen, agent.invoke(query) is used and query is just a question/message. How do I add a system message in this use case? Thanks.
Top answer 1 of 2 (score: 3)
As long as you create your LLM using the chat interface (ChatOpenAI() instead of OpenAI()), you can choose to pass a message (list of dictionaries with role and content) instead of just a text query.
Answer 2 of 2 (score: 1)
What is 'agent' in this instance? You can pass a standard chat array with roles and content to a chain.invoke, with system role and content at the top. If agent is a graph, then pass through the state.
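Both answers come down to the same shape: instead of a bare string, pass a list of role/content messages with the system message first. A small helper makes the shape concrete; the commented-out `agent.invoke` call is hypothetical and only illustrates where the list would go.

```python
def with_system(system_prompt: str, user_query: str) -> list[dict]:
    """Wrap a plain text query into a chat message list led by a system message."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

messages = with_system(
    "You are a terse math tutor. Answer in one sentence.",
    "What is 9 to the power of 0.5?",
)
print(messages[0]["role"])  # system

# With a chat model (ChatOpenAI, not OpenAI) or an agent built on one,
# the whole list is passed in place of the bare query, e.g.:
# result = agent.invoke({"messages": messages})
```

As the second answer notes, the exact invoke payload depends on what "agent" is; if it is a graph, the message list travels inside the graph state rather than being passed directly.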