Here is how you can do it. Change the content in PREFIX, SUFFIX, and FORMAT_INSTRUCTIONS to suit your needs after trying and testing a few times.

from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003", temperature=0.7,
             openai_api_key="<my OpenAI API key>")
tools = load_tools(["serpapi"], llm=llm)

PREFIX = '''You are an AI data scientist. You have done years of research studying all the AI algorithms. You also love to write. In your free time you write blog posts articulating what you have learned about different AI algorithms. Do not forget to include information on the algorithm's benefits, disadvantages, and applications. Additionally, the blog post should explain how the algorithm advances model reasoning by a whopping 70% and how it is a plug-and-play version, connecting seamlessly to other components.
'''

FORMAT_INSTRUCTIONS = """To use a tool, please use the following format:
'''
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
'''

When you have gathered all the information regarding the AI algorithm, write it to the user in the form of a blog post.

'''
Thought: Do I need to use a tool? No
AI: [write a blog post]
'''
"""

SUFFIX = '''

Begin!

Previous conversation history:
{chat_history}

Instructions: {input}
{agent_scratchpad}
'''

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent="zero-shot-react-description",
    verbose=True,
    return_intermediate_steps=True,
    agent_kwargs={
        'prefix': PREFIX,
        'format_instructions': FORMAT_INSTRUCTIONS,
        'suffix': SUFFIX,
        # The custom SUFFIX references {chat_history}, so it must be declared
        # here and supplied at call time (or via a memory component).
        'input_variables': ['input', 'chat_history', 'agent_scratchpad'],
    }
)

query = "Write a blog post about the k-nearest-neighbours algorithm."
res = agent({"input": query.strip(), "chat_history": ""})
print(res['output'])
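For intuition, the zero-shot agent essentially concatenates the prefix, the rendered tool descriptions, the format instructions, and the suffix into one prompt template, then fills in the runtime variables. The sketch below is a hypothetical stand-alone helper (`build_prompt` is not LangChain's API) showing that assembly in plain Python:

```python
# Rough sketch of how a zero-shot agent assembles its prompt from the
# customizable pieces. Hypothetical helper, not LangChain's implementation.
def build_prompt(prefix, tools, format_instructions, suffix):
    # tools: list of (name, description) pairs, rendered one per line,
    # which is what fills the "you have access to these tools" section.
    tool_strings = "\n".join(f"{name}: {desc}" for name, desc in tools)
    return "\n\n".join([prefix, tool_strings, format_instructions, suffix])

template = build_prompt(
    "You are an AI data scientist.",
    [("Search", "useful for looking up current information")],
    "Thought / Action / Action Input / Observation",
    "Begin!\n\nInstructions: {input}\n{agent_scratchpad}",
)
# Runtime variables are substituted only when the template is formatted.
print(template.format(input="Write a blog post about KNN.", agent_scratchpad=""))
```

This is why PREFIX, FORMAT_INSTRUCTIONS, and SUFFIX can each be swapped independently: they are just segments of one concatenated prompt string.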
Answer from Meet Gondaliya on Stack Overflow
A second answer adds:
You're close, but you're not actually filling in the prompt template:

foo = agent.run(prompt.format_prompt(topic=topic))

That should have you off to the races. For what it's worth, I think this should raise an error rather than the PromptTemplate quietly rendering as a string.
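To see why an unfilled template can slip through silently, here is a minimal stand-in for a prompt template (`MiniTemplate` is hypothetical, not LangChain's `PromptTemplate`): passing the object where a string is expected stringifies it with the placeholders intact, while calling `.format()` actually substitutes them.

```python
# Minimal stand-in illustrating the pitfall. Hypothetical class, not
# LangChain's PromptTemplate.
class MiniTemplate:
    def __init__(self, template):
        self.template = template

    def format(self, **kwargs):
        # Explicit formatting: placeholders are substituted.
        return self.template.format(**kwargs)

    def __str__(self):
        # Implicit stringification: placeholders survive untouched,
        # which is the "quietly rendering as a string" failure mode.
        return self.template

prompt = MiniTemplate("Write a blog post about {topic}.")
print(str(prompt))                 # Write a blog post about {topic}.
print(prompt.format(topic="KNN"))  # Write a blog post about KNN.
```

If the consumer only ever calls `str()` on whatever it receives, the raw `{topic}` placeholder reaches the LLM with no exception raised, which is exactly the silent failure described above.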
