🌐
GitHub
github.com › langchain-ai › react-agent
GitHub - langchain-ai/react-agent: LangGraph template for a simple ReAct agent · GitHub
Open the folder in LangGraph Studio. Add new tools: extend the agent's capabilities by adding new tools in tools.py. These can be any Python functions that perform specific tasks. Select a different model: we default to Anthropic's Claude 3 Sonnet.
Starred by 730 users
Forked by 687 users
Languages   Python 84.4% | Makefile 15.6%
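Per the snippet above, new tools in this template are plain Python functions added to tools.py. A minimal sketch of what such an addition might look like; the function name and behavior are illustrative, not taken from the repository:

```python
# Hypothetical addition to tools.py: any typed Python function can serve
# as a tool. The docstring matters, since agent frameworks typically
# surface it to the model as the tool's description.
def get_word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())
```

Keeping tools small and typed like this makes them easy for the agent framework to bind and describe to the model.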
🌐
GitHub
github.com › wassim249 › fastapi-langgraph-agent-production-ready-template
GitHub - wassim249/fastapi-langgraph-agent-production-ready-template: A production-ready FastAPI template for building AI agent applications with LangGraph integration. This template provides a robust foundation for building scalable, secure, and maintainable AI agent services. · GitHub
3 weeks ago -
app/
  api/v1/         # Route handlers
  core/
    langgraph/    # Agent graph + tools
    prompts/      # System prompt template
    cache.py      # Valkey/Redis + in-memory fallback
    config.py     # Settings
    middleware.py # Metrics, logging context, profiling
    limiter.py    # Rate limiting
  models/         # SQLModel ORM models
  schemas/        # Pydantic request/response schemas
  services/       # LLM, database, memory services
alembic/          # Database migrations
evals/            # LLM evaluation framework
Starred by 2.2K users
Forked by 526 users
Languages   Python 87.6% | Shell 8.0% | Makefile 3.4%
Discussions

[Project] 10+ prompt iterations to make my LangGraph agent follow ONE rule consistently
Links and Installation: GitHub repository (with complete working example): https://github.com/datagusto/agent-control-layer Install: pip install agent-control-layer More on reddit.com
🌐 r/LangChain
1
9
July 3, 2025
Getting messages from within a tool in LangGraph
This is the pattern you should be following: https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/#editing In a nutshell: you stop the graph when user input is required, add that user input to the state and resume running the graph where it left off. More on reddit.com
🌐 r/LangChain
11
3
October 25, 2024
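The stop/resume pattern described in that answer can be sketched framework-free. The following is a conceptual illustration of the idea (pause when user input is required, write the input into state, resume where execution left off), not the actual LangGraph interrupt API:

```python
# Conceptual human-in-the-loop sketch: the "graph" stops when user input
# is needed, the caller adds that input to the state, and a second call
# resumes from where the run left off.
def run_graph(state: dict) -> dict:
    while True:
        if state.get("needs_user_input") and "user_input" not in state:
            # Pause: hand control back to the caller until input arrives.
            return state
        if state.get("done"):
            return state
        # Stand-in for one node of work: request input once, then finish.
        if "user_input" in state:
            state["answer"] = f"Processed: {state['user_input']}"
            state["done"] = True
        else:
            state["needs_user_input"] = True

# First run pauses awaiting input...
paused = run_graph({})
# ...the caller adds the user's input to state and resumes.
paused["user_input"] = "approve"
finished = run_graph(paused)
```

In real LangGraph code the pause, state update, and resume are handled by the framework's checkpointing; the sketch only shows the control flow.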
Any good prompt management & versioning tools out there, that integrate nicely?
https://github.com/pezzolabs/pezzo What do you think about this one? More on reddit.com
🌐 r/LangChain
70
60
January 4, 2024
Multi-ReAct-agents workflow using LangGraph
Fastest thing to do would be to build your own implementation that doesn't call `.compile()` I think - here's an example: https://langchain-ai.github.io/langgraph/#example More on reddit.com
🌐 r/LangChain
5
2
July 22, 2024
🌐
GitHub
github.com › langchain-ai › data-enrichment
GitHub - langchain-ai/data-enrichment: LangGraph Studio template for creating an agent that does web research to generate or enrich structured data.
Open the folder in LangGraph Studio, and input topic and extraction_schema. Customize research targets: provide a custom JSON extraction_schema when calling the graph to gather different types of information.
Starred by 218 users
Forked by 57 users
Languages   Jupyter Notebook 96.7% | Python 3.0% | Makefile 0.3%
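The snippet above says a custom JSON extraction_schema can be passed when calling the graph. A hypothetical schema of that shape; the field names here are illustrative, not taken from the template:

```python
# Hypothetical extraction_schema for the data-enrichment graph; the
# property names are made up for illustration.
extraction_schema = {
    "type": "object",
    "properties": {
        "company_name": {"type": "string"},
        "founding_year": {"type": "integer"},
        "headquarters": {"type": "string"},
    },
    "required": ["company_name"],
}
```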
🌐
LangChain
blog.langchain.com › launching-langgraph-templates
Launching LangGraph Templates
September 19, 2024 - These template repositories address common use cases and are designed for easy configuration and deployment to LangGraph Cloud. The best way to use these is to download the newest version of LangGraph Studio, but you can also use them as standalone GitHub repos.
🌐
GitHub
github.com › langchain-ai › new-langgraph-project
GitHub - langchain-ai/new-langgraph-project · GitHub
For more information on getting started with LangGraph Server, see here. Define runtime context: Modify the Context class in the graph.py file to expose the arguments you want to configure per assistant. For example, in a chatbot application you may want to define a dynamic system prompt or LLM to use.
Starred by 249 users
Forked by 501 users
Languages   Python 51.6% | Makefile 48.4%
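The runtime-context idea described above (per-assistant configuration such as a dynamic system prompt or model choice) can be sketched as a plain dataclass. The field names are illustrative; the template's actual Context class in graph.py may differ:

```python
from dataclasses import dataclass

# Illustrative runtime context: values you might want to configure per
# assistant, such as a dynamic system prompt or the model to use.
@dataclass
class Context:
    system_prompt: str = "You are a helpful assistant."
    model: str = "anthropic/claude-sonnet"

def build_messages(ctx: Context, user_text: str) -> list[dict]:
    """A node might read the context like this to assemble a model call."""
    return [
        {"role": "system", "content": ctx.system_prompt},
        {"role": "user", "content": user_text},
    ]
```

Each assistant can then be launched with its own Context instance while sharing the same graph definition.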
🌐
Langchain
docs.langchain.com › oss › python › langchain › overview
LangChain overview - Docs by LangChain
# pip install -qU langchain "langchain[openai]"
from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="openai:gpt-5.4",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
)
print(result["messages"][-1].content_blocks)
🌐
Medium
becomingahacker.org › mastering-prompt-engineering-for-langchain-langgraph-and-ai-agent-applications-e26d85a55f13
Mastering Prompt Engineering for LangChain, LangGraph, and AI Agent Applications | by Omar Santos | Medium
June 15, 2025 - Imagine an incident response workflow ... alert. langgraph's conditional edges make this straightforward. The following is a conceptual example of a graph for triaging alerts. 🧑🏻‍💻NOTE: This example is available at this GitHub repository. # Branching Conditional Logic # Branching conditional logic allows you to include conditional logic in a prompt template...
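The branching idea in that article can be sketched with the kind of plain routing function a conditional edge would call on the current state. The severity thresholds and branch names below are made up for illustration and are not from the article's repository:

```python
# Conceptual routing function for alert triage: a conditional edge would
# call something like this and follow the branch name it returns.
def route_alert(state: dict) -> str:
    severity = state.get("severity", 0)
    if severity >= 8:
        return "page_oncall"    # critical: wake someone up
    if severity >= 4:
        return "open_ticket"    # moderate: track for business hours
    return "log_and_close"      # low: record and move on
```

In LangGraph terms, this function would be registered on a node via add_conditional_edges, with each returned string mapped to a downstream node.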
🌐
GitHub
github.com › langchain-ai › retrieval-agent-template
GitHub - langchain-ai/retrieval-agent-template · GitHub
Customize the response generation: You can modify the response_system_prompt to change how the agent formulates its responses.
Starred by 159 users
Forked by 49 users
Languages   Python 93.6% | Makefile 6.4%
🌐
GitHub
github.com › lpetralli › LangGraph-easy-template
GitHub - lpetralli/LangGraph-easy-template
GROQ_API_KEY=your_groq_api_key
OPENAI_API_KEY=your_openai_api_key
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=your_langsmith_api_key
LANGSMITH_PROJECT="LangGraph-easy-template"
Create the local vector store: use the test-RAG.ipynb notebook to generate the local knowledge base. Modify the prompt (optional): if you wish, you can modify the prompt in agent.py.
Author   lpetralli
🌐
GitHub
gist.github.com › shahshrey › 6705a3c4077ec243ae379caa0463530e
LangGraph Agent Generator Meta-Prompt - A comprehensive prompt for generating React-style and Workflow-style LangGraph agents with examples, templates, and setup instructions. · GitHub
LangGraph Agent Generator Meta-Prompt - A comprehensive prompt for generating React-style and Workflow-style LangGraph agents with examples, templates, and setup instructions. - prompt.py
🌐
GitHub
github.com › von-development › awesome-LangGraph
GitHub - von-development/awesome-LangGraph: An index of the LangChain + LangGraph ecosystem: concepts, projects, tools, templates, and guides for LLM & multi-agent apps. · GitHub
Add logging, retries, guardrails, human approval, rate limiting, prompt transforms, and other execution-time behavior. ... Run agent-generated code in isolated execution environments. Sandboxes provide safer boundaries for shell access, filesystem operations, and code execution. ... Persistence backends for LangGraph state.
Starred by 1.8K users
Forked by 195 users
Languages   JavaScript
🌐
GitHub
github.com › ryaneggz › langgraph-template
GitHub - enso-labs/orchestra: 🪶 AI Agent Orchestrator built on LangGraph powered by MCP & A2A
January 3, 2025 -
# Change directory
cd <project-root>/backend
# Generate virtualenv
uv venv
# Activate
source .venv/bin/activate
# Install
uv pip install -r requirements.txt -r requirements-dev.txt
# Run (select "no" when prompted)
bash scripts/dev.sh
Starred by 6 users
Forked by 2 users
Languages   TypeScript 62.2% | Python 34.3% | Shell 1.4% | HCL 1.0% | CSS 0.5% | JavaScript 0.3%
🌐
Reddit
reddit.com › r/langchain › [project] 10+ prompt iterations to make my langgraph agent follow one rule consistently
r/LangChain on Reddit: [Project] 10+ prompt iterations to make my LangGraph agent follow ONE rule consistently
July 3, 2025 -

Hey r/LangChain,

The problem with LangGraph agents in production

After 10+ prompt iterations, my LangGraph agent still behaves differently every time for the same task.

Ever experienced this with LangGraph agents?

  • Your agent calls a tool through LangGraph, but it doesn't work as expected: it returns fewer results than needed, or irrelevant items

  • Back to system prompt tweaking: "If the search returns less than three results, then...," "You MUST review all results that are relevant to the user's instruction," etc.

  • However, a slight change in one instruction can break logic for other scenarios. Endless prompt tweaking cycle.

  • LangGraph's routing works great for predetermined paths, but struggles when you need reactions based on actual tool output content

  • As a result, custom logic spreads everywhere in prompts and custom tools. No one knows where specific scenario logic lives.

Couldn't ship to production because behavior was unpredictable - same inputs, different outputs every time. Traditional LangGraph approaches like prompt tweaking and custom tool wrappers felt wrong.

What I built instead: Agent Control Layer

I created a library that eliminates prompt tweaking hell and makes LangGraph agent behavior predictable.

Here's how simple it is: Define a rule:

target_tool_name: "web_search"
trigger_pattern: "len(tool_output) < 3"
instruction: "Try different search terms - we need more results to work with"

Then, literally just add one line to your LangGraph agent:

# LangGraph agent
from agent_control_layer.langgraph import build_control_layer_tools
# Add Agent Control Layer tools to your existing toolset
TOOLS = TOOLS + build_control_layer_tools(State)

That's it. No more prompt tweaking, consistent behavior every time.
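One plausible way a trigger_pattern string like `len(tool_output) < 3` could be checked is by evaluating it with the tool output bound in a restricted namespace. This is a guess at the mechanism for illustration, not the agent-control-layer library's actual implementation:

```python
# Guess at how a rule trigger might be evaluated: the pattern string is
# evaluated with tool_output bound in a restricted namespace. This is NOT
# the library's real code, just a sketch of the idea.
def rule_fires(trigger_pattern: str, tool_output) -> bool:
    allowed = {"tool_output": tool_output, "len": len}
    return bool(eval(trigger_pattern, {"__builtins__": {}}, allowed))
```

When the trigger fires, the rule's instruction would be injected back to the agent, steering it without another round of prompt edits.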

The real benefits

Here's what actually changes:

  • Centralized logic: No more hunting through LangGraph prompts and custom tools to find where specific behaviors are defined

  • Version control friendly: YAML rules can be tracked, reviewed, and rolled back like any other code

  • Non-developer friendly: Team members can understand and modify agent behavior without touching LangGraph code

  • Audit trail: Clear logging of which rules fired and when, making LangGraph agent debugging much easier

Your thoughts?

What's your current approach to inconsistent LangGraph agent behavior?

Agent Control Layer vs prompt tweaking - which team are you on?

What's coming next

I'm working on a few updates based on early feedback:

  1. Performance benchmarks - Publishing detailed reports on how the library affects LangGraph agent accuracy, latency, and token consumption

  2. Natural language rules - Adding support for LLM-as-a-judge style evaluation, so you can write rules like "if the results don't seem relevant to the user's question" instead of strict Python conditions

  3. Auto-rule generation - Eventually, just tell the agent "hey, handle this scenario better" and it automatically creates the appropriate rule for you

What am I missing? Would love to hear your perspective on this approach.

🌐
GitHub
github.com › panaversity › langgraph-agents-template
GitHub - panaversity/langgraph-agents-template: Starter template to build multi agent systems
Starter template to build multi agent systems. Contribute to panaversity/langgraph-agents-template development by creating an account on GitHub.
Starred by 97 users
Forked by 21 users
Languages   Python 85.8% | Makefile 12.3% | Dockerfile 1.9%