Write a tool that calls the script and you are good to go. Check out agents and custom tools. Answer from BuzzLightr on reddit.com
🌐
LangChain
changelog.langchain.com › announcements › langchain-sandbox-run-untrusted-python-in-your-ai-agents
LangChain - Changelog | LangChain Sandbox: Run untrusted Python in
May 29, 2025 - LangChain Sandbox lets you safely run untrusted Python in your AI agents! Built on Pyodide (Python in WebAssembly), LangChain Sandbox lets you execute code...
🌐
Langchain
docs.langchain.com › oss › javascript › integrations › tools › pyinterpreter
Python interpreter integration - Docs by LangChain
This can be useful in combination with an LLM that can generate code to perform more powerful computations.

import { OpenAI } from "@langchain/openai";
import { PythonInterpreterTool } from "@langchain/community/experimental/tools/pyinterpreter";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromTemplate(
  `Generate python code that does {input}. Do not generate anything else.`
);
const model = new OpenAI({});
const interpreter = await PythonInterpreterTool.initialize({ indexUR
Discussions

python - How to run LangChains getting-started examples and getting output? - Stack Overflow
I'm starting to learn LangChain and stumbled upon their Getting Started section, because it doesn't work, and I'm curious whether I am the only person for whom the LangChain examples don't work. This is their tutor... More on stackoverflow.com
🌐 stackoverflow.com
python 3.x - load_tool does not recognize python_repl in langchain - Stack Overflow
You have access to a python REPL, which you can use to execute python code. If you get an error, debug your code and try again. Only use the output of your code to answer the question. You might know the answer without running any code, but you should still run the code to get the answer. If it does not seem like you can write code to answer the question, just return "I don't know" as the answer. """ base_prompt = hub.pull("langchain... More on stackoverflow.com
🌐 stackoverflow.com
How to deploy Langchain python script to production
Hot take: if you’ve never deployed a single python script into production before, you should start by learning basics of python and the classic libraries like flask, fastapi, etc before jumping into langchain More on reddit.com
🌐 r/LangChain
June 10, 2023
Best Practices for Python Coding and Tools?
That's a really good question, and one that all devs face at some point while writing code. Since you're not providing example code, I will assume you want whatever you're creating to stay closed source right now, so I'm going to answer on that premise.

The question of whether you should wrap it in langchain is answered by asking: are you extending langchain functionality? If the answer is no, or not quite, then don't wrap your code in langchain, and here's why. If you want maintainable, scalable, and robust code, you'll be forced to come back to whatever wrapper you created, which isn't part of langchain but was added on top of your langchain code. This becomes an issue if you lose your code, if you want to install the code on a new machine, or if langchain updates their code. Instead, it's better to make your code extensible: install langchain, call into an object, and then call your code within the same project; if you make it a package/library, you can call it and reuse it more dynamically.

I understand if my answer is a bit complex or abstract, but maybe you can pass the code, or a description of what it does, to ChatGPT and ask it the following:

  • Is my code compliant with PEP 8, PEP 20, PEP 257?
  • Is my code following the DRY (Don't Repeat Yourself) principle?
  • Is my code following the SOLID principles? These are the single responsibility principle, open/closed principle, Liskov substitution principle, interface segregation principle, and dependency inversion principle. I think your question is mostly related to SRP.
  • Is my code following the KISS (keep it simple, stupid) principle?
  • Is my code following the YAGNI (you aren't gonna need it) principle?

Code reviews: use a rubber ducky along with ChatGPT to review the code every so often, if reviewing it with someone else is out of the question.
Test-driven development: try to add as much testing as possible to each method, function, and class; once added, remove the tests that are not really needed. Finally, make sure your code has CI/CD.

Ask ChatGPT about all of these in case you have questions, so you get a better understanding. Don't just copy-paste to ChatGPT; for your specific question in this post, for example, I would ask ChatGPT something along the lines of: "Take a look at my code below. Do you think it follows the single responsibility principle, or do you think it's time to divide my code into more parts and/or make it into a package/library? Additionally, should I be wrapping it in a third-party library? CODE TO FOLLOW" More on reddit.com
🌐 r/LangChain
June 7, 2023
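The single-responsibility advice in the answer above can be illustrated with a tiny sketch (the function names here are hypothetical, not from the thread): one do-everything routine split into focused units, each with exactly one reason to change.

```python
def parse_numbers(text: str) -> list[int]:
    """Responsible only for parsing the input text."""
    return [int(tok) for tok in text.split() if tok.strip()]


def total(numbers: list[int]) -> int:
    """Responsible only for the computation."""
    return sum(numbers)


def report(text: str) -> str:
    """Composes the focused pieces instead of redoing their work."""
    return f"sum = {total(parse_numbers(text))}"
```

Each piece can now be tested and replaced independently, which is exactly what the answer means by maintainable and extensible code.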
🌐
Reddit
reddit.com › r/langchain › can langchain run an existing script in python that i wrote and give me the result ?
r/LangChain on Reddit: Can langchain run an existing script in Python that I wrote and give me the result ?
August 7, 2023 - Otherwise, if you have a chain, I'd suggest calling a JSON-reading function on the output of the conversation by prompting an answer template in JSON format (similar to AutoGPT), and making it a boolean value or a percentage-based activation. There's a good example in the multiple-agent conversations in the langchain examples repo (Python version), where a presidential debate simulation computes a probability percentage, before each agent talks, of the next best speaker according to the situation and history of the simulation.
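The JSON-template idea described in that answer can be sketched as follows: prompt the model to reply in JSON, then turn the reply into a boolean or percentage-based activation. The field names ("run_script", "confidence") are illustrative assumptions, not from the thread.

```python
import json


def should_run_script(llm_output: str, threshold: float = 0.5) -> bool:
    """Decide whether to run the script based on the model's JSON answer.

    Accepts either a direct boolean flag or a percentage-style score,
    mirroring the "boolean value or percentage-based activation" idea.
    """
    data = json.loads(llm_output)
    if isinstance(data.get("run_script"), bool):
        return data["run_script"]
    # Fall back to a percentage-style activation score.
    return float(data.get("confidence", 0.0)) >= threshold
```

A prompt template would then instruct the model to answer only with, e.g., `{"run_script": true}` or `{"confidence": 0.8}`.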
🌐
GitHub
github.com › langchain-ai › langchain › discussions › 18438
Langgraph - Is that possible to execute the Python code and see the result? · langchain-ai/langchain · Discussion #18438
Additionally, LangChain does provide a built-in Python REPL tool for executing Python code and viewing the results. The provided code defines a class PythonREPL that simulates a standalone Python REPL environment.
Author   langchain-ai
🌐
LangChain
blog.langchain.com › code-interpreter-api
Code Interpreter API
July 16, 2023 - The installation is straightforward: get your OpenAI API Key here and install the package using pip. You can use the API in your Python code: start a session, generate a response based on user input, then stop your session.
🌐
Medium
medium.com › data-science-in-your-pocket › building-code-execution-agents-in-generative-ai-using-langchain-04c933492213
Building Code Execution Agents in Generative AI using LangChain | by Mehul Gupta | Data Science in Your Pocket | Medium
October 3, 2024 - Sometimes, the LLM, even after notifying, will generate some suffix/prefix before the code like “```python”. Some logic is written to clean this up · Open a .py file and write the output code in that file.
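The cleanup step that snippet describes can be sketched like this (a guess at the article's logic, not its actual code): strip the triple-backtick fence the LLM wraps around generated code, then write the bare code to a .py file.

```python
import re

# Matches an optional "python" language tag after the opening fence,
# then lazily captures everything up to the closing fence.
FENCE = re.compile(r"`{3}(?:python)?\s*\n(.*?)`{3}", re.DOTALL)


def strip_code_fences(text: str) -> str:
    """Remove the code-fence prefix/suffix LLMs often add, returning bare code."""
    match = FENCE.search(text)
    return match.group(1).strip() if match else text.strip()


def save_generated_code(text: str, path: str) -> None:
    """Write the cleaned code to a .py file, as the article describes."""
    with open(path, "w") as f:
        f.write(strip_code_fences(text) + "\n")
```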
🌐
YouTube
youtube.com › watch
LangChain Sandbox: Run Untrusted Python Safely for AI Agents - YouTube
🛡️ Introducing LangChain Sandbox: run untrusted Python safely in your AI agents. Powered by Pyodide (Python in WebAssembly) for secure code execution: 🔒 Isola...
Published   May 21, 2025
🌐
LangSmith
smith.langchain.com › hub › langchain-ai › python-agent
langchain-ai/python-agent - LangSmith
Take agents from prototype to production. LangSmith gives you the tools to build, debug, evaluate, and ship reliable agents.
Top answer
1 of 2

I believe this code as printed in the book "Generative AI with LangChain" relies on an older version of langchain, langchain[docarray]==0.0.284 to be exact.

I suggest setting up a conda environment for the book as there seemed to be breaking changes.

If on the other hand you would want to use the newest LangChain version, this would work:

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType, Tool
from langchain.utilities import PythonREPL
from langchain_experimental.tools import PythonREPLTool

agent = initialize_agent(
    tools=[PythonREPLTool()],
    llm=llm,
)

agent.invoke("what is 2 + 2?")

Alternatively, wrap the REPL utility in a Tool yourself:

python_repl = PythonREPL()
python_repl = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=python_repl.run,
)

agent = initialize_agent(
    tools=[python_repl],
    llm=llm,
)

agent.invoke("what is 2 + 2?")
2 of 2

If you check the source code of the load_tools method, you can actually see that the allowed input names are defined inside the following dictionaries:

  • _LLM_TOOLS
  • _EXTRA_LLM_TOOLS
  • _EXTRA_OPTIONAL_TOOLS
  • DANGEROUS_TOOLS

It's correct that the python_repl name is not recognized. However, you can still use it as a tool, like the following:

import os

from langchain_openai import AzureChatOpenAI
from langchain_experimental.tools import PythonREPLTool
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

llm = AzureChatOpenAI(
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
    model_version=os.environ["AZURE_OPENAI_MODEL_VERSION"]
)

tools = [PythonREPLTool()]

instructions = """You are an agent designed to write and execute python code to answer questions.
You have access to a python REPL, which you can use to execute python code.
If you get an error, debug your code and try again.
Only use the output of your code to answer the question. 
You might know the answer without running any code, but you should still run the code to get the answer.
If it does not seem like you can write code to answer the question, just return "I don't know" as the answer.
"""

base_prompt = hub.pull("langchain-ai/react-agent-template")
prompt = base_prompt.partial(instructions=instructions)

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "What is the 10th fibonacci number?"})

Giving as output something like:

Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Python_REPL
Action Input:
```
def fibonacci(n):
    if n<=1:
        return n
    else:
        return fibonacci(n-1)+fibonacci(n-2)

print(fibonacci(9))
```
Python REPL can execute arbitrary code. Use with caution.
34
Do I need to use a tool? No
Final Answer: The 10th fibonacci number is 55.

Finished chain.
🌐
LangChain
python.langchain.com › api_reference › _modules › langchain_experimental › tools › python › tool.html
langchain_experimental.tools.python.tool — 🦜🔗 LangChain documentation
"""A tool for running python code in a REPL."""
import ast
import re
import sys
from contextlib import redirect_stdout
from io import StringIO
from typing import Any, Dict, Optional, Type

from langchain_core.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from langchain_core.runnables.config import run_in_executor
from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field, model_validator

from langchain_experimental.utilities.python import PythonREPL


def _get_default_python_repl() -> PythonREPL:
    return PythonREPL(_globals=globals(), _locals=None)
🌐
Better Programming
betterprogramming.pub › building-a-custom-langchain-tool-for-generating-executing-code-fa20a3c89cfd
Build a Custom Langchain Tool for Generating and Executing Code | by Paolo Rechia | Better Programming
May 18, 2023 - This helps guide the LLM into actually defining functions and defining the dependencies. Let’s see another example, which I copied and pasted from one of my older langchain agents (hence the weird instructions). ... Your job is to plot an example chart using matplotlib. Create your own random data. Run this code only when you're finished.
🌐
GitHub
github.com › langchain-ai › langchain-sandbox
GitHub - langchain-ai/langchain-sandbox: Safely run untrusted Python code using Pyodide and Deno · GitHub
January 14, 2026 - It leverages Pyodide (Python compiled to WebAssembly) to run Python code in a sandboxed environment. 🔒 Security - Isolated execution environment with configurable permissions · 💻 Local Execution - No remote execution or Docker containers ...
Starred by 238 users
Forked by 27 users
Languages   Python 67.9% | TypeScript 28.2% | Makefile 3.9%
🌐
GitHub
github.com › langchain-ai › langchain › discussions › 22841
Extracting python code from PythonREPLTool · langchain-ai/langchain · Discussion #22841
June 13, 2024 - You have access to a python REPL, which you can use to execute python code. If you get an error, debug your code and try again. Only use the output of your code to answer the question.
Author   langchain-ai