I believe this code, as printed in the book "Generative AI with LangChain", relies on an older version of LangChain (langchain[docarray]==0.0.284, to be exact).

I suggest setting up a conda environment for the book, since there have been breaking changes.

If, on the other hand, you want to use the newest LangChain version, this should work:

from langchain.agents import initialize_agent
from langchain.agents import AgentType, Tool
from langchain_experimental.utilities import PythonREPL
from langchain_experimental.tools import PythonREPLTool

agent = initialize_agent(
    tools=[PythonREPLTool()],
    llm=llm,
)

agent.invoke("what is 2 + 2?")

Or, wrapping PythonREPL in a Tool yourself:

python_repl = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=PythonREPL().run,
)

agent = initialize_agent(
    tools=[python_repl],
    llm=llm,
)

agent.invoke("what is 2 + 2?")
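Under the hood, PythonREPL.run does little more than exec the command string and capture stdout; that captured text is what the tool hands back to the agent. A stdlib-only sketch of that mechanism (the run_python helper here is hypothetical, not the LangChain class):

```python
import contextlib
import io


def run_python(command: str) -> str:
    """Execute a Python command and return whatever it printed,
    mimicking what a REPL tool hands back to the agent."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        # exec of arbitrary strings is unsafe; sandbox it in real use
        exec(command, {})
    return buf.getvalue()


print(run_python("print(2 + 2)").strip())
```

This is also why the tool's description tells the model to use `print(...)`: only printed output makes it into the captured string.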
Answer from AI-Guru on Stack Overflow
Another answer:

If you check the source code of the load_tools method, you can see that the allowed tool names are defined in the following dictionaries:

  • _LLM_TOOLS
  • _EXTRA_LLM_TOOLS
  • _EXTRA_OPTIONAL_TOOLS
  • DANGEROUS_TOOLS

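The lookup those dictionaries drive is essentially a name-to-factory registry; a simplified sketch of the pattern (hypothetical entries, not LangChain's actual source):

```python
# Illustrative registry pattern behind load_tools (not the real code).
_LLM_TOOLS = {"llm-math": lambda llm: f"MathTool({llm})"}
_EXTRA_OPTIONAL_TOOLS = {"requests_get": lambda: "RequestsGetTool()"}


def load_tools(tool_names, llm=None):
    tools = []
    for name in tool_names:
        if name in _LLM_TOOLS:
            tools.append(_LLM_TOOLS[name](llm))
        elif name in _EXTRA_OPTIONAL_TOOLS:
            tools.append(_EXTRA_OPTIONAL_TOOLS[name]())
        else:
            # A name missing from every dictionary fails here, which is
            # why passing "python_repl" to load_tools raises an error.
            raise ValueError(f"Got unknown tool {name}")
    return tools


print(load_tools(["llm-math"], llm="my_llm"))
```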
It's correct that the python_repl name is not recognized. However, you can still use it as a tool, like the following:

import os

from langchain_openai import AzureChatOpenAI
from langchain_experimental.tools import PythonREPLTool
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

llm = AzureChatOpenAI(
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
    model_version=os.environ["AZURE_OPENAI_MODEL_VERSION"]
)

tools = [PythonREPLTool()]

instructions = """You are an agent designed to write and execute python code to answer questions.
You have access to a python REPL, which you can use to execute python code.
If you get an error, debug your code and try again.
Only use the output of your code to answer the question. 
You might know the answer without running any code, but you should still run the code to get the answer.
If it does not seem like you can write code to answer the question, just return "I don't know" as the answer.
"""

base_prompt = hub.pull("langchain-ai/react-agent-template")
prompt = base_prompt.partial(instructions=instructions)

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "What is the 10th fibonacci number?"})

Giving as output something like:

Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Python_REPL
Action Input:
```
def fibonacci(n):
    if n<=1:
        return n
    else:
        return fibonacci(n-1)+fibonacci(n-2)

print(fibonacci(9))
```
Python REPL can execute arbitrary code. Use with caution.
34
Do I need to use a tool? No
Final Answer: The 10th fibonacci number is 55.

Finished chain.
GitHub
github.com โ€บ langchain-ai โ€บ langchain โ€บ discussions โ€บ 22841
Extracting python code from PythonREPLTool ยท langchain-ai/langchain ยท Discussion #22841
June 13, 2024 - You have access to a python REPL, which you can use to execute python code. If you get an error, debug your code and try again. Only use the output of your code to answer the question. You might know the answer without running any code, but you should still run the code to get the answer. If it does not seem like you can write code to answer the question, just return "I don't know" as the answer. """ base_prompt = hub.pull("langchain-ai/openai-functions-template") prompt = base_prompt.partial(instructions=instructions) agent = create_openai_functions_agent(ChatOpenAI(temperature=0), tools, prompt) agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, metadata = ) response = agent_executor.invoke({"input": query}) return st.write(response) Based on the code above, i am using PythonREPLTool to generate a python code that will help answer the user query.
Author ย  langchain-ai
Top answer:

You can get the agent_executor to list all the packages installed in the environment it is running in:

agent_executor.run("list all the installed packages in the environment we're running in.")

Result:

Please wait a moment while I gather a list of all available modules...

                    antigravity         imaplib             seaborn
PIL                 anyio               imghdr              secrets
__future__          argparse            imp                 select
__hello__           array               importlib           selectors
__phello__          array_api_compat    inspect             session_info
_abc                ast                 io                  setuptools
_aix_support        asynchat            ipaddress           shelve
_ast                asyncio             itertools           shlex
_asyncio            asyncio_new         joblib              shutil
_bisect             asyncore            json                signal
_blake2             atexit              jsonpatch           site
_bootsubprocess     attr                jsonpointer         sitecustomize
_bz2                attrs               junk                six
_codecs             audioop             keyword             sklearn
_codecs_cn          backend_interagg    kiwisolver          smtpd
_codecs_hk          base64              langchain           smtplib
_codecs_iso2022     bdb                 langchain_experimental sndhdr
_codecs_jp          binascii            langsmith           sniffio
_codecs_kr          bisect              lib2to3             socket
_codecs_tw          bs4                 linecache           socketserver
_collections        builtins            llvmlite            soupsieve
_collections_abc    bz2                 locale              sqlalchemy
_compat_pickle      cProfile            logging             sqlite3
_compression        calendar            lxml                sre_compile
_contextvars        certifi             lzma                sre_constants
_csv                cgi                 mailbox             sre_parse
_ctypes             cgitb               mailcap             ssl
_ctypes_test        charset_normalizer  marshal             stat
_datetime           chunk               marshmallow         statistics
_decimal            cmath               math                statsmodels
_distutils_hack     cmd                 matplotlib          stdlib_list
_elementtree        code                memoisation_demo    string
_functools          codecs              mimetypes           stringprep
_hashlib            codeop              mmap                struct
_heapq              collections         modulefinder        subprocess
_imp                colorama            mpl_toolkits        sunau
_io                 colorsys            msilib              symtable
_json               compileall          msvcrt              sys
_locale             concurrent          multidict           sysconfig
_lsprof             configparser        multiprocessing     tabnanny
_lzma               contextlib          mypy                tarfile
_markupbase         contextvars         mypy_extensions     telnetlib
_md5                contourpy           mypyc               tempfile
_msi                copy                natsort             tenacity
_multibytecodec     copyreg             netrc               test
_multiprocessing    crypt               networkx            textwrap
_opcode             csv                 nntplib             this
_operator           ctypes              nt                  threading
_osx_support        curses              ntpath              threadpoolctl
_overlapped         cycler              nturl2path          time
_pickle             dataclasses         numba               timeit
_py_abc             dataclasses_json    numbers             tkinter
_pydecimal          datalore            numpy               token
_pyio               datetime            opcode              tokenize
_queue              dateutil            openai              tomllib
_random             dbm                 operator            tqdm
_sha1               decimal             optparse            trace
_sha256             difflib             os                  traceback
_sha3               dis                 packaging           tracemalloc
_sha512             distro              pandas              tty
_signal             distutils           pathlib             turtle
_sitebuiltins       doctest             patsy               turtledemo
_socket             dotenv              pdb                 types
_sqlite3            email               pickle              typing
_sre                encodings           pickletools         typing_extensions
_ssl                ensurepip           pip                 typing_inspect
_stat               enum                pipes               tzdata
_statistics         errno               pkg_resources       umap
_string             extendableenum      pkgutil             unicodedata
_strptime           faulthandler        platform            unittest
_struct             filecmp             plistlib            urllib
_symtable           fileinput           poplib              urllib3
_testbuffer         fnmatch             posixpath           uu
_testcapi           fontTools           pprint              uuid
_testconsole        fractions           profile             venv
_testimportmultiple frozenlist          pstats              warnings
_testinternalcapi   ftplib              pty                 wave
_testmultiphase     functools           py_compile          weakref
_thread             gc                  pyclbr              webbrowser
_threading_local    genericpath         pydantic            wheel
_tkinter            getopt              pydantic_core       winreg
_tokenize           getpass             pydoc               winsound
_tracemalloc        gettext             pydoc_data          working_thread
_typing             glob                pyexpat             wsgiref
_uuid               graphlib            pylab               xdrlib
_virtualenv         graphviz            pynndescent         xml
_warnings           greenlet            pyparsing           xml_stuff
_weakref            gzip                pytz                xmlrpc
_weakrefset         h11                 queue               xxsubtype
_winapi             h5py                quopri              yaml
_xxsubinterpreters  hashlib             random              yarl
_yaml               heapq               re                  zipapp
_zoneinfo           hmac                reprlib             zipfile
abc                 html                requests            zipimport
aifc                http                rlcompleter         zlib
aiohttp             httpcore            runpy               zoneinfo
aiosignal           httpx               scanpy              
anndata             idlelib             sched               
annotated_types     idna                scipy               

Enter any module name to get more help.  Or, type "modules spam" to search
for modules whose name or summary contain the string "spam".

It appears that scanpy is among them, so you should be able to use it.
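Rather than eyeballing that module dump, availability can be checked programmatically with the standard library; a small sketch (the module_available helper is hypothetical):

```python
import importlib.util


def module_available(name: str) -> bool:
    """True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None


print(module_available("json"))                  # stdlib module, present
print(module_available("surely_not_installed"))  # unknown module
```

You could have the agent run exactly this kind of check before committing to a library.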

agent_executor.run("import the scanpy package and list available functions and classes.")

Result:

Help on package scanpy:

NAME
    scanpy - Single-Cell Analysis in Python.

PACKAGE CONTENTS
    __main__
    _compat
    _settings
    _utils (package)
    _version
    cli
    datasets (package)
    experimental (package)
    external (package)
    get (package)
    logging
    metrics (package)
    neighbors (package)
    plotting (package)
    preprocessing (package)
    queries (package)
    readwrite
    sim_models (package)
    testing (package)
    tools (package)

SUBMODULES
    pl
    pp
    tl

DATA
    settings = <scanpy._settings.ScanpyConfig object>

VERSION
    1.9.6

FILE
    c:\dev\env\sandbox_311\lib\site-packages\scanpy\__init__.py

Since I got the same error you did initially, I tried this:

agent_executor.run("first import 'scanpy' and list its contents, then use 'scanpy' to plot a umap using the pbmc dataset")

And that actually works. The AI is a bit dumb, so you have to hold its hand like you would a slightly under-achieving graduate with a lot of potential.

However, this just gets you to the next roadblock: to generate the UMAP it wants to use igraph, and that's not actually available in the environment. So you'll have to figure out which available libraries could be used instead. Or perhaps tell the AI to do that for you as well...

Another answer:

You have to tell GPT explicitly to use the tool, or it won't use it. GPT's own environment doesn't have this library.

Just like this:

from langchain.globals import set_debug

set_debug(True)  # set_debug is a global toggle, not an initialize_agent argument

agent_executor = initialize_agent(
    tools=[PythonREPLTool()],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
)

st = agent_executor.invoke({"input": "I need to use Scanpy to read some mock data and provide me with the results. Please execute this using the Python REPL tool."})

The whole function will execute on your own computer, not on OpenAI's servers.
