Answer from AI-Guru on Stack Overflow:
I believe this code, as printed in the book "Generative AI with LangChain", relies on an older version of LangChain, langchain[docarray]==0.0.284 to be exact.
I suggest setting up a conda environment for the book, as there seem to be breaking changes.
If, on the other hand, you want to use the newest LangChain version, this would work:
from langchain.agents import initialize_agent
from langchain.agents import AgentType, Tool
from langchain_experimental.utilities import PythonREPL
from langchain_experimental.tools import PythonREPLTool

agent = initialize_agent(
    tools=[PythonREPLTool()],
    llm=llm,
)
agent.invoke("what is 2 + 2?")

Alternatively, wrap the REPL in a Tool yourself:

python_repl = PythonREPL()
python_repl_tool = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=python_repl.run,
)
agent = initialize_agent(
    tools=[python_repl_tool],
    llm=llm,
)
agent.invoke("what is 2 + 2?")
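For intuition, the REPL tool boils down to executing a code string and capturing whatever it prints. A minimal pure-Python sketch of that mechanism (illustrative only; langchain's actual PythonREPL adds input sanitization and its own globals handling):

```python
import io
import contextlib

# Minimal sketch of what a Python-REPL tool does under the hood:
# run a code string with exec() and capture anything it prints.
def run_python(command: str) -> str:
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(command, {})
    return buffer.getvalue()

print(run_python("print(2 + 2)"))  # prints "4"
```

This is also why the tool description tells the model to `print(...)` values: only what reaches stdout comes back as the observation.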
If you check the source code of the load_tools method, you can see that the allowed tool names are defined in the following dictionaries:
- _LLM_TOOLS
- _EXTRA_LLM_TOOLS
- _EXTRA_OPTIONAL_TOOLS
- DANGEROUS_TOOLS
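For intuition, the check load_tools performs is essentially a lookup of the requested name in those registries; here is a pure-Python sketch with made-up registry contents (not the real langchain dictionaries):

```python
# Pure-Python sketch of the registry lookup load_tools performs.
# The dictionary names mirror the ones listed above, but the contents
# here are invented for illustration.
_LLM_TOOLS = {"llm-math": "math tool"}
_EXTRA_LLM_TOOLS = {"news-api": "news tool"}
_EXTRA_OPTIONAL_TOOLS = {"requests": "requests tool"}

def resolve_tool(name: str) -> str:
    for registry in (_LLM_TOOLS, _EXTRA_LLM_TOOLS, _EXTRA_OPTIONAL_TOOLS):
        if name in registry:
            return registry[name]
    raise ValueError(f"Got unknown tool {name}")

print(resolve_tool("llm-math"))   # found in _LLM_TOOLS
# resolve_tool("python_repl")     # would raise ValueError, as in the question
```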
It's correct that the python_repl name is not recognized there. However, you can still use it as a tool, like the following:
import os
from langchain_openai import AzureChatOpenAI
from langchain_experimental.tools import PythonREPLTool
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
llm = AzureChatOpenAI(
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
    model_version=os.environ["AZURE_OPENAI_MODEL_VERSION"]
)
tools = [PythonREPLTool()]
instructions = """You are an agent designed to write and execute python code to answer questions.
You have access to a python REPL, which you can use to execute python code.
If you get an error, debug your code and try again.
Only use the output of your code to answer the question.
You might know the answer without running any code, but you should still run the code to get the answer.
If it does not seem like you can write code to answer the question, just return "I don't know" as the answer.
"""
base_prompt = hub.pull("langchain-ai/react-agent-template")
prompt = base_prompt.partial(instructions=instructions)
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "What is the 10th fibonacci number?"})
Giving as output something like:
Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Python_REPL
Action Input:
```
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)
print(fibonacci(9))
```
Python REPL can execute arbitrary code. Use with caution.
34
Do I need to use a tool? No
Final Answer: The 10th fibonacci number is 55.
Finished chain.
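Under the hood, the ReAct agent turns the model's text into a tool call by reading the Action / Action Input lines of a trace like the one above. A rough sketch of that parsing step (langchain's real output parser handles more edge cases, such as code fences and the Final Answer branch):

```python
import re

# Rough sketch of ReAct output parsing: take the tool name from the
# "Action:" line and everything after "Action Input:" as the tool input.
def parse_react(trace: str):
    action = re.search(r"Action:\s*(.+)", trace).group(1).strip()
    action_input = trace.split("Action Input:", 1)[1].strip()
    return action, action_input

trace = (
    "Thought: Do I need to use a tool? Yes\n"
    "Action: Python_REPL\n"
    "Action Input:\n"
    "print(2 + 2)"
)
print(parse_react(trace))  # ('Python_REPL', 'print(2 + 2)')
```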
You can get the agent_executor to list all the packages installed in the environment it is running in:
agent_executor.run("list all the installed packages in the environment we're running in.")
Result:
Please wait a moment while I gather a list of all available modules...
antigravity imaplib seaborn
PIL anyio imghdr secrets
__future__ argparse imp select
__hello__ array importlib selectors
__phello__ array_api_compat inspect session_info
_abc ast io setuptools
_aix_support asynchat ipaddress shelve
_ast asyncio itertools shlex
_asyncio asyncio_new joblib shutil
_bisect asyncore json signal
_blake2 atexit jsonpatch site
_bootsubprocess attr jsonpointer sitecustomize
_bz2 attrs junk six
_codecs audioop keyword sklearn
_codecs_cn backend_interagg kiwisolver smtpd
_codecs_hk base64 langchain smtplib
_codecs_iso2022 bdb langchain_experimental sndhdr
_codecs_jp binascii langsmith sniffio
_codecs_kr bisect lib2to3 socket
_codecs_tw bs4 linecache socketserver
_collections builtins llvmlite soupsieve
_collections_abc bz2 locale sqlalchemy
_compat_pickle cProfile logging sqlite3
_compression calendar lxml sre_compile
_contextvars certifi lzma sre_constants
_csv cgi mailbox sre_parse
_ctypes cgitb mailcap ssl
_ctypes_test charset_normalizer marshal stat
_datetime chunk marshmallow statistics
_decimal cmath math statsmodels
_distutils_hack cmd matplotlib stdlib_list
_elementtree code memoisation_demo string
_functools codecs mimetypes stringprep
_hashlib codeop mmap struct
_heapq collections modulefinder subprocess
_imp colorama mpl_toolkits sunau
_io colorsys msilib symtable
_json compileall msvcrt sys
_locale concurrent multidict sysconfig
_lsprof configparser multiprocessing tabnanny
_lzma contextlib mypy tarfile
_markupbase contextvars mypy_extensions telnetlib
_md5 contourpy mypyc tempfile
_msi copy natsort tenacity
_multibytecodec copyreg netrc test
_multiprocessing crypt networkx textwrap
_opcode csv nntplib this
_operator ctypes nt threading
_osx_support curses ntpath threadpoolctl
_overlapped cycler nturl2path time
_pickle dataclasses numba timeit
_py_abc dataclasses_json numbers tkinter
_pydecimal datalore numpy token
_pyio datetime opcode tokenize
_queue dateutil openai tomllib
_random dbm operator tqdm
_sha1 decimal optparse trace
_sha256 difflib os traceback
_sha3 dis packaging tracemalloc
_sha512 distro pandas tty
_signal distutils pathlib turtle
_sitebuiltins doctest patsy turtledemo
_socket dotenv pdb types
_sqlite3 email pickle typing
_sre encodings pickletools typing_extensions
_ssl ensurepip pip typing_inspect
_stat enum pipes tzdata
_statistics errno pkg_resources umap
_string extendableenum pkgutil unicodedata
_strptime faulthandler platform unittest
_struct filecmp plistlib urllib
_symtable fileinput poplib urllib3
_testbuffer fnmatch posixpath uu
_testcapi fontTools pprint uuid
_testconsole fractions profile venv
_testimportmultiple frozenlist pstats warnings
_testinternalcapi ftplib pty wave
_testmultiphase functools py_compile weakref
_thread gc pyclbr webbrowser
_threading_local genericpath pydantic wheel
_tkinter getopt pydantic_core winreg
_tokenize getpass pydoc winsound
_tracemalloc gettext pydoc_data working_thread
_typing glob pyexpat wsgiref
_uuid graphlib pylab xdrlib
_virtualenv graphviz pynndescent xml
_warnings greenlet pyparsing xml_stuff
_weakref gzip pytz xmlrpc
_weakrefset h11 queue xxsubtype
_winapi h5py quopri yaml
_xxsubinterpreters hashlib random yarl
_yaml heapq re zipapp
_zoneinfo hmac reprlib zipfile
abc html requests zipimport
aifc http rlcompleter zlib
aiohttp httpcore runpy zoneinfo
aiosignal httpx scanpy
anndata idlelib sched
annotated_types idna scipy
Enter any module name to get more help. Or, type "modules spam" to search
for modules whose name or summary contain the string "spam".
It appears that scanpy is among them, so you should be able to use it.
agent_executor.run("import the scanpy package and list available functions and classes.")
Result:
Help on package scanpy:
NAME
scanpy - Single-Cell Analysis in Python.
PACKAGE CONTENTS
__main__
_compat
_settings
_utils (package)
_version
cli
datasets (package)
experimental (package)
external (package)
get (package)
logging
metrics (package)
neighbors (package)
plotting (package)
preprocessing (package)
queries (package)
readwrite
sim_models (package)
testing (package)
tools (package)
SUBMODULES
pl
pp
tl
DATA
settings = <scanpy._settings.ScanpyConfig object>
VERSION
1.9.6
FILE
c:\dev\env\sandbox_311\lib\site-packages\scanpy\__init__.py
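The listing above is just Python's built-in documentation for the package; the same kind of summary can be produced for any importable module. In this sketch, json stands in for scanpy, since scanpy may not be installed where you run it:

```python
import pydoc
import json

# Capture help()-style documentation for a module as a string.
# json stands in for scanpy here, which may not be installed locally.
text = pydoc.render_doc(json)
print(text.splitlines()[0])  # e.g. "Python Library Documentation: module json"
```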
Since I got the same error you did initially, I tried this:
agent_executor.run("first import 'scanpy' and list its contents, then use 'scanpy' to plot a umap using the pbmc dataset")
And that actually works. The AI is a bit dumb, so you have to hold its hand like you would a slightly under-achieving graduate with a lot of potential.
However, this just gets you to the next roadblock: to generate the UMAP, it has to / wants to use igraph and that's actually not available in the environment. So, you'll have to figure out what available libraries could be used for this. Or perhaps tell the AI to do that for you as well...
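Rather than asking the model which libraries are present, you can check importability yourself with the standard library (plain Python, nothing langchain-specific):

```python
import importlib.util

# True if a package could be imported in the current environment,
# without actually importing it.
def is_available(name: str) -> bool:
    return importlib.util.find_spec(name) is not None

print(is_available("json"))    # stdlib, so True
print(is_available("igraph"))  # False unless igraph is installed
```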
You have to tell GPT explicitly to use the tool, or it won't use it: the model's own hosted environment does not have this library, so the code must run through the local REPL tool. Just like this:

from langchain.agents import initialize_agent, AgentType
from langchain_experimental.tools import PythonREPLTool
from langchain.globals import set_debug

set_debug(True)  # set_debug is a function, not an initialize_agent argument

agent_executor = initialize_agent(
    tools=[PythonREPLTool()],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,  # assumes a memory object defined elsewhere
)
st = agent_executor.invoke({"input": "I need to use Scanpy to read some mock data and provide me with the results. Please execute this using the Python REPL tool."})

The whole function will execute on your own computer, not on OpenAI's side.