Force LangChain agent to use a tool - Stack Overflow
[Project] Building Multi task AI agent with LangChain and using Aim to trace and visualize the executions
Very interesting project! Would it be fair to say that this effectively adds observability to LLM chains?
Tool Juggler: A Customizable AI Assistant Tool Manager Built on Langchain 🛠️
Hey AI enthusiasts! 👋
I'm excited to share a little project I've been working on called Tool Juggler. As someone who couldn't wait to dive into ChatGPT plugins, I decided to create my own solution for creating custom tools for AI assistants and adding them on-the-fly.
Tool Juggler is built on top of the Langchain library, and all custom tools are instances of the langchain.agent.Tool class. The platform aims to provide an easy way to create, upload, and manage these tools, giving you the power to extend your AI assistant's capabilities with ease.
What you'll need:
- OpenAI API key (designed for GPT-4 models, but GPT-3.5 works too)
- Docker installed on your computer
Tool Juggler is still in its alpha stage, so it's not without its rough edges. However, I thought it would be great to share it to see what you think.
I'm eager to hear your feedback! Your opinions will help me decide whether to continue developing Tool Juggler and in which direction to take the project. So, please share your thoughts and let me know if you find it useful or interesting!
Creating tools is simple, and the Tool Uploading Documentation provides a step-by-step guide. Also, you can easily auto-generate PDF tools by just dragging and dropping a PDF into the toolbox!
For a more in-depth look at Tool Juggler and its roadmap, check out the GitHub repo.
Thank you for your time, and I'm looking forward to hearing your thoughts and feedback! 🚀
Improving GPT-3.5 Langchain Agents
Agreed. After playing around with LangChain for a different purpose, I also found that having different models, some with memory and some without, increased performance on my goal, which was more human-like responses. In general, as a rule, GPT 3.5 is an idiot. GPT 4 is better but so, so, so much slower that it doesn't pay to employ it in anything but the most dire of scenarios. So I do all the preprocessing in 3.5 and then generation of final output in 4.
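The preprocess-with-3.5, finalize-with-4 split described above can be sketched like this; `call_model` is a hypothetical wrapper around whatever chat client you use, injected as a parameter so the sketch stays self-contained:

```python
# Sketch of the two-model split: a fast, cheap model does the preprocessing,
# and the stronger (slower) model only writes the final, human-like output.
# call_model(model_name, prompt) is a hypothetical wrapper around your client.
def answer(question: str, call_model) -> str:
    # Step 1: cheap model extracts/structures the relevant information.
    facts = call_model("gpt-3.5-turbo", f"Extract the key facts from: {question}")
    # Step 2: stronger model turns the preprocessed facts into the final reply.
    return call_model("gpt-4", f"Write a human-like reply using these facts:\n{facts}")
```

Passing `call_model` in rather than hard-wiring a client keeps the routing logic easy to test and swap.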
What language(s) are you working with? You seem like the sort of person whose work I'd be interested in following.
My use case is this:
I will have a well-formatted requirement as the initial prompt. Based on the content, I want further calls to the LLM to be made to handle smaller pieces of logic, e.g. a tool with description="Useful when dealing with xyz" would be the xyz tool, which takes the relevant information and then does more LLM work.
I feel like I've been through all of the readthedocs and can't piece together how to achieve it.
Any ideas or pointers?
When running an agent query, you can explicitly mention the custom tool in the prompt:
agent("Using Custom Tool, please calculate the result of 5")
Also, in your _run() function, you can add custom logging to track when the custom tool is being called.
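The logging suggestion above can be sketched like this. This is a minimal stand-in, assuming a LangChain-style custom tool: in LangChain proper you would subclass BaseTool, but the logging pattern inside _run() is the same, and the tool name/description here are illustrative.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("custom_tool")

class CustomTool:
    # Illustrative name/description; in LangChain these would be BaseTool fields.
    name = "Custom Tool"
    description = "Useful when you need to calculate the result of a number."

    def _run(self, query: str) -> str:
        # Log every invocation so you can confirm the agent actually chose this tool.
        log.info("Custom Tool called with query=%r", query)
        return f"result of {query}"

print(CustomTool()._run("5"))
```

Checking the log output is the quickest way to tell whether the agent selected your tool or answered from the model alone.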
Most models support tool_choice="any", including:
- OpenAI
- MistralAI
- FireworksAI
- Groq
always_call_tool_llm = llm.bind_tools([add, multiply], tool_choice="any")
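As a rough sketch of what this forcing flag amounts to at the API level: for OpenAI-style chat completions, forcing the model to call some tool is spelled tool_choice="required" (LangChain's "any" roughly corresponds to it). The payload-building helper below is illustrative, not any particular client library:

```python
# Sketch of the request payload that forcing tool use produces for an
# OpenAI-style chat completions API. The build_request helper and model
# name are hypothetical; only the tool_choice field is the point here.
def build_request(messages, tools, force_tools=False):
    payload = {
        "model": "gpt-4o",  # illustrative model name
        "messages": messages,
        "tools": tools,
    }
    if force_tools:
        # "required" = the model must call at least one of the given tools.
        payload["tool_choice"] = "required"
    return payload

req = build_request(
    messages=[{"role": "user", "content": "add 2 and 3"}],
    tools=[{"type": "function", "function": {"name": "add"}}],
    force_tools=True,
)
```

Without the flag, the field is simply omitted and the model is free to answer in plain text.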
UPDATE
The solution mentioned above is quite brute-force. I found that it pays off to improve the tool description to make calls more consistent.
A good extra description that has not failed me so far is:
"Use tool when phrases like 'What's new', 'Tell me the news' are used."
You can extend it with more example phrases. Tested with Groq, Mistral, and OpenAI.
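To make the idea concrete, here is what such an enriched description could look like inside a tool spec in the OpenAI function-calling format; the tool name, parameters, and phrasing are all hypothetical:

```python
# Hypothetical tool spec (OpenAI function-calling format) whose description
# embeds example trigger phrases, as suggested above. The model reads this
# description when deciding which tool to call.
news_tool_spec = {
    "type": "function",
    "function": {
        "name": "get_news",  # illustrative name
        "description": (
            "Fetches the latest headlines. Use this tool when phrases like "
            "'What's new' or 'Tell me the news' are used."
        ),
        "parameters": {
            "type": "object",
            "properties": {"topic": {"type": "string"}},
            "required": [],
        },
    },
}
```

The same description string would go into the description field of a LangChain Tool; the spec shape here just shows where the model ultimately sees it.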