Langchain
reference.langchain.com › javascript › langchain-langgraph › index › LangGraphRunnableConfig
LangGraphRunnableConfig | @langchain/langgraph | LangChain Reference
interface LangGraphRunnableConfig extends RunnableConfig<ContextType>, Partial<Runtime<ContextType, unknown, unknown>> · Properties: run_name, max_concurrency, recursion_limit, run_id
LangChain
langchain-ai.github.io › langgraphjs › reference › interfaces › langgraph.LangGraphRunnableConfig.html
LangGraphRunnableConfig | LangGraph.js API Reference
interface LangGraphRunnableConfig<ContextType extends Record<string, any> = Record<string, any>> {
  callbacks?: Callbacks;
  configurable?: ContextType;
  context?: ContextType;
  interrupt?: (value: unknown) => unknown;
  maxConcurrency?: number;
  metadata?: Record<string, unknown>;
  recursionLimit?: number;
  runId?: string;
  runName?: string;
  signal?: AbortSignal;
  store?: BaseStore;
  tags?: string[];
  timeout?: number;
  writer?: (chunk: unknown) => void;
}
Discussions

Langgraph RunnableConfig is not getting passed to callable
Checked other resources This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/). I added a clear and detailed title that summarizes the iss...
🌐 github.com
July 29, 2025
Passing Config to LangGraph Tool Discrepancies
Same here, commenting to get notifications
🌐 r/LangChain
August 15, 2024
How to view RunnableConfig state in LangGraph traces?
I invoke my workflow like this: ... view the following in the trace: LangGraph node executions Tool calls and their inputs/outputs Overall agent flow However I wanted to know if there’s any way to get visibility into the RunnableConfig object that’s being passed through ...
🌐 forum.langchain.com
October 22, 2025
Issue with from langchain_core.runnables import RunnableConfig
The code run AS IS to reproduce ... checkpoint:langgraph/checkpoint/memory/__init__.py import logging import os import pickle import random import shutil from collections import defaultdict from collections.abc import AsyncIterator, Iterator, Sequence from contextlib import AbstractAsyncContextManager, AbstractContextManager, ExitStack from types import TracebackType from typing import Any, Optional, Union from langchain_core.runnables import RunnableConfig from ...
🌐 github.com
July 11, 2025
LangChain
langchain-ai.github.io › langgraphjs › how-tos › configuration
How to add runtime configuration to your graph
In LangGraph, configuration and other "out-of-band" communication is done via the RunnableConfig, which is always the second positional arg when invoking your application.
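The pattern the how-to describes can be sketched in Python without pulling in LangGraph itself: the node takes the state first and the config second, and reads caller-supplied values out of config["configurable"]. A plain dict stands in for RunnableConfig (which is itself a TypedDict), and the user_id key is a made-up illustration, not a LangGraph field; in a real graph, LangGraph supplies the config object that was passed to invoke.

```python
from typing import TypedDict


class State(TypedDict):
    messages: list


# A plain dict stands in for RunnableConfig, which is itself a TypedDict
# with a "configurable" key for user-supplied values.
def greeting_node(state: State, config: dict) -> dict:
    # "user_id" is a hypothetical key the caller put under "configurable".
    user_id = config.get("configurable", {}).get("user_id", "anonymous")
    return {"messages": state["messages"] + [f"hello, {user_id}"]}


result = greeting_node(
    {"messages": []},
    {"configurable": {"user_id": "alice"}},
)
```

Calling the node directly like this is only for illustration; the point is that anything placed under "configurable" at invoke time is visible to every node through that second argument.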
GitHub
github.com › langchain-ai › langgraph › issues › 5698
Langgraph RunnableConfig is not getting passed to callable · Issue #5698 · langchain-ai/langgraph
July 29, 2025 -

import operator
from typing import Optional
from langgraph.graph import StateGraph, END, START
from langchain_core.runnables.config import RunnableConfig
from typing import TypedDict, List, Annotated
from langgraph.graph.message import add_messages

class State(TypedDict):
    notes: Annotated[List[str], operator.add]

class A_Node:
    def __init__(self, name: str):
        self.name = name

    async def __call__(self, state: State, config: Optional[RunnableConfig] = None, **kwargs):
        print(f"A_Node: {self.name} called, state: {state}, config: {config}")
        return {"notes": ["node_a"]}

class B_Node:
    def __init__(self, n
Author   Boomi(boomi-moodys)
Reddit
reddit.com › r/langchain › passing config to langgraph tool discrepancies
r/LangChain on Reddit: Passing Config to LangGraph Tool Discrepancies
August 15, 2024 -

[Solved]: the config parameter in a tool must be annotated as exactly RunnableConfig. Not RunnableConfig | None, not Optional[RunnableConfig]. Maybe the langchain team can modify their _get_runnable_config_param function to do additional checks for these forms, for flexibility (reasoning in comments)

Issue

All my tools are created using BaseTool.

LangGraph documentation says I can invoke the graph with a configurable and it will be accessible through the config parameter of the tool's _run method. However, the value seems to be None.

I created an additional tool using the @tool decorator and added it to the tools array for the graph, and noticed that the config was actually accessible there, with the specified values.

I even log the config in the different node runnables throughout the graph and see that it is present. So not sure why it isn't showing up in the _run for the base tool.

On a phone, but it was something like this:

@tool
def my_tool(param1: int, config: RunnableConfig):
    print("Config is", config, type(config))

# I get "Config is" {...really large dict...}, <class 'dict'>

tools = [my_tool]

class MyTool(BaseTool):
    ...
    def _run(
        self,
        param1: int,
        param2: str,
        config: RunnableConfig,
    ):
        print("Config is", config, type(config))

# I get "Config is" None, <class 'NoneType'>
tools = [MyTool()]  # no worries, it is properly instantiated and sent to the graph.

I use langgraph 0.1.17 (I forgot the correct number, but it's the last one before 0.2 due to breaking changes) but I doubt it's that, since BaseTool is from langchain_community, of which I have the latest version.

Edit: I call my graph using graph.astream_events. Not sure if that causes the difference, but including that knowledge here regardless. My BaseTool does not have a matching _arun method to go with all the async functions though, but I'm not sure that is why it does not show the config.

Top answer
1 of 2:

Same here, commenting to get notifications

2 of 2:
Did additional digging into the actual langchain_core source code itself to trace the potential issue.

langchain_core/tools.py:

    def _get_runnable_config_param(func: Callable) -> Optional[str]:
        type_hints = _get_type_hints(func)
        print("\n\n\n----------------------------In _get_runnable_config_param: ", type_hints, type(type_hints))
        if not type_hints:
            return None
        for name, type_ in type_hints.items():
            print(name, type_)
            if type_ is RunnableConfig:
                return name
        return None

Console:

    ----------------------------In _get_runnable_config_param: {'query': , 'args': typing.Any, 'run_manager': typing.Optional[langchain_core.callbacks.manager.CallbackManagerForToolRun], 'config': typing.Optional[langchain_core.runnables.config.RunnableConfig], 'config2': langchain_core.runnables.config.RunnableConfig | None, 'kwargs': typing.Any, 'return': typing.Union[str, typing.Sequence[typing.Dict[str, typing.Any]], sqlalchemy.engine.result.Result]}
    query
    args typing.Any
    run_manager typing.Optional[langchain_core.callbacks.manager.CallbackManagerForToolRun]
    config typing.Optional[langchain_core.runnables.config.RunnableConfig]
    config2 langchain_core.runnables.config.RunnableConfig | None
    kwargs typing.Any
    return typing.Union[str, typing.Sequence[typing.Dict[str, typing.Any]], sqlalchemy.engine.result.Result]
    config_param is None

That made me check my code signature:

    def _run(
        self,
        query: str,
        *args: Any,
        run_manager: Optional[CallbackManagerForToolRun] = None,
        config: Optional[RunnableConfig] = None,
        config2: RunnableConfig | None = None,  # (added this one for further testing)
        **kwargs: Any,
    ) ->

It seems like the expected parameter needs to be annotated RunnableConfig only! Not Optional[RunnableConfig], not RunnableConfig | None. Just RunnableConfig alone. I mean, technically that makes sense.

That way the langchain engine can all but guarantee that the config will be passed:

    if signature(func_to_check).parameters.get("run_manager"):
        tool_kwargs["run_manager"] = run_manager
    if config_param := _get_runnable_config_param(func_to_check):
        tool_kwargs[config_param] = config
    print("config_param is", config_param)
    coro = context.run(self._arun, *tool_args, **tool_kwargs)
    if accepts_context(asyncio.create_task):
        response = await asyncio.create_task(coro, context=context)  # type: ignore
    else:
        response = await coro

So maybe I should just specify the type as RunnableConfig alone, without trying to account for None? Or would it be more convenient for langchain's team to check not only if type_ is RunnableConfig, but also cases where the type is an Optional including RunnableConfig and None, as well as a | (pipe operator) union of RunnableConfig and None?

Reasoning: maybe I have one BaseTool definition that I use externally (e.g. unit tests), passing my own config values (dicts from a parsed json that aren't bound to RunnableConfig), but I also want that BaseTool to be used with the graph, so it should support a config that is either a RunnableConfig or another data type. The tool itself is in charge of determining what to do depending on the type of config; in this case, langchain should be flexible and pass the config if RunnableConfig is one of the expected types for that parameter.
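The flexibility this answer asks for can be sketched in plain Python using typing introspection. This is an illustration, not langchain_core's actual code: RunnableConfig below is a stand-in TypedDict, and get_runnable_config_param is a hypothetical helper that mirrors what the private _get_runnable_config_param does while also matching Optional[RunnableConfig] and RunnableConfig | None (the pipe syntax needs Python 3.10+).

```python
import types
from typing import Optional, TypedDict, Union, get_args, get_origin, get_type_hints


class RunnableConfig(TypedDict, total=False):
    """Stand-in for langchain_core.runnables.config.RunnableConfig (illustrative)."""
    configurable: dict


# Optional[X] normalizes to typing.Union, while the `X | None` syntax produces
# types.UnionType on Python 3.10+, so accept both forms.
_UNION_FORMS = {Union, getattr(types, "UnionType", Union)}


def get_runnable_config_param(func) -> Optional[str]:
    """Hypothetical, more flexible check: matches RunnableConfig,
    Optional[RunnableConfig], and RunnableConfig | None."""
    for name, hint in get_type_hints(func).items():
        if hint is RunnableConfig:
            return name  # exact annotation: the only case the original helper accepts
        if get_origin(hint) in _UNION_FORMS and RunnableConfig in get_args(hint):
            return name  # RunnableConfig wrapped in an Optional / pipe union
    return None


def strict(query: str, config: RunnableConfig) -> str:
    return query


def lenient(query: str, config: Optional[RunnableConfig] = None) -> str:
    return query


def no_config(query: str) -> str:
    return query
```

With this sketch, both strict and lenient report "config" as the injectable parameter, while no_config reports none, which is the behavior the poster is suggesting the library could adopt.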
Langchain
docs.langchain.com › oss › python › langgraph › graph-api
Graph API overview - Docs by LangChain
In LangGraph, nodes are Python functions (either synchronous or asynchronous) that accept the following arguments: ... config—A RunnableConfig object that contains configuration information like thread_id and tracing information like tags
LangChain Forum
forum.langchain.com › langsmith product help › observability & evals
How to view RunnableConfig state in LangGraph traces? - Observability & Evals - LangChain Forum
October 22, 2025 - I’m working with LangGraph’s prebuilt create_react_agent and using LangSmith for tracing. I invoke my workflow like this: result = await workflow.ainvoke(agent_input, config=config) As expected, I can view the followin…
GitHub
github.com › langchain-ai › langgraph › issues › 5448
Issue with from langchain_core.runnables import RunnableConfig · Issue #5448 · langchain-ai/langgraph
July 11, 2025 - The import "from langchain_core.runnables import RunnableConfig" fails on the latest version of langgraph, raising an import error with or without the langgraph-checkpoint lib installed.
Author   SHUBHAM GUVANTA GABHIYE(shubhamgajbhiye1994)
GitHub
github.com › langchain-ai › langgraph › issues › 5023
LangGraph `config.configurable` -> `context` API -- feedback wanted! · Issue #5023 · langchain-ai/langgraph
June 9, 2025 -

# Current approach
def my_node(state: StateSchema, config: RunnableConfig):
    user_id = config["configurable"]["user_id"]  # Deep nesting
    # ...

# New approach
class Runtime(Generic[ContextT]):
    context: ContextT
    config: LangGraphConfig  # Cleaner config with top-level properties

    @property
    def stream_writer(self) -> StreamWriter:
        """Access streaming utilities without complex injection into node signatures."""

def my_node(state: StateSchema, runtime: Runtime):
    user_id = runtime.context.user_id  # Clean, typed access
    thread_id = runtime.config.thread_id  # No more nesting under "configurable"
    # ...
Author   Sydney Runkle(sydney-runkle)
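The proposal in this issue can be illustrated with plain dataclasses to show the ergonomic difference between the two styles. Runtime, LangGraphConfig, and MyContext here are stand-ins sketching the idea discussed in the issue, not the classes langgraph actually ships.

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

ContextT = TypeVar("ContextT")


@dataclass
class LangGraphConfig:
    # Top-level properties instead of nesting under "configurable".
    thread_id: str


@dataclass
class Runtime(Generic[ContextT]):
    context: ContextT
    config: LangGraphConfig


@dataclass
class MyContext:
    user_id: str


# Current approach: deep nesting under config["configurable"].
def my_node_old(state: dict, config: dict) -> str:
    return config["configurable"]["user_id"]


# Proposed approach: typed, flat access via a runtime object.
def my_node_new(state: dict, runtime: Runtime) -> str:
    return runtime.context.user_id


runtime = Runtime(context=MyContext(user_id="u-42"), config=LangGraphConfig(thread_id="t-1"))
```

Both node styles return the same value; the difference is that the runtime version gets typed attribute access instead of chained dict lookups, which is the ergonomic win the issue is asking feedback on.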
LangChain
api.python.langchain.com › en › latest › runnables › langchain_core.runnables.config.RunnableConfig.html
langchain_core.runnables.config.RunnableConfig — 🦜🔗 LangChain 0.2.17
class langchain_core.runnables.config.RunnableConfig · Configuration for a Runnable. tags: List[str] · Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls. metadata: Dict[str, Any] · run_id ·
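Since the Python RunnableConfig is a TypedDict, a config carrying the tags and metadata fields described above can be written as a plain dict. The toy call log and filter below are illustrative only, not LangChain APIs; they just show the tag-based filtering use case the reference mentions.

```python
# RunnableConfig is a TypedDict in langchain_core, so a plain dict with the
# same keys is structurally equivalent; no LangChain import is needed here.
config = {
    "tags": ["production", "billing-chain"],
    "metadata": {"customer": "acme"},
}

# Toy call log: (run_name, tags) pairs standing in for traced runs.
calls = [
    ("invoice-summary", ["production", "billing-chain"]),
    ("debug-run", ["dev"]),
]


# Filter calls by tag, the use case the reference describes for `tags`.
def calls_with_tag(calls, tag):
    return [name for name, tags in calls if tag in tags]
```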
LangChain Forum
forum.langchain.com › langsmith product help › deployment
How to access langgraph_auth_user (RunnableConfig) inside a Middleware or Node? - Deployment - LangChain Forum
December 10, 2025 - Hi everyone, I am deploying an app using LangGraph Platform and have successfully implemented custom authentication following the official documentation. Current Situation: I can successfully access the authenticated user information inside my Tools using the runtime object.