🌐
GitHub
github.com › anthropics › claude-code › issues › 760
[BUG] API Error: 400 Input is too long for requested model. · Issue #760 · anthropics/claude-code
April 10, 2025 - Environment Platform (select one): Anthropic API AWS Bedrock Google Vertex AI Other: Claude CLI version: 0.2.67 Operating System: macOS 15.3.2 Terminal: Warp Bug Description Getting this error after attempting to read a large file, ...
🌐
AWS re:Post
repost.aws › questions › QUshd0uzCZRAy1TbudkUKhww › claude-on-bedrock-giving-input-is-too-long-for-requested-model-for-10k-token-inputs-edit-broken-in-eu-central-1-working-in-other-regions
Claude on Bedrock giving "Input is too long for requested model" for ~10k token inputs (edit: broken in eu-central-1, working in other regions) | AWS re:Post
October 27, 2023 - Ideally that error message would be a bit better, like "Your input is 70,000 tokens and you've asked for 90,000 back; this exceeds the model's capability." ... So I re-tested this last night in eu-central-1 and managed to send about 70k tokens through and it worked, so I think someone might have fixed it. Let me know if someone else is able to double-check this. ... @DaveM Reproduction case in first post still fails for me in Frankfurt using Claude v2 via the Chat Playground in Bedrock UI.
🌐
Stack Overflow
stackoverflow.com › questions › 78836086 › why-do-i-get-an-error-saying-input-is-too-long-for-requested-model-with-my-cla
python - Why do I get an error saying 'Input is too long for requested model' with my Claude API call on AWS Bedrock? - Stack Overflow
response = client_bedrock.invoke_model(modelId=anthropic_modelid, contentType='application/json', accept='application/json', body=json.dumps(anthropic_payload)) where "promptMessages" is an array of messages, with the first message including a "<document> ....</document>" block that is a text representation of a number of different file formats uploaded to S3 · I tried a truncated string length and it worked; the full text fails.
🌐
GitHub
github.com › RooCodeInc › Roo-Code › issues › 772
Claude 3.5 Token Limit Error (400) Prevents Workflow Continuation · Issue #772 · RooCodeInc/Roo-Code
February 4, 2025 - This issue is also present in the original Cline project (cline/cline#1275). When an API request exceeds the maximum token limit (200k tokens), the system returns a 400 error as expected.
🌐
GitHub
github.com › anthropics › claude-code › issues › 558
API Error: 400 while using the Claude Code in my Terminal · Issue #558 · anthropics/claude-code
March 19, 2025 - Hi there I have been facing the same error "API Error: 400" for the last three days. ⎿ API Error: 400 I have taken every single step to solve this issue, but nothing has worked. My Anthro...
🌐
Claude
docs.claude.com › en › api › errors
Errors - Claude Docs
400 - invalid_request_error: There was an issue with the format or content of your request. We may also use this error type for other 4XX status codes not listed below. 401 - authentication_error: There’s an issue with your API key.
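These error responses share a consistent JSON body (the same {"type":"error","error":{...}} shape quoted in the reports elsewhere on this page). A minimal sketch of branching on that shape, assuming only this documented structure:

```python
import json

def classify_anthropic_error(body: str) -> str:
    """Return the error type from an Anthropic-style error body.

    Assumes the {"type": "error", "error": {"type": ..., "message": ...}}
    shape shown in the error responses quoted on this page.
    """
    payload = json.loads(body)
    if payload.get("type") != "error":
        return "not_an_error"
    return payload["error"].get("type", "unknown")

body = '{"type":"error","error":{"type":"invalid_request_error","message":"Input is too long for requested model."}}'
print(classify_anthropic_error(body))  # invalid_request_error
```

A client can then decide, for instance, to shrink the prompt on invalid_request_error rather than retrying blindly.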
🌐
GitHub
github.com › anthropics › claude-code › issues › 5220
Low Context Management in Claude Code CLI · Issue #5220 · anthropics/claude-code
August 6, 2025 - Please use pagination, filtering, or limit parameters to reduce the response size.\n at zN0 (file:///Users/user/.claude/local/node_modules/@anthropic-ai/claude-code/cli.js:1295:7872)\n at process.processTicksAndRejections (node:internal/pro...
🌐
Portkey
portkey.ai › error-library › input-length-error-10153
[Solved] Prompt is too long: the number of tokens exceeds the maximum allowed limit.
Token Limit Exceeded: The prompt contains more tokens than the maximum allowed by the Anthropic API. Each API has a specified limit on the number of tokens that can be processed in a single request, and exceeding this limit results in a 400 error.
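One way to avoid tripping the limit is to pre-trim oversized prompts before sending. A rough sketch, assuming a crude ~4-characters-per-token heuristic for English text (an approximation only; a real tokenizer is needed for exact counts):

```python
def truncate_to_budget(text: str, max_tokens: int, chars_per_token: float = 4.0) -> str:
    """Crudely trim text to an estimated token budget.

    The 4-chars-per-token ratio is a rough English-text heuristic,
    not an exact count from any tokenizer.
    """
    max_chars = int(max_tokens * chars_per_token)
    return text if len(text) <= max_chars else text[:max_chars]
```

For hard guarantees, count tokens with the provider's tokenizer or token-counting endpoint instead of estimating.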
🌐
Stack Overflow
stackoverflow.com › questions › 79683907 › error-in-the-name-of-claude-4-sonnet-while-using-in-cursor-as-custom-model
Error in the name of Claude 4 sonnet while using in cursor as custom model - Stack Overflow
Since you are getting 400, it means the provider your Cursor is pointing at is not recognising the model string which is claude-4-sonnet-20250522 in your case. Figure out the correct model string based on your provider and try again.
🌐
Stack Overflow
stackoverflow.com › questions › 79632861 › api-error-400-invalid-beta-flag-when-trying-to-use-claude-code-with-bedrock-u
"API Error: 400 invalid beta flag" when trying to use Claude Code with Bedrock using claude 3.5 haiku - Stack Overflow
Using 3.7 sonnet works fine $ ANTHROPIC_MODEL='us.anthropic.claude-3-7-sonnet-20250219-v1:0' claude -p hi Hello! How can I help you today? However, using 3.5 haiku results in a cryptic error $
🌐
X
x.com › gecko655 › status › 1945443019383333165
gecko655 on X: "Claude Code, I wish it would also somehow automatically retry on "API Error: 400 Input is too long for requested model."…" / X
Claude Code, I wish it would also somehow automatically retry on "API Error: 400 Input is too long for requested model."
🌐
n8n
community.n8n.io › help me build my workflow
Input is too long for requested model error - Help me Build my Workflow - n8n Community
August 14, 2025 - The model returned the following errors: Input is too long for requested model. I am getting this error when the MCP client node is trying to fetch some detail and the output is too long, thus Bedrock is unable to pro…
🌐
X
x.com › headinthebox › status › 1894235434202665104
Erik Meijer on X: "If you are using Claude Code, make sure to hit /clear early and often. Otherwise you will be burning a lot of tokens. API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"input length and `max_tokens` exceed context limit: 185054 + 20000 > 204798," / X
If you are using Claude Code, make sure to hit /clear early and often. Otherwise you will be burning a lot of tokens. API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"input length and `max_tokens` exceed context ...
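The error message makes the constraint explicit: input tokens plus `max_tokens` must fit within the context limit (185054 + 20000 > 204798 in the quote above). A small helper that clamps `max_tokens` accordingly, using the numbers from that error:

```python
def clamp_max_tokens(input_tokens: int, requested_max: int, context_limit: int) -> int:
    """Largest max_tokens that keeps input + output within the context limit."""
    return max(0, min(requested_max, context_limit - input_tokens))

# The failing request from the error above: 185054 + 20000 > 204798
print(clamp_max_tokens(185054, 20000, 204798))  # 19744
```

If the result is 0, no `max_tokens` value can save the request; the input itself has to be compacted or cleared.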
🌐
GitHub
github.com › anthropics › claude-code › issues › 476
[BUG] API Error: 400 'input length and `max_tokens` exceed context limit' occurs when close to compaction · Issue #476 · anthropics/claude-code
March 14, 2025 - API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"input length and max_tokens exceed context limit: 186433 + 20000 > 200000, decrease input ... Error not thrown and handled gracefully, either with meaningful error message with user next steps, or transparently. I think i would also expect that claude code also prompts at suitable times to compact.
Top answer (1 of 3, score 11)

As the error says, you must provide the ID of an inference profile rather than the model ID for this particular model. The easiest way is to use the ID of a system-defined inference profile for this model. You can find it by invoking this awscli command with the correct credentials defined in the environment (or set via standard flags):

aws bedrock list-inference-profiles

You will see this one in the JSON list:

{
  "inferenceProfileName": "US Anthropic Claude 3.5 Sonnet v2",
  "description": "Routes requests to Anthropic Claude 3.5 Sonnet v2 in us-west-2, us-east-1 and us-east-2.",
  "inferenceProfileArn": "arn:aws:bedrock:us-east-1:381492273274:inference-profile/us.anthropic.claude-3-5-sonnet-20241022-v2:0",
  "models": [
    {
      "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/anthropic.claude-3-5-sonnet-20241022-v2:0"
    },
    {
      "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20241022-v2:0"
    },
    {
      "modelArn": "arn:aws:bedrock:us-east-2::foundation-model/anthropic.claude-3-5-sonnet-20241022-v2:0"
    }
  ],
  "inferenceProfileId": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
  "status": "ACTIVE",
  "type": "SYSTEM_DEFINED"
}
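If you script the lookup instead of eyeballing the JSON, the same matching takes a few lines. A sketch that filters the list-inference-profiles output (the `find_profile_id` helper and the name fragment are illustrative, not part of any SDK):

```python
def find_profile_id(profiles, name_fragment):
    """Return the inferenceProfileId of the first profile whose name matches."""
    for p in profiles:
        if name_fragment in p.get("inferenceProfileName", ""):
            return p["inferenceProfileId"]
    return None

# Abbreviated entry from the list-inference-profiles output shown above.
profiles = [{
    "inferenceProfileName": "US Anthropic Claude 3.5 Sonnet v2",
    "inferenceProfileId": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
    "status": "ACTIVE",
    "type": "SYSTEM_DEFINED",
}]
print(find_profile_id(profiles, "Claude 3.5 Sonnet v2"))
# us.anthropic.claude-3-5-sonnet-20241022-v2:0
```

The same filtering works whether the list comes from the awscli command piped through a script or from an SDK call that returns the equivalent structure.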

Modify the invoke_model line in your code to specify the ID or ARN of the inference profile instead:

response = bedrock_runtime.invoke_model(
  body=body,
  modelId="us.anthropic.claude-3-5-sonnet-20241022-v2:0",
)
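For completeness, a sketch of the `body` that pairs with this call for the Anthropic Messages API on Bedrock: the `anthropic_version` string `bedrock-2023-05-31` and a `max_tokens` field are required, and keeping `max_tokens` modest leaves headroom for large inputs within the context limit (the message content here is a placeholder):

```python
import json

# Request body for the Anthropic Messages API on Bedrock.
# "anthropic_version" and "max_tokens" are required fields.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize the attached document."}
    ],
})
print(json.loads(body)["max_tokens"])  # 1024
```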
Answer 2 of 3 (score 0)

You can also pass the ARN of an inference profile as the modelId itself when invoking the model. Note that invoke_model does not take a prompt argument; the prompt belongs inside the JSON request body:

response = bedrock_client.invoke_model(
    modelId="arn:aws:bedrock:us-east-1::model/your-bedrock-model-arn",
    body=body,
)
🌐
Portkey
portkey.ai › error-library › input-length-error-10001
Portkey | Control Panel for Production AI
With 3 lines of code, Portkey empowers AI teams to observe, govern, and optimize your apps across the entire org.
🌐
GitHub
github.com › anthropics › claude-code › issues › 10199
[BUG] API Error 400 - Thinking Block Modification Error · Issue #10199 · anthropics/claude-code
October 23, 2025 - These blocks must remain as they were in the original response."},"request_id":"req_011CUQZ6ttKtJbU8JQRYyizo"} ### Error Repetition Pattern Line 14957: User asks question → API Error 400 Line 14961: User asks same question → API Error 400 (same request) Line 14965: User asks same question → API Error 400 (same request) Line 14969: User references the error itself → API Error 400 Line 14974: User asks to update memory files → API Error 400 ### Technical Details **API Endpoint**: Anthropic Claude API (claude.code integration) **Error Type**: `invalid_request_error` **Error Message Pattern**: messages.{N}.content.{M}: `thinking` or `redacted_thinking` blocks in the latest assistant message cannot be modified.
🌐
Reddit
reddit.com › r/roocode › api streaming failed > input is too long for requested model.
r/RooCode on Reddit: API Streaming Failed > Input is too long for requested model.
March 29, 2025 - I don't understand how to fix this issue. I understand the cause, but I don't know how to stop Roo from repeating the same prompt or how to modify the prompt it's trying to send. I've tried switching models but still get the same error. This is with Claude 3.7.