Anthropic
docs.anthropic.com › en › api › complete
Create a Text Completion
Role names: The Text Completions API expects alternating \n\nHuman: and \n\nAssistant: turns, but the Messages API expects user and assistant roles. You may see documentation referring to either “human” or “user” turns. These refer to the same role, and will be “user” going forward. With Text Completions, the model’s generated text is returned in the completion values of the response: ... >>> response = anthropic.completions.create(...) >>> response.completion " Hi, I'm Claude" With Messages, the response is the content value, which is a list of content blocks:
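A minimal Python sketch of the legacy Text Completions call with the anthropic SDK, assuming an API key in the environment (the model name is a placeholder):

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Text Completions: the prompt is a single string of alternating Human/Assistant turns
response = client.completions.create(
    model="claude-2.1",  # placeholder; any Text Completions-capable model
    max_tokens_to_sample=256,
    prompt=f"{anthropic.HUMAN_PROMPT} Hello, world!{anthropic.AI_PROMPT}",
)
print(response.completion)  # generated text comes back as a plain string, e.g. " Hi, I'm Claude"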
AWS
docs.aws.amazon.com › amazon bedrock › user guide › amazon bedrock foundation model information › inference request parameters and response fields for foundation models › anthropic claude models › anthropic claude text completions api
Anthropic Claude Text Completions API - Amazon Bedrock
Use the Text Completion API for single-turn text generation from a user-supplied prompt. For example, you can use the Text Completion API to generate text for a blog post or to summarize text input from a user. For information about creating prompts for Anthropic Claude models, see Introduction to prompt design
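A hedged boto3 sketch of a single-turn Bedrock invocation with the Text Completions request body; the region and the anthropic.claude-v2 model ID are assumptions, adjust to what is enabled in your account:

import json
import boto3

# assumes AWS credentials are configured and the model is enabled in this region
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize the following text: ...\n\nAssistant:",
    "max_tokens_to_sample": 256,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # assumed Text Completions-capable model ID
    body=body,
)
print(json.loads(response["body"].read())["completion"])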
Anthropic
docs.anthropic.com › claude › reference › complete_post
Create a Text Completion - Claude API Reference
For Text Completions, this is always "completion". ... curl https://api.anthropic.com/v1/complete \ -H 'Content-Type: application/json' \ -H "X-Api-Key: $ANTHROPIC_API_KEY" \ -H "anthropic-version: 2023-06-01" \ -d '{ "max_tokens_to_sample": 256, "model": "claude-opus-4-5-20251101", "prompt": "\n\nHuman: Hello, world!\n\nAssistant:" }'
Anthropic
console.anthropic.com › docs › en › api › kotlin › completions
Completions - Claude API Reference
completions().create(CompletionCreateParams params, RequestOptions requestOptions = RequestOptions.none()): Completion
LiteLLM
docs.litellm.ai › supported models & providers › anthropic
Anthropic | liteLLM
LiteLLM supports MCP tool calling with Anthropic in the OpenAI Responses API format. ... import os from litellm import completion os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..." tools=[ { "type": "mcp", "server_label": "deepwiki", "server_url": "https://mcp.deepwiki.com/mcp", "require_approval": "never", }, ] response = completion( model="anthropic/claude-sonnet-4-20250514", messages=[{"role": "user", "content": "Who won the World Cup in 2022?"}], tools=tools )
Anthropic
console.anthropic.com › docs › en › api › typescript › completions › create
Create a Text Completion - Claude API Reference
Create a Text Completion · TypeScript · import Anthropic from '@anthropic-ai/sdk'; const client = new Anthropic({ apiKey: 'my-anthropic-api-key', }); const completion = await client.completions.create({ max_tokens_to_sample: 256, model: 'claude-opus-4-5-20251101', prompt: '\n\nHuman: Hello, world!\n\nAssistant:', }); console.log(completion.id); Response 200 ·
Anthropic
docs.anthropic.com › en › api › migrating-from-text-completions-to-messages
Migrating from Text Completions - Anthropic
The Text Completions API expects alternating \n\nHuman: and \n\nAssistant: turns, but the Messages API expects user and assistant roles. You may see documentation referring to either “human” or “user” turns. These refer to the same role, and will be “user” going forward. With Text Completions, the model’s generated text is returned in the completion values of the response: ... >>> response = anthropic.messages.create(...) >>> response.content [{"type": "text", "text": "Hi, I'm Claude"}]
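For contrast with the Text Completions sketch above, a minimal Python sketch of the equivalent Messages call (the model name is a placeholder):

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello, world!"}],
)
# content is a list of content blocks rather than a single string
print(response.content[0].text)  # e.g. "Hi, I'm Claude"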
AWS
docs.aws.amazon.com › amazon bedrock › user guide › amazon bedrock foundation model information › inference request parameters and response fields for foundation models › anthropic claude models › anthropic claude messages api
Anthropic Claude Messages API - Amazon Bedrock
in the Anthropic Claude documentation. If you have existing Text Completion prompts that you want to migrate to the messages API, see Migrating from Text Completions
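A hedged boto3 sketch of the same Bedrock invocation with the Messages request body; the anthropic_version value is the one Bedrock requires for this API, while the model ID and region here are assumptions:

import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello, world!"}],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
    body=body,
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])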
Anthropic
docs.anthropic.com › claude › reference › streaming
Streaming Text Completions - Claude - Anthropic
curl https://api.anthropic.com/v1/messages \ --header "x-api-key: $ANTHROPIC_API_KEY" \ --header "anthropic-version: 2023-06-01" \ --header "content-type: application/json" \ --data \ '{ "model": "claude-sonnet-4-5", "max_tokens": 20000, "stream": true, "thinking": { "type": "enabled", "budget_tokens": 16000 }, "messages": [ { "role": "user", "content": "What is 27 * 453?"
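The same streaming request through the Python SDK's streaming helper, as a sketch; with extended thinking enabled, thinking deltas arrive as separate events, and text_stream yields only the text output:

import anthropic

client = anthropic.Anthropic()

with client.messages.stream(
    model="claude-sonnet-4-5",
    max_tokens=20000,
    thinking={"type": "enabled", "budget_tokens": 16000},
    messages=[{"role": "user", "content": "What is 27 * 453?"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)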
DataCamp
datacamp.com › tutorial › claude-sonnet-api-anthropic
Claude Sonnet 3.5 API Tutorial: Getting Started With Anthropic's API | DataCamp
June 26, 2024 - To learn more about Claude Sonnet and how it compares to ChatGPT, check out the articles below: ... The Text Completions API provides basic text completion functionality, while the Messages API offers more advanced features, such as the ability to have multi-turn conversations, incorporate ...
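As the comparison notes, multi-turn conversation with the Messages API just means replaying prior turns in the messages list on each request, since the API itself is stateless; a minimal Python sketch (model name is a placeholder):

import anthropic

client = anthropic.Anthropic()

history = [{"role": "user", "content": "Give me a one-line summary of the Messages API."}]
first = client.messages.create(model="claude-sonnet-4-5", max_tokens=256, messages=history)

# append the assistant turn, then the follow-up user turn
history.append({"role": "assistant", "content": first.content[0].text})
history.append({"role": "user", "content": "Now compare it to the Text Completions API."})
second = client.messages.create(model="claude-sonnet-4-5", max_tokens=256, messages=history)
print(second.content[0].text)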
Decisions
documentation.decisions.com › step-library › docs › anthropic-chat-completion
Chat Completion
This step submits prompts to Anthropic's large language model (LLM) and returns the model's response. Users can either add prompts directly or pull them from the Flow. Additionally, users can select the specific model that will handle the prompt.
AWS
docs.aws.amazon.com › amazon bedrock › user guide › amazon bedrock foundation model information › inference request parameters and response fields for foundation models › anthropic claude models
Anthropic Claude models - Amazon Bedrock
In the inference call, fill the body field with a JSON object that conforms to the type of call you want to make: the Anthropic Claude Text Completions API or the Anthropic Claude Messages API.