POST /v1/messages
Send a structured list of input messages with text and/or image content, and the model will generate the next message in the conversation.

Request headers:
  Content-Type: Set to `application/json`.
  x-api-key: Your Anthropic API key.
  anthropic-version: The API version to use, e.g. "2023-06-01".

Request body:
  model: The model that will complete your prompt. Example: claude-3-opus-20240229
  top_p: Use nucleus sampling. Example: 0.9
  messages: Input messages for the conversation.
  max_tokens: The maximum number of tokens to generate before stopping. Example: 1024
  temperature: Amount of randomness injected into the response. Defaults to 1. Example: 0.7
  stop_sequences: Custom text sequences that will cause the model to stop generating.
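
The optional sampling and stopping parameters above can be combined with a multi-turn messages list. The sketch below uses the anthropic Python SDK (shown in full later in this section); the prompt and parameter values are illustrative only.

import anthropic

client = anthropic.Anthropic(api_key="YOUR_ANTHROPIC_API_KEY")

# Multi-turn conversation with optional sampling controls.
# The values for temperature, top_p, and stop_sequences are illustrative;
# in practice you would usually tune temperature or top_p, not both.
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    temperature=0.7,         # randomness; defaults to 1
    top_p=0.9,               # nucleus sampling
    stop_sequences=["###"],  # custom text that stops generation
    messages=[
        {"role": "user", "content": "Name three primary colors."},
        {"role": "assistant", "content": "Red, yellow, and blue."},
        {"role": "user", "content": "Which of those mixes with red to make purple?"},
    ],
)
print(message.content[0].text)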

Response fields:
  id: Unique object identifier.
  role: Conversational role, always "assistant".
  type: Object type, always "message".
  model: The model that handled the request.
  stop_reason: The reason that we stopped generating, e.g. "end_turn", "max_tokens", or "stop_sequence".
  stop_sequence: Which custom stop sequence was generated, if any.
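
A minimal sketch of reading these fields through the anthropic Python SDK, whose Message object mirrors the JSON response shown at the end of this section; the prompt is illustrative only.

import anthropic

client = anthropic.Anthropic(api_key="YOUR_ANTHROPIC_API_KEY")

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude"}],
)

# Each attribute corresponds to a response field documented above.
print(message.id)             # unique object identifier, e.g. "msg_..."
print(message.type)           # always "message"
print(message.role)           # always "assistant"
print(message.model)          # the model that handled the request
print(message.stop_reason)    # e.g. "end_turn", "max_tokens", or "stop_sequence"
print(message.stop_sequence)  # the matched custom stop sequence, or None
print(message.usage.input_tokens, message.usage.output_tokens)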

Example request (cURL):

curl https://api.anthropic.com/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-3-opus-20240229",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Hello, Claude"}
    ]
  }'

Example request (Python):

import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_ANTHROPIC_API_KEY",
)

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude"}
    ]
)
print(message.content[0].text)
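
The endpoint can also be called without the SDK from any HTTP client. The sketch below is illustrative only; it uses the third-party requests library to send the same headers and body as the cURL example.

import requests

# Same request as above, made directly against the HTTP endpoint.
response = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "content-type": "application/json",
        "x-api-key": "YOUR_ANTHROPIC_API_KEY",
        "anthropic-version": "2023-06-01",
    },
    json={
        "model": "claude-3-opus-20240229",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Hello, Claude"}],
    },
)
print(response.json()["content"][0]["text"])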

Example response:

{
  "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! It's nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?"
    }
  ],
  "model": "claude-3-opus-20240229",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 10,
    "output_tokens": 25
  }
}