/v1/chat/completions
Creates a model response for the given chat conversation using Mistral Large.
Content-Type: Set to `application/json`.
Authorization: Bearer token authentication with your Mistral API key.
`model`: ID of the model to use. Example: mistral-large-latest
`top_p`: Nucleus sampling parameter. Defaults to: 1. Example: 1
`stream`: Whether to stream back partial progress. Defaults to: false. (See the streaming sketch after the Python example below.)
`messages`: The list of messages in the chat.
`max_tokens`: The maximum number of tokens to generate. Example: 150
`temperature`: Controls randomness in the response. Defaults to: 0.7. Example: 0.7
The response contains the following fields:
`id`: A unique identifier for the chat completion.
`model`: The model used for the chat completion.
`object`: The object type, which is always "chat.completion".
`created`: The Unix timestamp (in seconds) of when the chat completion was created.
curl https://api.mistral.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_MISTRAL_API_KEY" \
  -d '{
    "model": "mistral-large-latest",
    "messages": [
      {"role": "user", "content": "What is the best French cheese?"}
    ],
    "temperature": 0.7,
    "max_tokens": 150
  }'
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = "YOUR_MISTRAL_API_KEY"
model = "mistral-large-latest"

# Initialize the client with your API key
client = MistralClient(api_key=api_key)

# The conversation so far: a single user message
messages = [
    ChatMessage(role="user", content="What is the best French cheese?")
]

# Send a non-streaming chat completion request
chat_response = client.chat(
    model=model,
    messages=messages,
    temperature=0.7,
    max_tokens=150,
)

# The generated reply is in the first choice
print(chat_response.choices[0].message.content)
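The `stream` and `top_p` parameters documented above are not exercised by this example. Below is a minimal streaming sketch, assuming the same pre-1.0 mistralai client and its chat_stream method, which sends the request with "stream": true and yields incremental chunks instead of a single response object.

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

client = MistralClient(api_key="YOUR_MISTRAL_API_KEY")

messages = [
    ChatMessage(role="user", content="What is the best French cheese?")
]

# chat_stream yields partial chunks as they arrive
for chunk in client.chat_stream(
    model="mistral-large-latest",
    messages=messages,
    temperature=0.7,
    top_p=1,          # nucleus sampling, default shown above
    max_tokens=150,
):
    # Each chunk carries an incremental delta; content can be None
    # on boundary chunks, so guard before printing.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()

Over raw HTTP, the equivalent is setting "stream": true in the request body, in which case the endpoint returns the completion as server-sent events.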
{
  "id": "cmpl-e5cc70bb28c444948073e77776eb30ef",
  "object": "chat.completion",
  "created": 1702256327,
  "model": "mistral-large-latest",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "There are many excellent French cheeses, each with its own unique characteristics. Some of the most renowned include Roquefort (a blue cheese), Camembert (soft and creamy), Brie (mild and buttery), Comté (hard aged cheese), and Chèvre (goat cheese). The 'best' really depends on personal taste preferences!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 65,
    "total_tokens": 80
  }
}
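For reference, here is a sketch of the same request over raw HTTP using the requests library, reading back the response fields documented above (`id`, `object`, `created`, `model`, plus `choices` and `usage` from the sample response). This mirrors the curl example and is an illustration, not part of the official SDK.

import requests

# Same request as the curl example, sent with the requests library
resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_MISTRAL_API_KEY",
    },
    json={
        "model": "mistral-large-latest",
        "messages": [{"role": "user", "content": "What is the best French cheese?"}],
        "temperature": 0.7,
        "max_tokens": 150,
    },
)
resp.raise_for_status()
data = resp.json()

# Response metadata fields documented above
print(data["id"], data["object"], data["created"], data["model"])

# Generated message and stop reason from the first choice
print(data["choices"][0]["message"]["content"])
print(data["choices"][0]["finish_reason"])

# Token accounting: total_tokens = prompt_tokens + completion_tokens
print(data["usage"]["total_tokens"])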