Create chat completion
Chat Completions API
Generate AI chat completions using various models with support for multiple payment methods.
Payment Methods
1. Credit-Based Payments (Traditional)
Pre-fund your account and pay per request. Costs are deducted from your balance automatically.
- Simple Setup: Add funds to your account
- Instant Processing: No additional payment verification needed
- Predictable Billing: Pre-pay for usage
2. x402 Cryptocurrency Payments
Pay for requests in real-time using cryptocurrency without pre-funding accounts.
- Supported Assets: USDC
- Networks: Base
- Protocol: x402 standard for AI micropayments
- Benefits: No account funding, transparent pricing, crypto-native experience
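For example, a crypto-backed request might look like the sketch below. The endpoint, model, and messages fields match the curl example later on this page; the body field name x402_payment and the payload value are hypothetical placeholders for the encoded x402 payment parameter described in the parameter list.

// Sketch of a crypto-backed request. The field name "x402_payment" and the payload
// value are hypothetical placeholders for the encoded x402 payment payload.
const response = await fetch("https://api-beta.daydreams.systems/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "openai/gpt-4",
    messages: [{ role: "user", content: "Hello, how are you?" }],
    x402_payment: "<base64-encoded x402 payment payload>", // hypothetical field name
  }),
});

if (response.status === 402) {
  // 402 Payment Required: the x402 payment was missing or invalid.
  console.error(await response.json());
} else {
  const completion = await response.json();
  console.log(completion.choices[0].message.content);
}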
Cost Calculation
Costs are calculated based on:
- Input Tokens: Text you send to the model
- Output Tokens: Text generated by the model
- Model Pricing: Different models have different rates
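Putting this together, the sketch below shows how a per-request cost can be estimated from token usage. The per-million-token rates are made-up placeholders, not actual model pricing; only the calculation itself reflects the factors listed above.

// Sketch of the cost calculation: tokens consumed times the model's per-token rates.
// The rates used below are illustrative placeholders, not actual pricing.
interface ModelRates {
  inputPerMillionUsd: number;   // price per 1M input (prompt) tokens
  outputPerMillionUsd: number;  // price per 1M output (completion) tokens
}

function estimateCostUsd(promptTokens: number, completionTokens: number, rates: ModelRates): number {
  const inputCost = (promptTokens / 1_000_000) * rates.inputPerMillionUsd;
  const outputCost = (completionTokens / 1_000_000) * rates.outputPerMillionUsd;
  return inputCost + outputCost;
}

// Example: 12 prompt tokens and 19 completion tokens at placeholder rates of $1 per 1M tokens.
const cost = estimateCostUsd(12, 19, { inputPerMillionUsd: 1.0, outputPerMillionUsd: 1.0 });
console.log(cost.toFixed(6)); // "0.000031" with these placeholder rates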
Error Handling
The API handles various error scenarios:
- 400 Bad Request: Invalid request parameters
- 401 Unauthorized: Invalid or missing API key
- 402 Payment Required: Insufficient balance or invalid x402 payment
- 429 Rate Limited: Too many requests
- 500 Server Error: Internal processing errors
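A sketch of client-side handling for these statuses is shown below. It assumes the JSON error bodies from the examples at the end of this page; the helper name, retry count, and token handling are illustrative.

// Sketch of handling the documented error statuses. Assumes the JSON error bodies
// shown in the examples below; retries only the rate-limited case.
async function createCompletion(body: unknown, token: string): Promise<any> {
  for (let attempt = 0; attempt < 3; attempt++) {
    const res = await fetch("https://api-beta.daydreams.systems/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify(body),
    });

    if (res.ok) return res.json();

    const err = await res.json();
    switch (res.status) {
      case 402: // Insufficient balance or invalid x402 payment
        throw new Error(`Payment required: ${err.message}`);
      case 429: { // Rate limited: wait for retry_after seconds, then retry
        const waitSeconds = err.retry_after ?? 60;
        await new Promise((resolve) => setTimeout(resolve, waitSeconds * 1000));
        continue;
      }
      case 400: // Invalid request parameters
        throw new Error(`Bad request (${err.code}): ${err.message}`);
      default: // 500 and any other unexpected status
        throw new Error(`Server error (${res.status}): ${err.message}`);
    }
  }
  throw new Error("Rate limited: retries exhausted");
}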
Enter your JWT token in the format: Bearer <token>
In: header
Primary model identifier or alias (provider/name).
Minimum length: 1
Optional ordered list of fallback models to try if the primary one is unavailable.
Conversation history supplied to the model.
Maximum number of tokens to generate in the response.
Sampling temperature (0-2). Higher values increase creativity.
If true, responses are streamed via Server-Sent Events.
Strategy for selecting implementations when resolving aliases.
"cheapest" | "fastest" | "balanced"
Encoded x402 payment payload for crypto-backed requests.
Tools the model may invoke during the conversation.
Forces or disables tool usage.
Allow the model to request multiple tool calls simultaneously.
Nucleus sampling probability mass (0-1).
Penalize new tokens based on existing frequency (−2 to 2).
Penalize new tokens based on presence in text so far (−2 to 2).
Sequences where the model should stop generating further tokens.
Seed for deterministic sampling when supported by the provider.
Provider-specific structured response format hints.
Request token log probabilities when supported.
Number of top log probability tokens to include if logprobs is true.
Optional end-user identifier forwarded for moderation.
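To illustrate several of the optional parameters above, the sketch below sends a streaming request and reads the Server-Sent Events response. Only model, messages, max_tokens, and temperature appear in the curl example on this page; the other field names (stream, top_p, stop, seed) and the raw SSE handling assume an OpenAI-compatible request shape.

// Sketch of a streaming request using several optional parameters. Field names other
// than model/messages/max_tokens/temperature are assumed, not confirmed by this page.
const res = await fetch("https://api-beta.daydreams.systems/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.DAYDREAMS_API_KEY}`, // env var name is illustrative
  },
  body: JSON.stringify({
    model: "openai/gpt-4",
    messages: [{ role: "user", content: "Write a haiku about the sea." }],
    max_tokens: 100,
    temperature: 0.7,
    top_p: 0.9,     // nucleus sampling probability mass (assumed field name)
    stop: ["\n\n"], // stop sequences (assumed field name)
    seed: 42,       // deterministic sampling when supported (assumed field name)
    stream: true,   // responses arrive as Server-Sent Events (assumed field name)
  }),
});

// Read the SSE stream chunk by chunk and print the raw events.
const reader = res.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value, { stream: true }));
}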
Examples
Example request:

curl -X POST "https://api-beta.daydreams.systems/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -d '{
    "model": "openai/gpt-4",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ],
    "max_tokens": 150,
    "temperature": 0.7
  }'
Example response (200 OK):

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699896901,
  "model": "openai/gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 19,
    "total_tokens": 31,
    "cost_usd": 0.000031,
    "platform_fee_usd": 0.0000047,
    "payment_method": "credits"
  }
}
400 Bad Request (invalid model):

{
  "error": "Invalid model specified",
  "code": "INVALID_MODEL",
  "message": "The specified model is not available"
}

401 Unauthorized (invalid API key):

{
  "error": "Invalid API key",
  "code": "INVALID_API_KEY",
  "message": "The provided API key is not valid"
}

402 Payment Required (insufficient balance):

{
  "error": "Insufficient balance",
  "code": "INSUFFICIENT_BALANCE",
  "message": "Account balance is too low for this operation",
  "current_balance": 1.25,
  "required_amount": 5
}

429 Too Many Requests (rate limit exceeded):

{
  "error": "Rate limit exceeded",
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Too many requests. Please try again later.",
  "retry_after": 60
}

500 Internal Server Error:

{
  "error": "Internal server error",
  "code": "INTERNAL_SERVER_ERROR",
  "message": "An unexpected error occurred while processing your request"
}