Rate Limits

To ensure fair usage and system stability, the Tokencraft API implements rate limiting on all endpoints.

Current Limits

100 requests per minute per API token

This limit applies to all endpoints collectively. Whether you’re calling one endpoint 100 times or spreading requests across multiple endpoints, the total cannot exceed 100 requests per minute.

How It Works

Rate limits are calculated using a sliding window:
  • Window: 60 seconds (1 minute)
  • Limit: 100 requests
  • Reset: Rolling/sliding window
Example timeline:
12:00:00 - Requests 1-50 sent (50 requests)
12:00:30 - Requests 51-100 sent (50 requests) - Limit reached
12:00:31 - Request 101 - ❌ 429 Rate Limit Exceeded
12:01:00 - Requests 1-50 from 12:00:00 leave the window
12:01:00 - New requests allowed (50 slots available)
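The timeline above can be replayed with a small client-side model of the sliding window. This is an illustrative sketch, not the server’s actual implementation:

```javascript
// Minimal sliding-window model: keeps a timestamp per request and
// drops timestamps older than the window before each check.
function makeWindowCounter(limit = 100, windowMs = 60000) {
  const timestamps = [];
  return function tryRequest(nowMs) {
    // Evict requests that have left the 60-second window
    while (timestamps.length && nowMs - timestamps[0] >= windowMs) {
      timestamps.shift();
    }
    if (timestamps.length >= limit) return false; // would be a 429
    timestamps.push(nowMs);
    return true;
  };
}

// Replay the timeline (times in ms after 12:00:00)
const tryRequest = makeWindowCounter();
for (let i = 0; i < 50; i++) tryRequest(0);      // requests 1-50
for (let i = 0; i < 50; i++) tryRequest(30000);  // requests 51-100
console.log(tryRequest(31000)); // false: request 101 is rejected
console.log(tryRequest(60000)); // true: the first 50 have expired
```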

Rate Limit Headers

Every API response includes rate limit information in the headers:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1633024800
  • X-RateLimit-Limit: Maximum requests allowed per window
  • X-RateLimit-Remaining: Requests remaining in the current window
  • X-RateLimit-Reset: Unix timestamp when the window resets

When Limit Is Exceeded

When you exceed the rate limit, you’ll receive a 429 Too Many Requests response:
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1633024800

{
  "error": "Rate limit exceeded. Please try again later."
}
What to do:
  1. Wait until the reset time
  2. Implement exponential backoff
  3. Reduce request frequency

Checking Rate Limits

JavaScript Example

async function makeApiRequest(url, options) {
  const response = await fetch(url, options);
  
  const limit = response.headers.get('X-RateLimit-Limit');
  const remaining = response.headers.get('X-RateLimit-Remaining');
  const reset = Number(response.headers.get('X-RateLimit-Reset'));
  
  console.log(`Rate Limit: ${remaining}/${limit}`);
  console.log(`Resets at: ${new Date(reset * 1000).toISOString()}`);
  
  if (response.status === 429) {
    const waitTime = (reset * 1000) - Date.now();
    console.log(`Rate limit exceeded. Wait ${waitTime}ms`);
    throw new Error('Rate limit exceeded');
  }
  
  return response.json();
}

Python Example

import requests
from datetime import datetime

def make_api_request(url, headers):
    response = requests.get(url, headers=headers)
    
    limit = response.headers.get('X-RateLimit-Limit')
    remaining = response.headers.get('X-RateLimit-Remaining')
    reset = int(response.headers.get('X-RateLimit-Reset', 0))
    
    print(f'Rate Limit: {remaining}/{limit}')
    print(f'Resets at: {datetime.fromtimestamp(reset).isoformat()}')
    
    if response.status_code == 429:
        wait_time = reset - int(datetime.now().timestamp())
        print(f'Rate limit exceeded. Wait {wait_time}s')
        raise Exception('Rate limit exceeded')
    
    response.raise_for_status()
    return response.json()

Handling Rate Limits

1. Exponential Backoff

Retry with increasing delays:
async function fetchWithBackoff(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      const response = await fetch(url, options);
      
      if (response.status !== 429) {
        return await response.json();
      }
      
      // Calculate wait time from the reset header (clamp at 0 in case
      // the reset time has already passed)
      const reset = Number(response.headers.get('X-RateLimit-Reset'));
      const waitTime = Math.max(0, reset * 1000 - Date.now());
      
      console.log(`Rate limited. Waiting ${waitTime}ms...`);
      await sleep(waitTime + 1000); // Add 1s buffer
      
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      
      // Exponential backoff: 2^i seconds
      const backoffTime = Math.pow(2, i) * 1000;
      await sleep(backoffTime);
    }
  }
  
  throw new Error('Max retries exceeded');
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

2. Request Queuing

Queue requests to respect rate limits:
class RateLimiter {
  constructor(maxRequests = 100, windowMs = 60000) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.requests = [];
  }

  async acquire() {
    const now = Date.now();
    
    // Remove expired requests
    this.requests = this.requests.filter(
      time => now - time < this.windowMs
    );

    // Check if we're at the limit
    if (this.requests.length >= this.maxRequests) {
      const oldestRequest = this.requests[0];
      const waitTime = this.windowMs - (now - oldestRequest);
      
      console.log(`Rate limit reached. Waiting ${waitTime}ms...`);
      await sleep(waitTime + 100); // Add small buffer
      
      return this.acquire(); // Recursive call
    }

    this.requests.push(now);
  }

  async execute(fn) {
    await this.acquire();
    return fn();
  }
}

// Usage
const limiter = new RateLimiter(100, 60000);

async function fetchWorkspaces() {
  return limiter.execute(async () => {
    const response = await fetch(url, options);
    return response.json();
  });
}

3. Request Batching

Combine multiple operations into single requests:
// ❌ Bad - 100 separate requests
for (const id of tokenIds) {
  await fetchToken(id);
}

// ✅ Good - 1 request to get all tokens
const allTokens = await fetchAllTokens(tokensetId, modeId);
const filteredTokens = allTokens.filter(t => tokenIds.includes(t.id));

4. Caching

Cache responses to reduce API calls:
class TokenCache {
  constructor(ttl = 300000) { // 5 minutes default
    this.cache = new Map();
    this.ttl = ttl;
  }

  get(key) {
    const item = this.cache.get(key);
    if (!item) return null;
    
    if (Date.now() - item.timestamp > this.ttl) {
      this.cache.delete(key);
      return null;
    }
    
    return item.data;
  }

  set(key, data) {
    this.cache.set(key, {
      data,
      timestamp: Date.now()
    });
  }
}

const cache = new TokenCache();

async function fetchWithCache(url, options) {
  // Check cache first
  const cached = cache.get(url);
  if (cached) {
    console.log('Cache hit');
    return cached;
  }
  
  // Fetch from API
  const response = await fetch(url, options);
  const data = await response.json();
  
  // Store in cache
  cache.set(url, data);
  
  return data;
}

Best Practices

1. Monitor Your Usage

Track rate limit headers in your application:
let rateLimitWarningLogged = false;

function checkRateLimit(headers) {
  const remaining = parseInt(headers.get('X-RateLimit-Remaining'));
  const limit = parseInt(headers.get('X-RateLimit-Limit'));
  
  const percentUsed = ((limit - remaining) / limit) * 100;
  
  if (percentUsed > 80 && !rateLimitWarningLogged) {
    console.warn(`Warning: ${percentUsed.toFixed(0)}% of rate limit used`);
    rateLimitWarningLogged = true;
  }
  
  if (percentUsed < 20) {
    rateLimitWarningLogged = false;
  }
}

2. Use Multiple Tokens

For high-volume applications, use separate tokens:
const tokens = [
  process.env.TOKENCRAFT_TOKEN_1,
  process.env.TOKENCRAFT_TOKEN_2,
  process.env.TOKENCRAFT_TOKEN_3,
];

let currentTokenIndex = 0;

function getNextToken() {
  const token = tokens[currentTokenIndex];
  currentTokenIndex = (currentTokenIndex + 1) % tokens.length;
  return token;
}
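A hedged usage sketch of the rotation above: each request picks the next token so every token’s per-minute budget is drained evenly. The token values and endpoint are placeholders; `tokens` and `getNextToken` are restated here so the snippet is self-contained:

```javascript
// Round-robin over a pool of tokens; each token has its own
// 100 req/min budget, so rotation multiplies effective throughput.
const tokens = ['token-a', 'token-b', 'token-c']; // e.g. from env vars

let currentTokenIndex = 0;

function getNextToken() {
  const token = tokens[currentTokenIndex];
  currentTokenIndex = (currentTokenIndex + 1) % tokens.length;
  return token;
}

async function fetchWithRotatedToken(url) {
  return fetch(url, {
    headers: { Authorization: `Bearer ${getNextToken()}` },
  });
}
```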

3. Optimize Request Patterns

// ❌ Bad - Many sequential requests
const workspace = await getWorkspace(id);
const tokensets = await getTokensets(id);
for (const ts of tokensets) {
  const tokens = await getTokens(ts.id);
}

// ✅ Good - Minimize requests
const [workspace, tokensets] = await Promise.all([
  getWorkspace(id),
  getTokensets(id)
]);

// Batch token requests
const allTokens = await Promise.all(
  tokensets.map(ts => getTokens(ts.id))
);

4. Handle Gracefully

Always handle rate limit errors:
try {
  const data = await fetchData();
} catch (error) {
  // Assumes fetchData attaches the HTTP status to the errors it throws
  if (error.status === 429) {
    // Show user-friendly message
    showNotification('Too many requests. Please wait a moment...');
    // Retry after delay
    setTimeout(() => retryFetch(), 60000);
  } else {
    throw error;
  }
}

Rate Limit Increases

Need higher limits? Contact us at [email protected] with:
  • Your use case
  • Expected request volume
  • API token ID
We review each request and can raise limits for legitimate high-volume use cases.

Monitoring

Track Your Usage

# Check current rate limit status
curl -I -H "Authorization: Bearer YOUR_TOKEN" \
  https://app.tokencraft.dev/api/v1/workspaces

# Look for headers:
# X-RateLimit-Limit: 100
# X-RateLimit-Remaining: 95
# X-RateLimit-Reset: 1633024800

Set Up Alerts

function alertIfNearLimit(response) {
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));

  if (remaining < 10) {
    // Send alert (notifyTeam is your own alerting hook)
    console.error('Rate limit nearly exhausted!');
    notifyTeam('Rate limit warning');
  }
}

Next Steps