FAQ

General

How is DevLens different from using OpenAI or Anthropic directly?

DevLens is a model gateway. It provides a single OpenAI-compatible endpoint that routes to multiple providers. Benefits: one API key for all models, flat-rate pricing, and automatic failover across upstream channels.
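In practice this means one request shape for every provider. A minimal stdlib sketch of building such a request (the base URL and key below are placeholders; use the endpoint and key from your DevLens console):

```python
import json
import urllib.request

# Placeholder gateway endpoint -- substitute the URL from your console.
BASE_URL = "https://api.devlens.example/v1"

def chat_request(model, messages, api_key="sk-your-devlens-key"):
    """Build (not send) an OpenAI-style Chat Completions request.

    The same function works for every routed model -- only `model` changes.
    """
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = chat_request("claude-sonnet-4-5-20250929",
                   [{"role": "user", "content": "Hello"}])
```

Sending `req` with `urllib.request.urlopen` (or using the official OpenAI SDK with `base_url` pointed at the gateway) completes the picture.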

Which models are supported?

Claude, GPT, Gemini, DeepSeek, Qwen, Minimax, and others — 20+ models total. See the model reference for identifiers.

Is the API fully OpenAI-compatible?

Yes. Chat Completions, Completions, Embeddings, and Models endpoints are all supported. Streaming and function calling work as expected.
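For reference, the four endpoint families map to the standard OpenAI-style paths (shown here as a sketch; DevLens mirrors the OpenAI convention):

```python
# OpenAI-convention paths for the four supported endpoint families.
ENDPOINTS = {
    "chat completions": "/v1/chat/completions",
    "completions": "/v1/completions",
    "embeddings": "/v1/embeddings",
    "models": "/v1/models",
}
```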

Usage

How do I switch models?

Change the model parameter. Same key, same endpoint.

```python
client.chat.completions.create(model="gpt-5", ...)
client.chat.completions.create(model="claude-sonnet-4-5-20250929", ...)
```

Troubleshooting

401 Unauthorized

  1. Verify the key starts with sk-
  2. Confirm the key is enabled in Console → API Keys
  3. Check the Authorization header format: Bearer sk-xxx
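Checks 1 and 3 can be verified in code before a request ever leaves your machine. A small sketch (the helper name is illustrative, not part of any SDK):

```python
def auth_header(key: str) -> dict:
    """Build the Authorization header DevLens expects,
    rejecting keys that don't use the sk- prefix."""
    if not key.startswith("sk-"):
        raise ValueError("DevLens keys start with 'sk-'")
    return {"Authorization": f"Bearer {key}"}
```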

429 Too Many Requests

A 429 means you are either rate-limited or out of balance. Check your account balance in the console and reduce request frequency; contact an admin to raise your limits if needed.
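A common way to reduce request frequency is exponential backoff with jitter. A minimal sketch, assuming your HTTP client raises some exception on a 429 (the `RateLimitError` name and `send` callable here are placeholders, not part of the DevLens API):

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for the error your client raises on HTTP 429."""

def with_backoff(send, max_retries=5, base=1.0):
    """Retry `send()` after a 429, sleeping base * 2**attempt seconds
    plus random jitter between attempts."""
    for attempt in range(max_retries):
        try:
            return send()
        except RateLimitError:
            time.sleep(base * 2 ** attempt + random.random() * base)
    return send()  # final try: let a persistent 429 propagate
```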

Streaming not working

  1. Set "stream": true in the request body
  2. Ensure your client handles SSE (Server-Sent Events) correctly
  3. Verify no middleware (e.g., Nginx) is buffering the response
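If you are handling the stream yourself rather than through an SDK, step 2 amounts to parsing `data:` lines. A minimal sketch of the SSE framing an OpenAI-compatible stream sends:

```python
import json

def parse_sse(lines):
    """Yield the JSON payload of each SSE `data:` line,
    stopping at the `[DONE]` sentinel."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            return
        yield json.loads(payload)
```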

Billing

Where can I see my usage?

Console → Logs. Each request shows model, token count, and cost.

Are refunds available?

Credits are non-refundable, so top up in small increments rather than all at once.

Is there a cap on referral rewards?

No cap. Each successful referral grants $20 to both accounts.