Have an LLM provider you'd like benchmarked on InferenceLatency.com? Submit your provider details below and we'll benchmark its latency alongside OpenAI, Anthropic, Groq, and OpenRouter.
Note: We'll only store your provider name and endpoint URL. API keys are used once for testing and never stored.