Side-by-side
text-embedding-3-large vs Llama 3.3 70B (Groq)
OpenAI vs Groq — head-to-head specs and pricing.
OpenAI
text-embedding-3-large
High-quality OpenAI embeddings.
Produces 3072-dimensional embeddings for retrieval, clustering, and classification. Best-in-class semantic quality.
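A typical retrieval flow scores documents against a query by cosine similarity over these embedding vectors. A minimal sketch, using tiny made-up 3-dim vectors in place of real 3072-dim embeddings (which would come from an embeddings API call):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for real embeddings; real vectors would be 3072 floats each.
query = [0.1, 0.9, 0.2]
docs = {
    "invoice": [0.1, 0.8, 0.3],
    "weather": [0.9, 0.1, 0.0],
}

# Rank documents by similarity to the query; highest score wins.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
```

Here `best` is `"invoice"`, the document whose vector points in nearly the same direction as the query's.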
Groq
Llama 3.3 70B (Groq)
Llama at Groq speed.
Meta's Llama 3.3 70B served on Groq's LPUs — hundreds of tokens/sec.
| Spec | text-embedding-3-large | Llama 3.3 70B (Groq) |
|---|---|---|
| Provider | OpenAI | Groq |
| Input cost / 1M tokens | $0.13 | $0.71 |
| Output cost / 1M tokens | — | $0.95 |
| Context window | 8,191 tokens | 128,000 tokens |
| Max output tokens | — | — |
| Streaming support | — | ✓ |
| Tool calling | — | ✓ |
| Vision input | — | — |
| JSON mode | — | — |
| Status | ACTIVE | ACTIVE |
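At the per-1M-token rates in the table, a request's cost is simple arithmetic. A sketch using the Groq rates above (the request sizes are made up for illustration):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.71, output_rate: float = 0.95) -> float:
    """Cost in USD, given per-1M-token rates from the table above."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion.
cost = request_cost(10_000, 2_000)  # 0.0071 + 0.0019 = $0.0090
```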
Use both via OneAPIKey — one key, one bill
OneAPIKey aggregates OpenAI, Groq, and 10 more providers behind a single API. Smart Routing automatically picks the best model per request — or you choose explicitly.
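The "choose explicitly" path amounts to mapping each task to a model ID behind the single key. A minimal sketch; the namespaced model IDs below are illustrative assumptions, not OneAPIKey's actual identifiers:

```python
# Illustrative routing table; real model IDs on OneAPIKey may differ.
MODELS = {
    "embed": "openai/text-embedding-3-large",
    "chat": "groq/llama-3.3-70b-versatile",
}

def pick_model(task: str) -> str:
    """Explicit routing: map a task type to a model served behind one API key."""
    try:
        return MODELS[task]
    except KeyError:
        raise ValueError(f"unknown task: {task!r}")

model = pick_model("chat")
```

A Smart Routing setup would replace this static lookup with a per-request decision made server-side; the explicit form shown here keeps model choice in your hands.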