# Understand and compare Gemini Ultra vs. Claude 3 Opus
## Overview

| | Gemini Ultra | Claude 3 Opus |
|---|---|---|
| **Provider** (the entity that provides this model) | Google | Anthropic |
| **Input Context Window** (the number of tokens supported by the input context window) | 32.8K tokens | 200K tokens |
| **Maximum Output Tokens** (the number of tokens the model can generate in a single request) | 8,192 tokens | 4,096 tokens |
| **Release Date** (when the model was first released) | Unknown | 2024-03-04 |
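To put the context-window figures to work, here is a minimal sketch of a pre-flight check that a prompt, plus room for the expected reply, fits within a model's window. The window sizes come from the table above; `count_tokens` is a hypothetical stand-in for each provider's real tokenizer, and whether output tokens share the input window's budget varies by provider.

```python
# Sketch: check whether a prompt fits a model's context window.
# Window sizes are from the table above; count_tokens is a hypothetical
# estimator, not either provider's actual tokenizer.

CONTEXT_WINDOWS = {
    "gemini-ultra": 32_768,    # 32.8K tokens
    "claude-3-opus": 200_000,  # 200K tokens
}

def count_tokens(text: str) -> int:
    """Rough assumption: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_window(model: str, prompt: str, max_output_tokens: int) -> bool:
    """Budget the reply against the window along with the prompt."""
    return count_tokens(prompt) + max_output_tokens <= CONTEXT_WINDOWS[model]

prompt = "Summarize the following report..." * 5_000  # ~40K tokens
print(fits_in_window("gemini-ultra", prompt, max_output_tokens=8_192))   # False
print(fits_in_window("claude-3-opus", prompt, max_output_tokens=4_096))  # True
```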
## Pricing

| | Gemini Ultra | Claude 3 Opus |
|---|---|---|
| **Input** (cost of input data provided to the model) | Pricing not available. | $15.00 per million tokens |
| **Output** (cost of output tokens generated by the model) | Pricing not available. | $75.00 per million tokens |
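Since output tokens cost five times as much as input tokens, per-request cost depends heavily on how long the replies are. Here is a minimal sketch of the arithmetic using the Claude 3 Opus rates from the table above; the token counts in the example are illustrative, not measured values.

```python
# Sketch: estimate the cost of a single Claude 3 Opus request from the
# per-million-token rates in the pricing table above.

INPUT_RATE_USD = 15.00 / 1_000_000   # $15.00 per million input tokens
OUTPUT_RATE_USD = 75.00 / 1_000_000  # $75.00 per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Total dollar cost of one request."""
    return input_tokens * INPUT_RATE_USD + output_tokens * OUTPUT_RATE_USD

# e.g. a 10K-token prompt with a 1K-token reply:
print(f"${request_cost(10_000, 1_000):.4f}")  # $0.2250
```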
## Benchmarks

Compare relevant benchmarks between Gemini Ultra and Claude 3 Opus.

| | Gemini Ultra | Claude 3 Opus |
|---|---|---|
| **MMLU** (evaluates LLM knowledge acquisition in zero-shot and few-shot settings) | 83.7 (5-shot) | 88.2 (5-shot CoT) |
| **MMMU** (a wide-ranging multi-discipline and multimodal benchmark) | 59.4 (0-shot pass@1) | 59.4 |
| **HellaSwag** (a challenging sentence-completion benchmark) | Benchmark not available. | 95.4 (10-shot) |
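To make the shot counts concrete, here is a minimal sketch of how an n-shot multiple-choice prompt in the style of MMLU is typically assembled: n solved examples precede the test question. The template and example items are illustrative assumptions; the actual evaluation harnesses (and Anthropic's chain-of-thought variant) differ in detail.

```python
# Sketch: build an n-shot multiple-choice prompt in the style of MMLU.
# Template and example items are illustrative, not either lab's harness.

EXAMPLES = [
    ("What is the capital of France?",
     ["Berlin", "Madrid", "Paris", "Rome"], "C"),
    # ...four more solved examples would follow in a true 5-shot prompt
]

def format_item(question, choices, answer=None):
    """Render one question; omit the answer for the test item."""
    letters = "ABCD"
    lines = [question]
    lines += [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    lines.append(f"Answer: {answer}" if answer else "Answer:")
    return "\n".join(lines)

def build_prompt(examples, test_question, test_choices):
    shots = [format_item(q, c, a) for q, c, a in examples]
    shots.append(format_item(test_question, test_choices))
    return "\n\n".join(shots)

print(build_prompt(EXAMPLES, "Which planet is known as the Red Planet?",
                   ["Venus", "Mars", "Jupiter", "Saturn"]))
```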
Gemini Ultra, developed by Google, features a context window of 32,768 tokens. The model has performed strongly on benchmarks such as MMMU, scoring 59.4 in a 0-shot pass@1 setting, and MMLU, scoring 83.7 in a 5-shot setting.
Claude 3 Opus, developed by Anthropic, features a context window of 200,000 tokens. The model costs $15.00 per million input tokens and $75.00 per million output tokens (equivalent to 1.5 and 7.5 cents per thousand tokens, respectively). It was released on March 4, 2024, and has achieved strong scores on benchmarks such as HellaSwag (95.4, 10-shot), MMLU (88.2, 5-shot CoT), and MMMU (59.4).