Understand and compare Claude 3 Opus vs. Gemini 1.5 Pro
Overview
Claude 3 Opus was released 18 days after Gemini 1.5 Pro.
| | Claude 3 Opus | Gemini 1.5 Pro |
|---|---|---|
| Provider | Anthropic | Google |
| Input Context Window | 200K tokens | 1M tokens |
| Maximum Output Tokens (per request) | 4,096 tokens | 8,192 tokens |
| Release Date | 2024-03-04 | 2024-02-15 |
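As a rough illustration of what the context window figures mean in practice, the Python sketch below (names hypothetical, limits hard-coded from the table above) checks whether a prompt of a given token count fits each model's input window. Token counts are assumed to come from each provider's own tokenizer.

```python
# Minimal sketch, not an official API: input context window limits are
# taken from the comparison table above.
CONTEXT_WINDOWS = {
    "Claude 3 Opus": 200_000,
    "Gemini 1.5 Pro": 1_000_000,
}

def fits_context(model: str, prompt_tokens: int) -> bool:
    """True if the prompt fits within the model's input context window."""
    return prompt_tokens <= CONTEXT_WINDOWS[model]

# Example: a 350K-token corpus fits Gemini 1.5 Pro but not Claude 3 Opus.
for model in CONTEXT_WINDOWS:
    print(f"{model}: {fits_context(model, 350_000)}")
```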
Pricing
Claude 3 Opus is roughly 2.1x more expensive than Gemini 1.5 Pro for input tokens and roughly 3.6x more expensive for output tokens.
| | Claude 3 Opus | Gemini 1.5 Pro |
|---|---|---|
| Input (per million tokens) | $15.00 | $7.00 |
| Output (per million tokens) | $75.00 | $21.00 |
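To make those ratios concrete, here is a minimal Python sketch (prices hard-coded from the table above; current pricing and actual billing may differ) that estimates the cost of a single request for each model:

```python
# Hypothetical cost estimator using the per-million-token USD prices
# listed in the pricing table above.
PRICES = {
    "Claude 3 Opus": {"input": 15.00, "output": 75.00},
    "Gemini 1.5 Pro": {"input": 7.00, "output": 21.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10K input tokens and 1K output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
# Claude 3 Opus: $0.2250
# Gemini 1.5 Pro: $0.0910
```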
Benchmarks
Compare relevant benchmarks between Claude 3 Opus and Gemini 1.5 Pro.
| | Claude 3 Opus | Gemini 1.5 Pro |
|---|---|---|
| MMLU (knowledge acquisition in zero-shot and few-shot settings) | 88.2 (5-shot CoT) | 81.9 (5-shot) |
| MMMU (wide-ranging multi-discipline, multimodal benchmark) | 59.4 | 58.5 (0-shot) |
| HellaSwag (challenging sentence completion benchmark) | 95.4 (10-shot) | Not available |
Claude 3 Opus, developed by Anthropic, offers a context window of 200,000 tokens. It is priced at $15.00 per million input tokens and $75.00 per million output tokens (equivalently, 1.5 and 7.5 cents per thousand). Released on March 4, 2024, it scores 95.4 on HellaSwag (10-shot), 88.2 on MMLU (5-shot CoT), and 59.4 on MMMU.
Gemini 1.5 Pro, developed by Google, offers a much larger context window of 1,000,000 tokens. It is priced at $7.00 per million input tokens and $21.00 per million output tokens (equivalently, 0.7 and 2.1 cents per thousand). Released on February 15, 2024, it scores 81.9 on MMLU (5-shot) and 58.5 on MMMU (0-shot).