# Understand and compare GPT-4 vs. Gemini Ultra
## Overview

|  | GPT-4 | Gemini Ultra |
|---|---|---|
| **Provider**<br>The entity that provides this model. | OpenAI | Google |
| **Input Context Window**<br>The number of tokens supported by the input context window. | 8,192 tokens | 32,768 tokens |
| **Maximum Output Tokens**<br>The number of tokens that can be generated by the model in a single request. | 8,192 tokens | 8,192 tokens |
| **Release Date**<br>When the model was first released. | 2023-03-14 | Unknown |
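Both window sizes are counted in tokens, not characters. As a quick sanity check, here is a minimal sketch, assuming the tiktoken tokenizer library, of verifying that a prompt plus a reserved completion budget fits inside GPT-4's 8,192-token window (which is shared between input and output); `fits_in_context` is a hypothetical helper, not part of any SDK:

```python
import tiktoken

GPT4_CONTEXT_WINDOW = 8_192  # tokens, per the table above

def fits_in_context(prompt: str, output_budget: int = 512) -> bool:
    """Check that the prompt plus a reserved completion budget fits GPT-4's window."""
    enc = tiktoken.encoding_for_model("gpt-4")  # resolves to the cl100k_base encoding
    return len(enc.encode(prompt)) + output_budget <= GPT4_CONTEXT_WINDOW

print(fits_in_context("Summarize the differences between GPT-4 and Gemini Ultra."))
# True (the prompt is only ~12 tokens)
```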
## Pricing

|  | GPT-4 | Gemini Ultra |
|---|---|---|
| **Input**<br>Cost of input data provided to the model. | $30.00 per million tokens | Pricing not available. |
| **Output**<br>Cost of output tokens generated by the model. | $60.00 per million tokens | Pricing not available. |
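At these rates, per-request cost is simple arithmetic over token counts. A minimal sketch (`estimate_gpt4_cost` and the token counts are illustrative, not from an official SDK):

```python
GPT4_INPUT_COST_PER_M = 30.00   # USD per million input tokens, per the table above
GPT4_OUTPUT_COST_PER_M = 60.00  # USD per million output tokens

def estimate_gpt4_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single GPT-4 request."""
    return (input_tokens / 1_000_000) * GPT4_INPUT_COST_PER_M \
         + (output_tokens / 1_000_000) * GPT4_OUTPUT_COST_PER_M

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${estimate_gpt4_cost(2_000, 500):.4f}")  # $0.0900
```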
## Benchmarks

Compare relevant benchmarks between GPT-4 and Gemini Ultra.

|  | GPT-4 | Gemini Ultra |
|---|---|---|
| **MMLU**<br>Evaluating LLM knowledge acquisition in zero-shot and few-shot settings. | 86.4 (5-shot) | 83.7 (5-shot) |
| **MMMU**<br>A wide-ranging multi-discipline and multimodal benchmark. | 34.9 | 59.4 (0-shot pass@1) |
| **HellaSwag**<br>A challenging sentence completion benchmark. | 95.3 (10-shot) | Benchmark not available. |
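The shot counts above indicate how many solved examples are prepended to each test question: a 5-shot MMLU prompt includes five worked question/answer pairs before the one being scored, while 0-shot includes none. A minimal sketch of how such a few-shot multiple-choice prompt can be assembled (`build_few_shot_prompt` and the example data are illustrative, not the benchmarks' official harness):

```python
def build_few_shot_prompt(examples, question, choices):
    """Concatenate k solved examples, then the unsolved question (a k-shot prompt)."""
    parts = []
    for ex in examples:  # len(examples) == k
        parts.append(f"Question: {ex['question']}")
        for label, choice in zip("ABCD", ex["choices"]):
            parts.append(f"{label}. {choice}")
        parts.append(f"Answer: {ex['answer']}\n")
    parts.append(f"Question: {question}")
    for label, choice in zip("ABCD", choices):
        parts.append(f"{label}. {choice}")
    parts.append("Answer:")  # the model completes this line
    return "\n".join(parts)

# A 1-shot example with placeholder data:
demo = [{"question": "2 + 2 = ?", "choices": ["3", "4", "5", "6"], "answer": "B"}]
print(build_few_shot_prompt(demo, "3 + 3 = ?", ["5", "6", "7", "8"]))
```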
Gemini Ultra, developed by Google, features a large context window of 32,768 tokens. The model has excelled in benchmarks like MMMU, with a score of 59.4 in a 0-shot pass@1 setting, and MMLU, with a score of 83.7 in a 5-shot setting.