Compare Gemini Pro to Gemini Ultra

Overview

| Attribute | Description | Gemini Pro | Gemini Ultra |
| --- | --- | --- | --- |
| Provider | The entity that provides this model. | Google | Google |
| Input Context Window | The number of tokens supported by the input context window. | 32.8K tokens | 32.8K tokens |
| Maximum Output Tokens | The number of tokens the model can generate in a single request. | 8,192 tokens | 8,192 tokens |
| Release Date | When the model was first released. | 2023-12-13 | Unknown |
| Knowledge Cutoff | Limit on the knowledge base used by the model. | Unknown | Unknown |
| Open Source | Whether the model weights are openly available. | No | No |
| API Providers | The providers that offer this model. (This is not an exhaustive list.) | Not listed | Not listed |
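Both models are served through Google's APIs rather than as downloadable weights. As a rough illustration of the limits above, here is a minimal sketch using the google-generativeai Python SDK: it counts prompt tokens against the 32.8K-token input context window and caps generation at the 8,192-token output limit. The `gemini-pro` model ID applies to Gemini Pro only; Gemini Ultra access was more restricted and is omitted here.

```python
import os

import google.generativeai as genai

# Configure the SDK; GOOGLE_API_KEY is assumed to be set in the environment.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# "gemini-pro" is the published model ID for Gemini Pro.
model = genai.GenerativeModel("gemini-pro")

prompt = "Summarize the differences between Gemini Pro and Gemini Ultra."

# Check the prompt against the ~32.8K-token input context window.
token_count = model.count_tokens(prompt).total_tokens
print(f"Prompt uses {token_count} of ~32,800 input tokens")

# Cap generation at the 8,192-token maximum output listed above.
response = model.generate_content(
    prompt,
    generation_config={"max_output_tokens": 8192},
)
print(response.text)
```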
Pricing

| Attribute | Description | Gemini Pro | Gemini Ultra |
| --- | --- | --- | --- |
| Input | Cost of input data provided to the model. | Pricing not available. | Pricing not available. |
| Output | Cost of output tokens generated by the model. | Pricing not available. | Pricing not available. |
Benchmarks

Compare relevant benchmarks between Gemini Pro and Gemini Ultra.

| Benchmark | Description | Gemini Pro | Gemini Ultra |
| --- | --- | --- | --- |
| MMLU | Evaluates LLM knowledge acquisition in zero-shot and few-shot settings. | 71.8 (5-shot) | 83.7 (5-shot) |
| MMMU | A wide-ranging multi-discipline, multimodal benchmark. | 47.9 (pass@1) | 59.4 (0-shot pass@1) |
| HellaSwag | A challenging sentence-completion benchmark. | 84.7 (10-shot) | Not available |
| GSM8K | Grade-school math word problems benchmark. | 77.9 (11-shot) | 88.9 (11-shot) |
| HumanEval | Measures functional correctness of programs synthesized from docstrings. | 67.7 (0-shot) | 74.4 (0-shot) |
| MATH | Math problems spanning 5 difficulty levels and 7 sub-disciplines. | 32.6 (4-shot Minerva prompt) | 53.2 (4-shot Minerva prompt) |
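The parenthetical settings indicate how each score was measured: n-shot is the number of worked examples included in the prompt, and pass@1 is the probability that a single sampled solution passes the benchmark's unit tests. For HumanEval-style scores, pass@k is conventionally computed with the unbiased estimator from Chen et al. (2021); below is a minimal sketch (the function name and the example figures are illustrative, not taken from the table above).

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated per problem
    c: samples that passed the unit tests
    k: budget of samples counted as a single attempt
    Returns 1 - C(n - c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer failing samples than the budget: a passing sample is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative usage: 200 samples drawn for one problem, 135 of them passing.
print(pass_at_k(n=200, c=135, k=1))  # 0.675, i.e. 67.5% pass@1
```

Per-problem estimates like this are averaged over the full problem set to produce the single benchmark score reported in the table.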
Measure & Improve LLM Product Performance.
Get Started