Understand and compare Gemini 1.5 Pro vs. o1 Preview 2024-09-12
Overview
Gemini 1.5 Pro was released 7 months before o1 Preview 2024-09-12.
| | Gemini 1.5 Pro | o1 Preview 2024-09-12 |
| --- | --- | --- |
| Provider (the entity that provides this model) | Google | OpenAI |
| Input Context Window (tokens accepted in a single prompt) | 1M tokens | 128K tokens |
| Maximum Output Tokens (tokens the model can generate in a single request) | 8,192 tokens | 32.8K tokens |
| Release Date | 2024-02-15 | 2024-09-12 |
| Knowledge Cutoff (latest date covered by the model's training data) | November 2023 | October 2023 |
| Open Source | No | No |
| API Providers (not an exhaustive list) | Google AI Studio, Vertex AI | OpenAI API, Azure OpenAI Service |
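The context window and output cap above bound how large a request each model can serve. As a rough illustration, the sketch below checks whether a prompt plus the requested completion fits within a model's published limits; the helper function and the token counts are hypothetical, and it treats the two limits independently (some providers count output tokens against the context window).

```python
# Illustrative sketch only: limits come from the table above;
# the helper and example token counts are hypothetical, not a real API.

MODEL_LIMITS = {
    # model: (input context window, maximum output tokens)
    "gemini-1.5-pro": (1_000_000, 8_192),
    "o1-preview-2024-09-12": (128_000, 32_768),  # 128K in, 32.8K out
}

def fits(model: str, prompt_tokens: int, max_output_tokens: int) -> bool:
    """Return True if the request fits the model's published limits."""
    context_window, output_cap = MODEL_LIMITS[model]
    return prompt_tokens <= context_window and max_output_tokens <= output_cap

# Example: a 200K-token prompt fits Gemini 1.5 Pro but not o1 Preview.
print(fits("gemini-1.5-pro", 200_000, 4_000))         # True
print(fits("o1-preview-2024-09-12", 200_000, 4_000))  # False
```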
Pricing
Gemini 1.5 Pro is roughly 2.1x cheaper than o1 Preview 2024-09-12 for input tokens and roughly 2.9x cheaper for output tokens; a worked cost example follows the table.
| | Gemini 1.5 Pro | o1 Preview 2024-09-12 |
| --- | --- | --- |
| Input (cost of input data provided to the model) | $7.00 per million tokens | $15.00 per million tokens |
| Output (cost of output tokens generated by the model) | $21.00 per million tokens | $60.00 per million tokens |
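To make the arithmetic behind those ratios concrete, here is a minimal sketch using the listed prices; the request size (10,000 input tokens, 1,000 output tokens) is invented purely for illustration.

```python
# Prices from the table above, in USD per million tokens.
PRICES = {
    "gemini-1.5-pro": {"input": 7.00, "output": 21.00},
    "o1-preview-2024-09-12": {"input": 15.00, "output": 60.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-million-token prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical request: 10,000 input tokens, 1,000 output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
# gemini-1.5-pro: $0.0910
# o1-preview-2024-09-12: $0.2100

# The ratios quoted above: 15 / 7 ≈ 2.1 (input), 60 / 21 ≈ 2.9 (output).
```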
Benchmarks
Compare relevant benchmarks between Gemini 1.5 Pro and o1 Preview 2024-09-12.
| Benchmark | Gemini 1.5 Pro | o1 Preview 2024-09-12 |
| --- | --- | --- |
| MMLU (LLM knowledge acquisition in zero-shot and few-shot settings) | 81.9 (5-shot) | Not available |
| MMMU (wide-ranging multi-discipline, multimodal benchmark) | 58.5 (0-shot) | Not available |
| HellaSwag (challenging sentence-completion benchmark) | 93.3 (10-shot) | Not available |
| GSM8K (grade-school math problems) | 90.8 (11-shot) | Not available |
| HumanEval (functional correctness of programs synthesized from docstrings) | 84.1 (0-shot) | Not available |
| MATH (math problems across 5 difficulty levels and 7 sub-disciplines) | 67.7 (4-shot Minerva prompt) | Not available |
Measure & Improve LLM Product Performance.
Get Started