Understand and compare o1 Mini 2024-09-12 vs. Gemini 1.5 Pro
Overview
o1 Mini 2024-09-12 was released 7 months after Gemini 1.5 Pro.
| | o1 Mini 2024-09-12 | Gemini 1.5 Pro |
|---|---|---|
| Provider (the entity that provides this model) | OpenAI | Google |
| Input Context Window (tokens supported by the input context window) | 128K tokens | 1M tokens |
| Maximum Output Tokens (tokens the model can generate in a single request) | 65.5K tokens | 8,192 tokens |
| Release Date (when the model was first released) | 2024-09-12 | 2024-02-15 |
| Knowledge Cutoff (limit on the knowledge base used by the model) | October 2023 | November 2023 |
| Open Source | | |
| API Providers (providers that offer this model; not an exhaustive list) | | |
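In practice, these limits surface as request parameters in each provider's SDK. Below is a minimal sketch, assuming the official `openai` and `google-generativeai` Python packages; the prompt text and API-key handling are placeholders, not values from this page.

```python
# Illustrative sketch: how the token limits above map to request parameters.
# Assumes the official `openai` and `google-generativeai` Python SDKs.
from openai import OpenAI
import google.generativeai as genai

# o1 Mini: 128K-token input window, up to 65,536 output tokens per request.
# o1-series models use `max_completion_tokens` rather than `max_tokens`.
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
o1_response = openai_client.chat.completions.create(
    model="o1-mini-2024-09-12",
    messages=[{"role": "user", "content": "Summarize this document."}],  # placeholder prompt
    max_completion_tokens=65536,
)

# Gemini 1.5 Pro: 1M-token input window, but output is capped at 8,192 tokens.
genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder key
gemini_model = genai.GenerativeModel("gemini-1.5-pro")
gemini_response = gemini_model.generate_content(
    "Summarize this document.",  # placeholder prompt
    generation_config=genai.GenerationConfig(max_output_tokens=8192),
)
```

Note the asymmetry the sketch makes concrete: Gemini 1.5 Pro accepts far larger inputs, while o1 Mini can produce far longer outputs in a single request.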
Pricing
o1 Mini 2024-09-12 is roughly 2.3x cheaper than Gemini 1.5 Pro for input tokens and roughly 42.9% cheaper for output tokens; the arithmetic is sketched after the table below.
| | o1 Mini 2024-09-12 | Gemini 1.5 Pro |
|---|---|---|
| Input (cost of input data provided to the model) | $3.00 per million tokens | $7.00 per million tokens |
| Output (cost of output tokens generated by the model) | $12.00 per million tokens | $21.00 per million tokens |
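The multipliers quoted above follow directly from the listed prices. A minimal sketch of the arithmetic (variable names are illustrative):

```python
# Worked arithmetic behind the pricing comparison, using the listed
# prices in USD per million tokens. Variable names are illustrative.
O1_MINI_INPUT, O1_MINI_OUTPUT = 3.00, 12.00
GEMINI_INPUT, GEMINI_OUTPUT = 7.00, 21.00

# Input: 7.00 / 3.00 ≈ 2.33, i.e. roughly 2.3x cheaper.
input_ratio = GEMINI_INPUT / O1_MINI_INPUT

# Output: 1 - 12.00 / 21.00 ≈ 0.429, i.e. roughly 42.9% cheaper.
output_saving = 1 - O1_MINI_OUTPUT / GEMINI_OUTPUT

print(f"Input: o1 Mini is {input_ratio:.1f}x cheaper")    # 2.3x
print(f"Output: o1 Mini is {output_saving:.1%} cheaper")  # 42.9%
```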
Benchmarks
Compare relevant benchmarks between o1 Mini 2024-09-12 and Gemini 1.5 Pro.
| | o1 Mini 2024-09-12 | Gemini 1.5 Pro |
|---|---|---|
| MMLU (evaluates LLM knowledge acquisition in zero-shot and few-shot settings) | Benchmark not available. | 81.9 (5-shot) |
| MMMU (a wide-ranging multi-discipline, multimodal benchmark) | Benchmark not available. | 58.5 (0-shot) |
| HellaSwag (a challenging sentence-completion benchmark) | Benchmark not available. | 93.3 (10-shot) |
| GSM8K (grade-school math problems benchmark) | Benchmark not available. | 90.8 (11-shot) |
| HumanEval (measures functional correctness for synthesizing programs from docstrings) | Benchmark not available. | 84.1 (0-shot) |
| MATH (math problems spanning 5 difficulty levels and 7 sub-disciplines) | Benchmark not available. | 67.7 (4-shot Minerva prompt) |