Understand and compare Llama 3 8B Instruct vs. Gemini Pro

Overview

Llama 3 8B Instruct was released about 4 months after Gemini Pro.
|  | Llama 3 8B Instruct | Gemini Pro |
|---|---|---|
| Provider (the entity that provides this model) | Meta | Google |
| Input Context Window (number of tokens supported by the input context window) | 8,000 tokens | 32.8K tokens |
| Maximum Output Tokens (number of tokens the model can generate in a single request) | 2,048 tokens | 8,192 tokens |
| Release Date (when the model was first released) | 2024-04-18 | 2023-12-13 |
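To make these limits concrete, the sketch below checks a request against the context window and output cap listed in the table. It is a minimal, hypothetical illustration: the `count_tokens` helper and the `LIMITS` mapping are assumptions for this example, not part of either provider's API, and real tokenizers differ per model.

```python
# Minimal pre-flight check against the documented limits above.
# LIMITS and count_tokens() are illustrative assumptions, not official APIs.

LIMITS = {
    "llama-3-8b-instruct": {"context": 8_000, "max_output": 2_048},
    "gemini-pro": {"context": 32_768, "max_output": 8_192},
}

def count_tokens(text: str) -> int:
    # Crude whitespace proxy for a real tokenizer (assumption for illustration).
    return len(text.split())

def fits(model: str, prompt: str, requested_output: int) -> bool:
    limits = LIMITS[model]
    prompt_tokens = count_tokens(prompt)
    # The completion must stay under the per-request output cap, and
    # prompt plus completion must fit inside the context window.
    return (
        requested_output <= limits["max_output"]
        and prompt_tokens + requested_output <= limits["context"]
    )

print(fits("llama-3-8b-instruct", "Summarize this document ...", 1_024))  # True
```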
Pricing

|  | Llama 3 8B Instruct | Gemini Pro |
|---|---|---|
| Input (cost of input data provided to the model) | Pricing not available. | Pricing not available. |
| Output (cost of output tokens generated by the model) | Pricing not available. | Pricing not available. |
Benchmarks

Compare relevant benchmarks between Llama 3 8B Instruct and Gemini Pro.

|  | Llama 3 8B Instruct | Gemini Pro |
|---|---|---|
| MMLU (evaluates LLM knowledge acquisition in zero-shot and few-shot settings) | 68.4 (5-shot) | 71.8 (5-shot) |
| MMMU (a wide-ranging multi-discipline and multimodal benchmark) | Benchmark not available. | 47.9 (pass@1) |
| HellaSwag (a challenging sentence-completion benchmark) | Benchmark not available. | Benchmark not available. |
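To make the "5-shot" MMLU figures above concrete, here is a minimal, hypothetical sketch of how few-shot multiple-choice accuracy is typically scored: worked examples are prepended to each question, the model is asked to answer with a letter, and accuracy is the fraction of correct picks. The `ask_model` callable is a placeholder assumption, not part of either model's API.

```python
# Hypothetical sketch of few-shot multiple-choice scoring in the style of MMLU.
# ask_model() is a stand-in for a call to whichever model is being evaluated.

from typing import Callable

FEW_SHOT_EXAMPLES = [
    # A real 5-shot run would list five worked examples; one is shown for brevity.
    ("What is 2 + 2?", ["3", "4", "5", "6"], "B"),
]

def format_item(question: str, choices: list[str]) -> str:
    letters = "ABCD"
    opts = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(choices))
    return f"Question: {question}\n{opts}"

def build_prompt(question: str, choices: list[str]) -> str:
    # Prepend the worked examples (with answers), then append the new question.
    lines = []
    for q, ch, ans in FEW_SHOT_EXAMPLES:
        lines.append(format_item(q, ch) + f"\nAnswer: {ans}\n")
    lines.append(format_item(question, choices) + "\nAnswer:")
    return "\n".join(lines)

def accuracy(items, ask_model: Callable[[str], str]) -> float:
    correct = 0
    for question, choices, gold in items:
        prediction = ask_model(build_prompt(question, choices)).strip()[:1].upper()
        correct += prediction == gold
    return correct / len(items)
```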
Gemini Pro, developed by Google, features a context window of 32,768 tokens. The model costs 0.0125 cents per thousand tokens for input and 0.0375 cents per thousand tokens for output. It was released on December 13, 2023, and scored 47.9 on the MMMU benchmark (pass@1) and 71.8 on the MMLU benchmark (5-shot).