Understand and compare GPT-4 32K 0613 vs. Llama 3 70B Instruct
Overview
GPT-4 32K 0613 was released 10 months before Llama 3 70B Instruct.
| Specification | GPT-4 32K 0613 | Llama 3 70B Instruct |
| --- | --- | --- |
| Provider | OpenAI | Meta |
| Input Context Window | 32,768 tokens | 8,000 tokens |
| Maximum Output Tokens | Not specified | 2,048 tokens |
| Release Date | 2023-06-13 | 2024-04-18 |
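The context windows above bound prompt plus completion length, so it is worth checking a prompt's token count before sending it. Below is a minimal sketch of such a check, assuming tiktoken's `cl100k_base` encoding (the GPT-4 tokenizer; Llama 3 ships its own tokenizer, so its count here is only approximate). The `CONTEXT_WINDOWS` table and `fits_context` helper are illustrative names, not part of any library.

```python
# Minimal sketch: does a prompt (plus a reserved completion budget) fit a
# model's input context window? Token counts use tiktoken's cl100k_base
# encoding, which matches GPT-4; the Llama 3 figure is only approximate
# because Llama 3 uses a different tokenizer.
import tiktoken

# Window sizes as listed in the comparison table above.
CONTEXT_WINDOWS = {
    "gpt-4-32k-0613": 32_768,
    "llama-3-70b-instruct": 8_000,
}

def fits_context(prompt: str, model: str, reserved_output: int = 2_048) -> bool:
    """Return True if prompt tokens + reserved completion tokens fit the window."""
    enc = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(enc.encode(prompt))
    return n_tokens + reserved_output <= CONTEXT_WINDOWS[model]

print(fits_context("Summarize this document ...", "gpt-4-32k-0613"))
```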
Pricing
| Cost | GPT-4 32K 0613 | Llama 3 70B Instruct |
| --- | --- | --- |
| Input | $60.00 per million tokens | Pricing not available |
| Output | $120.00 per million tokens | Pricing not available |
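The per-million-token rates above make per-request cost a simple weighted sum. A worked sketch of that arithmetic, with `request_cost` as an illustrative helper:

```python
# Worked example of GPT-4 32K 0613 request pricing at the listed rates:
# $60.00 per million input tokens and $120.00 per million output tokens.
INPUT_PRICE_PER_M = 60.00
OUTPUT_PRICE_PER_M = 120.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-million rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A 30,000-token prompt with a 1,000-token completion:
# 30,000/1e6 * $60 + 1,000/1e6 * $120 = $1.80 + $0.12 = $1.92
print(f"${request_cost(30_000, 1_000):.2f}")
```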
Benchmarks
Compare relevant benchmarks between GPT-4 32K 0613 and Llama 3 70B Instruct.

| Benchmark | GPT-4 32K 0613 | Llama 3 70B Instruct |
| --- | --- | --- |
| MMLU (knowledge acquisition in zero-shot and few-shot settings) | Not available | 82.0 (5-shot) |
| MMMU (wide-ranging multi-discipline, multimodal benchmark) | Not available | Not available |
| HellaSwag (challenging sentence-completion benchmark) | Not available | Not available |
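The MMLU score above was measured 5-shot: the prompt carries five solved multiple-choice examples before the test question. A minimal sketch of how such a prompt is typically assembled; `build_5shot_prompt` and the exemplar format are illustrative assumptions, not the official evaluation harness:

```python
# Sketch of a 5-shot MMLU-style prompt: five solved multiple-choice
# examples (question, choices A-D, answer letter) followed by the test
# question with a trailing "Answer:" for the model to complete.
from typing import List, Tuple

Example = Tuple[str, List[str], str]  # (question, choices A-D, answer letter)

def build_5shot_prompt(shots: List[Example],
                       question: str,
                       choices: List[str]) -> str:
    parts = []
    for q, ch, ans in shots[:5]:
        opts = "\n".join(f"{letter}. {c}" for letter, c in zip("ABCD", ch))
        parts.append(f"{q}\n{opts}\nAnswer: {ans}")
    opts = "\n".join(f"{letter}. {c}" for letter, c in zip("ABCD", choices))
    parts.append(f"{question}\n{opts}\nAnswer:")
    return "\n\n".join(parts)
```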
GPT-4 32K 0613, developed by OpenAI, features a context window of 32,768 tokens. The model costs $60.00 per million tokens for input and $120.00 per million tokens for output. It was released on June 13, 2023.

Llama 3 70B Instruct, developed by Meta, features a context window of 8,000 tokens. The model was released on April 18, 2024, and achieved a score of 82.0 on the MMLU benchmark in a 5-shot setting.
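For reference, a minimal sketch of sending a request to GPT-4 32K 0613 with the OpenAI Python SDK (v1.x). The `gpt-4-32k-0613` model ID matches the release above but has since been deprecated by OpenAI, so treat this as illustrative only:

```python
# Sketch of a chat completion request against gpt-4-32k-0613 (deprecated
# model ID; shown for illustration) using the OpenAI Python SDK v1.x.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-32k-0613",
    messages=[{"role": "user", "content": "Summarize the attached report."}],
    max_tokens=1_000,  # completion budget; input + output must fit 32,768 tokens
)
print(response.choices[0].message.content)
```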