Understand and compare GPT-4 Turbo 0125 vs. Claude 3 Opus
Overview
GPT-4 Turbo 0125 was released about five weeks before Claude 3 Opus (January 25, 2024 vs. March 4, 2024).
| | GPT-4 Turbo 0125 | Claude 3 Opus |
|---|---|---|
| Provider (the entity that provides this model) | OpenAI | Anthropic |
| Input Context Window (tokens supported by the input context window) | 128K tokens | 200K tokens |
| Maximum Output Tokens (tokens the model can generate in a single request) | 4,096 tokens | 4,096 tokens |
| Release Date (when the model was first released) | 2024-01-25 | 2024-03-04 |
| Knowledge Cutoff (the date up to which the model's training data extends) | December 2023 | August 2023 |
| Open Source | No | No |
| API Providers (not an exhaustive list) | OpenAI API, Azure OpenAI Service | Anthropic API, Amazon Bedrock, Google Cloud Vertex AI |
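Since both models are closed-weight and accessed over HTTP APIs, the practical difference shows up in how you call them. Below is a minimal sketch using the official `openai` and `anthropic` Python SDKs; the model IDs (`gpt-4-0125-preview` and `claude-3-opus-20240229`) are the published identifiers for these releases, while the prompt and `max_tokens` values are purely illustrative.

```python
# Minimal sketch: calling each model through its official Python SDK.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
from anthropic import Anthropic

prompt = "Summarize the trade-offs between a 128K and a 200K context window."

# GPT-4 Turbo 0125: 128K input context, 4,096 max output tokens.
openai_client = OpenAI()
gpt_response = openai_client.chat.completions.create(
    model="gpt-4-0125-preview",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=4096,  # cannot exceed the 4,096-token output cap
)
print(gpt_response.choices[0].message.content)

# Claude 3 Opus: 200K input context, 4,096 max output tokens.
anthropic_client = Anthropic()
claude_response = anthropic_client.messages.create(
    model="claude-3-opus-20240229",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=4096,  # required parameter; same 4,096-token output cap
)
print(claude_response.content[0].text)
```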
Pricing
GPT-4 Turbo 0125 is roughly 33.3% cheaper than Claude 3 Opus for input tokens and roughly 2.5× cheaper for output tokens.
| | GPT-4 Turbo 0125 | Claude 3 Opus |
|---|---|---|
| Input (cost of input data provided to the model) | $10.00 per million tokens | $15.00 per million tokens |
| Output (cost of output tokens generated by the model) | $30.00 per million tokens | $75.00 per million tokens |
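The percentage and multiplier quoted above fall straight out of these per-million-token rates. Here is a short sketch of the arithmetic, using a hypothetical request size (10,000 input tokens, 1,000 output tokens) purely for illustration:

```python
# Cost comparison from the published per-million-token rates.
PRICES = {  # (input $/1M tokens, output $/1M tokens)
    "gpt-4-0125-preview": (10.00, 30.00),
    "claude-3-opus-20240229": (15.00, 75.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed rates."""
    input_rate, output_rate = PRICES[model]
    return input_tokens / 1e6 * input_rate + output_tokens / 1e6 * output_rate

# Hypothetical request: 10,000 input tokens, 1,000 output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
# gpt-4-0125-preview: $0.1300
# claude-3-opus-20240229: $0.2250

# The headline figures: (15 - 10) / 15 ≈ 33.3% cheaper on input,
# and 75 / 30 = 2.5x cheaper on output.
print(f"input savings: {(15.00 - 10.00) / 15.00:.1%}")  # 33.3%
print(f"output ratio:  {75.00 / 30.00:.1f}x")           # 2.5x
```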
Benchmarks
Compare relevant benchmarks between GPT-4 Turbo 0125 and Claude 3 Opus.
| | GPT-4 Turbo 0125 | Claude 3 Opus |
|---|---|---|
| MMLU (knowledge acquisition in zero-shot and few-shot settings) | 85.4 (5-shot) | 88.2 (5-shot CoT) |
| MMMU (a wide-ranging multi-discipline, multimodal benchmark) | Not available | 59.4 |
| HellaSwag (a challenging sentence-completion benchmark) | Not available | 95.4 (10-shot) |
| GSM8K (grade-school math problems) | Not available | Not available |
| HumanEval (functional correctness for synthesizing programs from docstrings) | 86.6 (0-shot) | Not available |
| MATH (math problems across 5 difficulty levels and 7 sub-disciplines) | 64.5 (0-shot) | Not available |
GPT-4 Turbo 0125, developed by OpenAI, features a context window of 128,000 tokens. The model costs $10.00 per million tokens ($0.01 per thousand) for input and $30.00 per million tokens ($0.03 per thousand) for output. It was released on January 25, 2024.

Claude 3 Opus, developed by Anthropic, features a context window of 200,000 tokens. The model costs $15.00 per million tokens ($0.015 per thousand) for input and $75.00 per million tokens ($0.075 per thousand) for output. It was released on March 4, 2024, and has posted strong benchmark scores: 95.4 on HellaSwag (10-shot), 88.2 on MMLU (5-shot CoT), and 59.4 on MMMU.
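A practical consequence of the 128K vs. 200K gap is which documents fit in a single request. A minimal sketch, assuming the `tiktoken` library for token counting (accurate for the OpenAI model; only a rough proxy for Claude, whose tokenizer differs) and a hypothetical `report.txt` input file:

```python
# Rough check of whether a document fits in each model's context window.
# tiktoken is OpenAI's tokenizer; Claude uses a different tokenizer,
# so the Claude figure here is only a ballpark estimate.
import tiktoken

CONTEXT_WINDOWS = {
    "gpt-4-0125-preview": 128_000,
    "claude-3-opus-20240229": 200_000,
}

def fits(document: str) -> None:
    enc = tiktoken.encoding_for_model("gpt-4-0125-preview")
    n_tokens = len(enc.encode(document))
    for model, window in CONTEXT_WINDOWS.items():
        # Leave headroom for the prompt wrapper and the 4,096-token reply.
        usable = window - 4_096
        verdict = "fits" if n_tokens <= usable else "too long"
        print(f"{model}: ~{n_tokens} tokens vs {usable} usable -> {verdict}")

fits(open("report.txt").read())  # hypothetical input file
```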