GPT-4 Turbo 1106 vs. Claude 3 Haiku

Understand and compare GPT-4 Turbo 1106 and Claude 3 Haiku.
Overview
GPT-4 Turbo 1106 was released 4 months before Claude 3 Haiku.
| | GPT-4 Turbo 1106 | Claude 3 Haiku |
|---|---|---|
| **Provider**<br>The entity that provides this model. | OpenAI | Anthropic |
| **Input Context Window**<br>The number of tokens supported by the input context window. | 128K tokens | 200K tokens |
| **Maximum Output Tokens**<br>The number of tokens that can be generated by the model in a single request. | 4,096 tokens | 4,096 tokens |
| **Release Date**<br>When the model was first released. | 2023-11-06 | 2024-03-13 |
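Both models cap output at 4,096 tokens per request, enforced through the `max_tokens` parameter of each API. Below is a minimal sketch of one request to each model using the official OpenAI and Anthropic Python SDKs; the model ID strings (`gpt-4-1106-preview` and `claude-3-haiku-20240307`) are the commonly documented identifiers for these releases, but verify them against each provider's current model list.

```python
# Minimal sketch: one request to each model, keeping max_tokens within
# the 4,096-token output cap shared by both models.
from openai import OpenAI
from anthropic import Anthropic

prompt = "Summarize the plot of Hamlet in three sentences."

# GPT-4 Turbo 1106: 128K input context, up to 4,096 output tokens.
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
gpt_response = openai_client.chat.completions.create(
    model="gpt-4-1106-preview",  # assumed ID for GPT-4 Turbo 1106
    max_tokens=4096,  # must not exceed the model's output cap
    messages=[{"role": "user", "content": prompt}],
)
print(gpt_response.choices[0].message.content)

# Claude 3 Haiku: 200K input context, up to 4,096 output tokens.
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
claude_response = anthropic_client.messages.create(
    model="claude-3-haiku-20240307",  # assumed ID for Claude 3 Haiku
    max_tokens=4096,  # required parameter; capped at the model's limit
    messages=[{"role": "user", "content": prompt}],
)
print(claude_response.content[0].text)
```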
Pricing
GPT-4 Turbo 1106 is roughly 40x more expensive than Claude 3 Haiku for input tokens and roughly 24x more expensive for output tokens.
| | GPT-4 Turbo 1106 | Claude 3 Haiku |
|---|---|---|
| **Input**<br>Cost of input data provided to the model. | $10.00 per million tokens | $0.25 per million tokens |
| **Output**<br>Cost of output tokens generated by the model. | $30.00 per million tokens | $1.25 per million tokens |
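To make the price gap concrete, here is a minimal sketch that converts the per-million-token rates above into a per-request cost. The request size (10,000 input tokens, 1,000 output tokens) is purely illustrative.

```python
# Worked example: estimating per-request cost from the per-million-token
# prices in the table above.
PRICES = {  # USD per million tokens: (input, output)
    "GPT-4 Turbo 1106": (10.00, 30.00),
    "Claude 3 Haiku": (0.25, 1.25),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request for the given model."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A hypothetical request: 10,000 input tokens, 1,000 output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
# GPT-4 Turbo 1106: $0.1300
# Claude 3 Haiku: $0.0038
```

At this request shape the blended cost is about 35x higher for GPT-4 Turbo 1106; the exact multiple depends on the input/output mix of your workload.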
Benchmarks
Compare relevant benchmarks between GPT-4 Turbo 1106 and Claude 3 Haiku.
| | GPT-4 Turbo 1106 | Claude 3 Haiku |
|---|---|---|
| **MMLU**<br>Evaluating LLM knowledge acquisition in zero-shot and few-shot settings. | Benchmark not available. | 76.7 (5-shot CoT) |
| **MMMU**<br>A wide-ranging multi-discipline and multimodal benchmark. | Benchmark not available. | 50.2 |
| **HellaSwag**<br>A challenging sentence completion benchmark. | Benchmark not available. | 85.9 (10-shot) |
Claude 3 Haiku, developed by Anthropic, features a context window of 200,000 tokens. The model costs $0.25 per million tokens for input and $1.25 per million tokens for output. It was released on March 13, 2024. In benchmarks, it achieved a score of 50.2 on MMMU, 85.9 on HellaSwag (10-shot), and 76.7 on MMLU (5-shot CoT).