Understand and compare Claude Instant 1.2 vs. Claude 3.5 Sonnet
Overview
Claude Instant 1.2 was released roughly 10 months before Claude 3.5 Sonnet.
| | Claude Instant 1.2 | Claude 3.5 Sonnet |
|---|---|---|
| Provider (the entity that provides this model) | Anthropic | Anthropic |
| Input Context Window (tokens supported by the input context window) | 100K tokens | 200K tokens |
| Maximum Output Tokens (tokens the model can generate in a single request) | Not specified | 4,096 tokens |
| Release Date (when the model was first released) | 2023-08-09 | 2024-06-20 |
| Knowledge Cutoff (limit on the knowledge base used by the model) | Early 2023 | April 2024 |
| Open Source | No | No |
| API Providers (not an exhaustive list) | Anthropic API, Amazon Bedrock | Anthropic API, Amazon Bedrock, Google Vertex AI |
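For context on how the limits above show up in practice: with the Anthropic Python SDK, the maximum output size is passed explicitly as `max_tokens`, while the input prompt simply has to fit within the context window. The snippet below is a minimal sketch, assuming the `anthropic` package is installed and `ANTHROPIC_API_KEY` is set; the model ID string is Anthropic's dated identifier for this Claude 3.5 Sonnet release, and note that Claude Instant 1.2 (now deprecated) was served through the older Text Completions endpoint rather than the Messages API shown here.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    # Dated identifier for the 2024-06-20 Claude 3.5 Sonnet release.
    model="claude-3-5-sonnet-20240620",
    # 4,096 matches the "Maximum Output Tokens" row above; the prompt itself
    # may use up to the 200K-token context window.
    max_tokens=4096,
    messages=[
        {"role": "user", "content": "In one sentence, what is a context window?"}
    ],
)
print(response.content[0].text)
```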
Pricing
Claude Instant 1.2 is roughly 3.75x cheaper than Claude 3.5 Sonnet for input tokens and roughly 6.25x cheaper for output tokens.
| | Claude Instant 1.2 | Claude 3.5 Sonnet |
|---|---|---|
| Input (cost of input data provided to the model) | $0.80 per million tokens | $3.00 per million tokens |
| Output (cost of output tokens generated by the model) | $2.40 per million tokens | $15.00 per million tokens |
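As a quick sanity check on those ratios, the sketch below recomputes per-request cost from the table's per-million-token prices; the dictionary keys and the 10K-input / 1K-output request size are illustrative assumptions, not values from the source.

```python
# Per-million-token prices (USD) from the pricing table above.
PRICES = {
    "claude-instant-1.2": {"input": 0.80, "output": 2.40},
    "claude-3-5-sonnet": {"input": 3.00, "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request with the given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Illustrative request: 10,000 input tokens, 1,000 output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
# claude-instant-1.2: $0.0104
# claude-3-5-sonnet: $0.0450

# The ratios quoted above fall out of the prices directly:
print(3.00 / 0.80, 15.00 / 2.40)  # 3.75 6.25
```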
Benchmarks
Compare relevant benchmarks between Claude Instant 1.2 and Claude 3.5 Sonnet.
| | Claude Instant 1.2 | Claude 3.5 Sonnet |
|---|---|---|
| MMLU (LLM knowledge acquisition in zero-shot and few-shot settings) | 73.4 (5-shot) | 90.4 (5-shot CoT) |
| MMMU (wide-ranging multi-discipline, multimodal benchmark) | Not available | 68.3 (0-shot CoT) |
| HellaSwag (challenging sentence-completion benchmark) | Not available | Not available |
| GSM8K (grade-school math problems) | Not available | Not available |
| HumanEval (functional correctness for synthesizing programs from docstrings) | Not available | Not available |
| MATH (math problems across 5 difficulty levels and 7 sub-disciplines) | Not available | 71.1 (0-shot) |