Understand and compare Mistral 7B Instruct vs. Mistral 8x7B Instruct
Overview
Mistral 7B Instruct was released roughly two and a half months before Mistral 8x7B Instruct.
| | Mistral 7B Instruct | Mistral 8x7B Instruct |
|---|---|---|
| Provider (the entity that provides this model) | Mistral | Mistral |
| Input Context Window (tokens supported by the input context window) | 32K tokens | 32K tokens |
| Maximum Output Tokens (tokens the model can generate in a single request) | 8,192 tokens | 4,096 tokens |
| Release Date (when the model was first released) | 2023-09-27 | 2023-12-11 |
| Knowledge Cutoff (the date the model's training data extends to) | Unknown | Unknown |
| Open Source | Yes | Yes |
| API Providers (providers that offer this model; not an exhaustive list) | | |
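When building a request, the prompt and the completion must together fit inside the 32K context window, and the completion is further limited by each model's per-request output cap. A minimal budgeting sketch under those assumptions (the helper and model names are illustrative, not an official API):

```python
# Token budgeting sketch. Numbers come from the table above; the 32K window is
# treated as 32,000 tokens here, and the dictionary keys are hypothetical labels.
CONTEXT_WINDOW = 32_000
MAX_OUTPUT = {"mistral-7b-instruct": 8_192, "mixtral-8x7b-instruct": 4_096}

def allowed_completion_tokens(model: str, prompt_tokens: int) -> int:
    """Largest completion size that keeps prompt + completion within the context window."""
    remaining = CONTEXT_WINDOW - prompt_tokens
    return max(0, min(MAX_OUTPUT[model], remaining))

print(allowed_completion_tokens("mixtral-8x7b-instruct", 30_000))  # 2000 (window-limited)
print(allowed_completion_tokens("mistral-7b-instruct", 10_000))    # 8192 (cap-limited)
```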
Pricing
Mistral 7B Instruct is roughly 2.8x cheaper than Mistral 8x7B Instruct for both input and output tokens ($0.25 vs. $0.70 per million tokens).
| | Mistral 7B Instruct | Mistral 8x7B Instruct |
|---|---|---|
| Input (cost of input data provided to the model) | $0.25 per million tokens | $0.70 per million tokens |
| Output (cost of output tokens generated by the model) | $0.25 per million tokens | $0.70 per million tokens |
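Because input and output tokens are priced identically for each model, the cost of a request is simply (input tokens + output tokens) multiplied by the per-million-token price, which is where the 2.8x figure comes from. A small sketch of that arithmetic with illustrative token counts (function and dictionary names are hypothetical):

```python
# Price per million tokens, taken from the pricing table above.
PRICE_PER_M = {"mistral-7b-instruct": 0.25, "mixtral-8x7b-instruct": 0.70}

def request_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from the listed per-million-token price."""
    return (input_tokens + output_tokens) * PRICE_PER_M[model] / 1_000_000

# Example: 5,000 input tokens and 1,000 output tokens.
cost_7b = request_cost_usd("mistral-7b-instruct", 5_000, 1_000)      # $0.0015
cost_8x7b = request_cost_usd("mixtral-8x7b-instruct", 5_000, 1_000)  # $0.0042
print(f"${cost_7b:.4f} vs ${cost_8x7b:.4f} -> {cost_8x7b / cost_7b:.1f}x")  # 2.8x
```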
Benchmarks
Compare relevant benchmarks between Mistral 7B Instruct and Mistral 8x7B Instruct.
| | Mistral 7B Instruct | Mistral 8x7B Instruct |
|---|---|---|
| MMLU (LLM knowledge acquisition in zero-shot and few-shot settings) | 60.1 (5-shot) | 70.6 (5-shot) |
| MMMU (wide-ranging multi-discipline, multimodal benchmark) | Not available | Not available |
| HellaSwag (challenging sentence-completion benchmark) | Not available | Not available |
| GSM8K (grade-school math problems) | Not available | Not available |
| HumanEval (functional correctness for synthesizing programs from docstrings) | Not available | Not available |
| MATH (math problems across 5 difficulty levels and 7 sub-disciplines) | Not available | Not available |
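The MMLU scores above are 5-shot: each multiple-choice test question is preceded by five solved example questions before the model answers. A minimal sketch of how such a prompt can be assembled (helper names and layout are illustrative, not the exact harness either model was evaluated with):

```python
from typing import List, Tuple

def format_item(question: str, choices: List[str], answer: str = "") -> str:
    """Render one multiple-choice item in the usual MMLU layout."""
    letters = "ABCD"
    lines = [question] + [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    lines.append(f"Answer: {answer}".rstrip())  # answer left blank for the test item
    return "\n".join(lines)

def build_5shot_prompt(shots: List[Tuple[str, List[str], str]],
                       test_q: str, test_choices: List[str]) -> str:
    """Concatenate five solved examples followed by the unanswered test question."""
    parts = [format_item(q, c, a) for q, c, a in shots]
    parts.append(format_item(test_q, test_choices))
    return "\n\n".join(parts)
```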
Mistral 7B Instruct, developed by Mistral, features a 32K-token context window. The model is priced at $0.25 per million tokens for both input and output. It was released on September 27, 2023, and scored 60.1 on the MMLU benchmark in a 5-shot setting.
Mistral 8x7B Instruct, also developed by Mistral, features a 32K-token context window as well. The model costs $0.70 per million tokens for both input and output. It was released on December 11, 2023, and scored 70.6 on the MMLU benchmark in a 5-shot setting.