by Zsolt Kerekes, editor - StorageSearch.com
July 9, 2008
I warned you first! - you can see updates at the end of this article.
One of the things I've noticed is that the published specs of flash SSDs change a lot - from the time products are first announced, then when they're being sampled, and later again when they are in volume production. And flash SSD performance in real applications can be considerably lower than predicted from published benchmarks due to common "halo" errors which occur when test suites designed for HDDs / RAM SSDs are used without understanding and changing the implicit assumptions built into the legacy test setups.
Sometimes the headline numbers get better, sometimes they get worse. There are many good reasons for this, because unlike memory devices, SSDs are complex systems in which software and controller hardware both play a part in shaping the characteristics of a device. Tweaks in the controller algorithms alone can have major effects on a product's headline characteristics.
If you're a systems integrator / oem who has designed flash SSDs into a
particular application - then it's unlikely that you chose your favored models
at random. It's much more likely that you carefully evaluated products against a
wish list of characteristics which include:-
- performance,
- environmental tolerance,
- power consumption and
- longevity.
These factors play as big a role as the obvious
ones:-
- capacity,
- form factor,
- host interface,
- price and
- security of supply.
But the flash SSD market is very volatile.
How can you be sure that the products which are going into your production systems are the same as (or similar enough to) what you tested?
It's possible that after you did all your qualification testing the original SSD oem made some "improvements" which they may not necessarily tell you about - because they make the product "better". But what if the
new SSD controller chip
does work faster - but puts more stress on your limited battery budget? Or what
if the SSD oem has switched suppliers of memory or power management chips and
the overall product fails to operate reliably over the full range of
temperature you need?
Worse still - maybe you can't get the
original product at all. To keep your production line going you have to stuff
slots with products that your distributor suggests from companies you've never
heard of before.
Many of these problems have been around in the
electronics industry in past decades. But in
2008,
2009 and maybe
2010, the unique
characteristics of the flash SSD market mean that the risks are a lot worse.
Many SSD oems haven't been in the market very long. But because they
make attractive products you can't afford to ignore them. Although some oems
have been in the industrial or military markets for years - and do test their
products and do inform you when those specs change - when their demand surges
and products go on allocation - you still face the risks of switching suppliers
to guarantee your own product's continuity.
And here's another thing to
worry about.
Can you be sure - for example - that the flash SSDs your buyer has bought at such a good price from an alternate source really are SLC - and not MLC or SLC/MLC hybrids? This is such a new market that you can't be sure that the
supplier's SSD product manager (who may have been in the flash SSD business for
less than 2 months) understands the intricate concerns you have - or what your
questions mean.
One solution to protect yourself - may be to do much
more sample testing of incoming product. Or if your volumes aren't high enough
to justify the capital expense - another option might be to ask your distributor
to do the testing for you.
The flash SSD market opens up tremendous
opportunities for new products and systems which leverage that technology. But
due to the diversity of products in the market and lack of industry standards -
it's got tremendous risks as well.
Paying proper attention to compliance testing and quality assurance will make the difference between market success and failure for many new SSD based products.
...Later:-
Sometimes the "specs" don't tell you anything meaningful about
the product's performance at all - particularly in the case of
write IOPS
tests for flash SSDs.
I've seen several published documents in which
the measurements of such parameters appear to have been done incorrectly.
Because
there isn't widespread market experience of flash SSDs - it's easy to fall into
the trap of running tests which were originally designed for
hard disks or
RAM SSDs and not
realise that some of the inbuilt assumptions or test parameters may be
inappropriate.
It makes little or no difference to a small block size random write test on an HDD or RAM SSD whether the media being written to already contains data.
The impact of fragmentation on HDD performance is well known. Flash SSDs don't suffer from a dropoff in performance due to fragmentation - but there can be a similar performance dropoff over time due to the lower availability of pre-erased blocks. So when benchmarking a new flash SSD (which is initially erased) it's important to take this factor into account.
Some high performance flash SSDs have a background process which manages and tries to maximize the availability of pre-erased blocks. How well that works determines the sustained random write IOPS figure in 24x7 enterprise applications.
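To make that mechanism concrete, here's a deliberately simplified toy model - not any vendor's firmware, and the pool size, interface rate and background erase rate are invented numbers - showing why burst write IOPS can look impressive while sustained write IOPS settles at whatever rate the background block-recycling process can sustain:

```python
# Toy model of a flash SSD's pre-erased block pool (illustration only -
# real controller firmware is far more sophisticated, and every number
# below is invented for the sketch).

def simulate_sustained_writes(seconds,
                              burst_write_iops=20_000,   # writes/s the interface can accept
                              pool_blocks=100_000,       # pre-erased blocks available at the start
                              bg_erase_per_sec=5_000):   # blocks/s the background process reclaims
    """Return the effective write IOPS for each second as the pre-erased pool drains."""
    results = []
    pool = pool_blocks
    for _ in range(seconds):
        pool += bg_erase_per_sec                  # background process tops up the pool
        achievable = min(burst_write_iops, pool)  # can't write faster than erased blocks exist
        pool -= achievable                        # each write consumes one pre-erased block
        results.append(achievable)
    return results

iops = simulate_sustained_writes(seconds=20)
print("first second:", iops[0], "IOPS")   # looks like the datasheet burst figure
print("last second :", iops[-1], "IOPS")  # settles near the background erase rate
```

Run for long enough, the model settles at whatever the background erase process can deliver - which is exactly the sustained figure that matters in 24x7 applications.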
That's why when testing a flash SSD for 24x7 enterprise applications you must make sure that when the benchmark window begins - the disk is already full. Otherwise you get a "halo" effect (similar to that caused by cache hits in traditional server benchmark tests) in which the pool of pre-erased flash storage makes the SSD "appear" to operate much faster than it will in a real application after it has been running for days, weeks or months.
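In practice that means preconditioning. The sketch below - in Python, at queue depth 1, against a test file rather than the raw device, with placeholder path, capacity and window length - is only an illustration of the test flow, not a production benchmark, but it shows the step that removes the "halo": fill the target completely before the measurement window opens.

```python
import os, random, time

# Illustrative test flow only: precondition the target by filling it completely,
# THEN open the measurement window for random writes. The path, capacity and
# window length are placeholders.

TARGET = "/mnt/ssd_under_test/testfile"   # hypothetical test file on the SSD under test
CAPACITY = 8 * 1024**3                    # fill 8 GiB for the sketch
BLOCK = 4096                              # 4 KiB random writes
WINDOW_SECS = 300                         # measurement window length

def precondition(path, capacity, block):
    """Sequentially fill the target so no 'fresh out of the box' erased space is left."""
    buf = os.urandom(block)
    with open(path, "wb") as f:
        for _ in range(capacity // block):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())

def random_write_iops(path, capacity, block, seconds):
    """Issue random small-block overwrites for the measurement window and report IOPS."""
    blocks = capacity // block
    buf = os.urandom(block)
    ops = 0
    fd = os.open(path, os.O_WRONLY | os.O_DSYNC)   # O_DSYNC so each write reaches the device
    try:
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            os.lseek(fd, random.randrange(blocks) * block, os.SEEK_SET)
            os.write(fd, buf)
            ops += 1
    finally:
        os.close(fd)
    return ops / seconds

precondition(TARGET, CAPACITY, BLOCK)   # the "disk is already full" step
print("sustained random write IOPS:",
      round(random_write_iops(TARGET, CAPACITY, BLOCK, WINDOW_SECS)))
```

A real enterprise evaluation would use a dedicated tool (fio, vdbench or similar) with direct I/O against the whole device and much longer windows - but the ordering is the same: fill first, measure afterwards.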
SSD testing and benchmark news
SNIA publishes draft SSD performance
testing doc
Editor:- July 12, 2010 - SNIA today announced the
availability of its
Solid State Storage
Performance Test Specification (version 0.9) for public review.
A typical flash SSD taken "fresh out of the box" and exposed to a workload experiences a brief period of elevated performance, followed by a period of transition to an eventual performance Steady State. The new SNIA methodology will close the gap between performance measurements in the lab and in normal working life, and make competitive vendor comparisons more useful.
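For readers who want a feel for what reaching "Steady State" means in practice, here's a simplified sketch of the idea - not the SNIA specification's exact criteria; the window size, tolerance and canned IOPS numbers are assumptions for illustration. It keeps running measurement rounds until the last few rounds sit within a band around their own average.

```python
# Simplified steady-state check in the spirit of the SNIA approach described
# above (NOT the specification's exact criteria). Window size, tolerance and
# the canned results are assumptions for the sketch.

WINDOW = 5        # number of consecutive rounds examined
TOLERANCE = 0.10  # each round must sit within +/-10% of the window average

def reached_steady_state(rounds, window=WINDOW, tolerance=TOLERANCE):
    """True once the trailing measurement rounds have settled."""
    if len(rounds) < window:
        return False
    recent = rounds[-window:]
    avg = sum(recent) / window
    return all(abs(r - avg) <= tolerance * avg for r in recent)

def run_until_steady(measure_round, max_rounds=25):
    """measure_round() runs one test round (e.g. an hour of random writes) and returns its IOPS."""
    rounds = []
    for _ in range(max_rounds):
        rounds.append(measure_round())
        if reached_steady_state(rounds):
            break
    return rounds

# Canned numbers mimicking the fresh-out-of-box "halo" decaying over successive rounds:
fake_results = iter([52000, 38000, 21000, 15000, 12500, 12100, 11900, 12000, 11800])
history = run_until_steady(lambda: next(fake_results))
print(f"steady state after {len(history)} rounds at ~{history[-1]} IOPS")
```

Only the settled figure at the end of a run like this is worth quoting - the elevated early rounds are the same "halo" described in the 2008 article.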
This is exactly the same point as I made 2 years earlier in the article above.
"That's why when
testing a flash SSD for 24x7 enterprise applications you must make sure
that when the benchmark window begins - the disk is already full. Otherwise
you get a "halo" effect (similar to that caused by cache hits in
traditional server benchmark tests) in which the pool of pre-erased flash
storage makes the SSD "appear" to operate much faster than it will
in a real application after it has been running for days, weeks or months."
Benchmarking enterprise SSDs - some later articles
In
2009 -
STEC published
this useful white paper -
Benchmarking
Enterprise SSDs (pdf). It reiterates many of the points originally
raised in this StorageSearch.com article (above).
In 2011 - Running Consecutive Microsoft Jetstress Performance Tests? - Prepare for Wake Turbulence - written by Allon Cohen at OCZ - warned about the interdependence of running consecutive tests - and the halo effect of invisible caches which can make benchmark results look much better than what you would see in real life.
In 2012 - Modern Methodologies for Benchmarking Enterprise SSDs - written by Shridar Subramanian at Virident Systems - reviewed the problematic history of enterprise SSD performance evaluation, which he said has gone through 4 different phases. In the same year the company also published a set of software benchmarking tools which can be used to simulate enterprise workloads. The tools can be used to evaluate any brand of SSD - not just those from Virident.
In March 2013 - factors which influence and limit flash SSD performance characteristics - the home page blog on StorageSearch.com - provided a single toolkit overview of many SSD design factors which had been discussed by the editor Zsolt Kerekes in earlier separate articles.
In August 2013 - EDN published an introductory article on the subject of measuring enterprise SSD performance - SSD performance measurement: Best practices - written by Doug Rollins, Senior Applications Engineer at Micron - which expounded some of the basic assumptions and jargon.
In August 2013 - LSI published a paper - Don't Let Your Favorite Benchmarks Lie to You (pdf) - which showed how and why many commonly used so-called "benchmark programs" will deliver completely different results for the same SSD - depending on the setup before the test, aka the "preconditioning".
how fast can your SSD run backwards?
Editor:- April 20, 2012 - StorageSearch.com today published
a new article which looks at the
11 key symmetries in
SSD design.
SSDs are complex devices and there's a lot of mysterious behavior which isn't fully revealed by benchmarks and vendors' product datasheets and whitepapers. Underlying all the important aspects of SSD behavior are asymmetries which arise from the intrinsic technologies and architecture inside the SSD.
Which symmetries are
most important in an SSD?
That depends on your application. But
knowing that these symmetries exist, what they are, and judging how your
selected SSD compares will give you new insights into SSD
performance,
cost and
reliability.
There's no such thing as the perfect SSD in the market today - but the SSD symmetry list helps you understand where any SSD in any memory technology stands relative to the ideal. And it explains why deviations from the ideal can matter.
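As a taste of what one of those asymmetries looks like in practice, here's a rough sketch (not taken from the article itself) that compares sequential write and read throughput on a drive. The path and size are placeholders, and honest numbers would need direct I/O or cache flushing to stop the OS page cache flattering the read phase.

```python
import os, time

# Rough sketch of measuring one SSD asymmetry: sequential write vs read
# throughput. Path and size are placeholders; the OS page cache will inflate
# the read figure unless you use direct I/O or drop caches between phases.

PATH = "/mnt/ssd_under_test/asymmetry.tmp"   # hypothetical file on the SSD under test
SIZE = 2 * 1024**3                           # 2 GiB
CHUNK = 1024 * 1024                          # 1 MiB transfers

def mb_per_s(seconds, nbytes):
    return (nbytes / (1024 * 1024)) / seconds

buf = os.urandom(CHUNK)

t0 = time.monotonic()
with open(PATH, "wb") as f:                  # sequential write phase
    for _ in range(SIZE // CHUNK):
        f.write(buf)
    os.fsync(f.fileno())
write_secs = time.monotonic() - t0

t0 = time.monotonic()
with open(PATH, "rb") as f:                  # sequential read phase
    while f.read(CHUNK):
        pass
read_secs = time.monotonic() - t0

print(f"write: {mb_per_s(write_secs, SIZE):.0f} MB/s")
print(f"read : {mb_per_s(read_secs, SIZE):.0f} MB/s  (expect the two to differ)")
os.remove(PATH)
```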
This is the most important article about SSDs that I've written in the past few years. I couldn't have written it before. I hope you like it. ...click to read the article
the 3 fastest flash PCIe SSDs list (or is it lists?)
Are you tied up in
knots trying to shortlist flash SSD accelerators ranked according to
published comparative benchmarks?
You know the sort of thing I mean -
where a magazine compares 10 SSDs or a blogger compares 2 SSDs against each
other. It would be nice to have a shortlist so that you don't have to waste too much of your own valuable time testing unsuitable candidates, wouldn't it?
StorageSearch's long running
fastest SSDs directory
typically indicates 1 main product in each form factor category but those
examples may not be compatible with your own ecosystem.
If so, a new article - the 3 fastest PCIe SSDs list (or is it really lists?) - may help you cut that Gordian knot. Hmm... you may be thinking that StorageSearch's editor never gives easy answers to SSD questions if more complicated ones are available.
But in this case you'd be wrong. (I didn't say you'd like the answers, though.) ...read the article
the Problem with Write IOPS in flash SSDs
Random "write IOPS"
in many of the fastest
flash SSDs are now similar to "read IOPS" - implying a
performance symmetry which was once believed to be impossible.
So why are flash SSD IOPS such a poor predictor of application performance? And why are users still buying RAM SSDs which cost an order of magnitude more than SLC (let alone MLC) - even when the IOPS specs look superficially similar?
This article tells you why the specs got faster - but the applications didn't.
And why competing SSDs with apparently identical benchmark results can perform completely differently. ...read the article