The SSD market isn't a democracy. All SSDs are not created equal. Not even when they have exactly the same memory chips inside.
editor - November 26,
This is a non-technical introduction to the thinking behind bad block management in flash SSDs - which is just one of the many vital functions performed by an SSD controller. A lot of the reader emails I get show this concept is not widely understood - even by those who are experienced with hard disk technology.
I've learned about this by talking to people in the
industry. The exact details and algorithms used are proprietary secrets and
sometimes covered by patents. But the principles are the same in all SSDs.
In flash devices 2% to 10% of blocks may be error prone or unusable when
the device is new.
And after that, data in "good blocks" can later be corrupted by charge leakage, writes in adjacent parts of the chip, wear-out, and variability in the tolerances of the R/W process in MLC SSDs.
Living with these realities and producing reliable storage devices is part of the black magic of the SSD controller - which uses architecture, data management and other tricks to ensure data integrity. The explanation below is based on an email I sent to a reader in November 2010.
SSDs don't write to the same physical location every time they write to a block - because they try to even out the total writes done on any physical block. When they get unacceptable errors from a block it's assigned to a dead block list.
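The two behaviours described above - spreading writes across physical blocks, and retiring blocks that return unacceptable errors - can be sketched in a few lines. This is a toy model under assumed names and thresholds, not any vendor's proprietary algorithm:

```python
# Hypothetical sketch of wear-leveled remapping plus a "dead" block list.
# The class name, error threshold and free-list policy are all illustrative
# assumptions - real controllers use proprietary, patented algorithms.

class FlashTranslationLayer:
    ERROR_LIMIT = 3  # illustrative threshold, not from any real datasheet

    def __init__(self, num_physical_blocks):
        self.mapping = {}                       # logical block -> physical block
        self.free_blocks = list(range(num_physical_blocks))
        self.dead_blocks = set()

    def write(self, logical_block, data):
        # Wear leveling: every write lands on a different physical block,
        # so no single block absorbs all the traffic.
        physical = self.free_blocks.pop(0)
        old = self.mapping.get(logical_block)
        if old is not None:
            self.free_blocks.append(old)        # stale copy becomes reusable
        self.mapping[logical_block] = physical

    def report_errors(self, physical_block, bit_errors):
        # Unacceptable errors: mark the block "do not use" and never reuse it.
        if bit_errors >= self.ERROR_LIMIT:
            self.dead_blocks.add(physical_block)
            if physical_block in self.free_blocks:
                self.free_blocks.remove(physical_block)
```

In this sketch each logical rewrite lands on a fresh physical block, and a block whose error count crosses the threshold drops out of circulation permanently - the two mechanisms the paragraph above describes.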
For every type of flash chip, each process stepping, and each manufacturer - the SSD designer needs to know the percentage of dead blocks which they are likely to get during the life of the SSD. (Typically using a design life of 5 years.)
Successfully working around these defects
also depends on the strength of error coding - and how the blocks are mapped
on the solid state disk.
A RAID approach with a population of thousands of flash chips in a rackmount SSD - like those made by Violin - gives a higher percentage of blocks which can fail and still leave the SSD usable - because data is striped across blocks. On the other hand - in consumer SSDs with fewer chips and lower capacity - the striping options are more limited.
This characterization process results in a bad block budget - for example 4% to 10% - of dead blocks which the SSD can find and yet still operate. Bad blocks are mapped as "do not use" - and known good blocks are substituted instead. This budget (which is due to media defects) is in addition to the budget which is calculated for attrition of blocks due to wear-out.
The percentage of bad blocks which can be accommodated is a product marketing decision. The spare blocks come from over provisioning inside the SSD - using capacity which is invisible to the host. If the bad blocks exceed the budgeted number for any reason - the SSD fails prematurely.
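A worked example with made-up numbers shows how these budgets interact. The capacities, the 7% defect budget and the 10% wear-out budget below are illustrative assumptions, not figures from any real SSD:

```python
# Illustrative numbers only: a hypothetical SSD built from 64 GiB of raw
# flash, exposing 48 GiB to the host. The hidden over provisioning must
# cover the bad block budget (media defects) plus wear-out attrition.

raw_gib = 64
visible_gib = 48
over_provisioning = raw_gib - visible_gib     # capacity invisible to the host

bad_block_budget = 0.07 * raw_gib             # e.g. 7% budget for media defects
wear_out_budget = 0.10 * raw_gib              # separate budget for wear-out

spare_remaining = over_provisioning - bad_block_budget - wear_out_budget
print(f"over provisioning: {over_provisioning} GiB")
print(f"spare left after budgets: {spare_remaining:.2f} GiB")
# If spare_remaining goes negative, the controller has run out of good
# blocks to substitute - the premature failure mode described above.
```

With these assumed numbers the design still has headroom; a flash batch with more defects than budgeted would eat into that margin.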
In the SSD market, one of the reasons that some SSDs may have failed early was that SSD designers - who knew too little about what they were doing - used flash chips from sources other than those qualified by the controller manufacturer. That threw away the built-in safety margin. Another
problem can arise when the original flash chip manufacturer changes something in
their process - which doesn't affect the parameters they are testing for - but
does change the way the devices look from the data integrity point of view. That
too - can tip the balance outside the margins designed into the controller.
Another risk of SSD failures comes from virgin SSD designers who don't
know enough about the variance of parameters in the flash chip population. If
they choose the bad block budget numbers based on too small a sample - and
don't allow enough margin - the controller runs out of spare blocks to assign.
SSDs are only as good as the people who design them and make them.
There can be orders of magnitude difference in operational outcomes - even when
different SSD makers are using exactly the same memory chips.
Most of what I know about this topic comes from
dialogs with SSD companies over a period of many years (2003 to 2013). Special
thanks to many individuals in these companies:-
Texas Memory Systems,
Violin Memory and
WD Solid State
For those who want to read more about bad blocks in flash
SSDs - try these articles.
overview of flash management techniques (pdf) - gives an overview of flash media management and how good data integrity is the result of many different techniques working together.
Increasing Flash SSD
Reliability - although this article is mainly about endurance - it gives a
good insight into how block quality checking and remapping occur as part of the
continuous work done by the SSD controller.
Block Management in NAND Flash Memories (pdf) - gives you some idea of the internal support in flash chips for data integrity. This is the lowest level in a data integrity hierarchy which is mostly managed by the SSD controller.
sudden power loss
Why should you care what happens in an SSD when the power goes down?
This important design
feature - which barely rates a mention in most SSD datasheets and press releases
- has a strong impact on
SSD data integrity
This article will help you understand why some SSDs which work perfectly well in one type of application might fail in others... even when the changes in the operational environment appear to be minor.
Due to the undesirability (from an industrial chipmaker's point of view) of waiting 7 to 10 elapsed years to collect the real-time reliability evidence which would convince industrial users it was safe to design these new products into their systems - by which time they would be EOL and long forgotten - the semiconductor industry evolved theoretical methods to satisfy customers in such markets much sooner. These centered around accelerated life tests...
TMS optimizes SSD architecture to cope with flash plane failure
Editor:- May 26, 2011 - a new slant on SSD reliability architectures is revealed today by Texas Memory Systems - who explained how their patented Variable Stripe RAID technology is used in their recently launched PCIe SSD card.
TMS does a 1 month burn-in of flash memory prior to shipment. (One of the reasons cited for its use of SLC rather than MLC.)
Through its QA processes the company has acquired real-world failure data
for several generations of flash
memory and used this to model and characterize the failure modes which
occur in high IOPs SSDs.
Most enterprise SSDs use a simple type of
classic RAID which groups
flash media into "stripes" containing equal numbers of chips. RAID
technology can reconstruct data from a failed Flash chip. Typically, when a chip
or part of a chip fails, the RAID algorithm uses a spare chip as a virtual
replacement for the broken chip. But once the SSD is out of spare chips, it
needs to be replaced.
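The spare-chip scheme described above can be modelled in a few lines - a hypothetical sketch with illustrative chip counts, showing how an SSD exhausts its spares and reaches the point where it needs to be replaced:

```python
# Minimal model of classic fixed-stripe RAID with a pool of spare chips.
# When a chip fails it is swapped for a spare; once the spares run out,
# the SSD must be replaced. All names and numbers here are illustrative.

class ClassicRaidSsd:
    def __init__(self, active_chips, spare_chips):
        self.active = active_chips
        self.spares = spare_chips
        self.needs_replacement = False

    def chip_failed(self):
        if self.spares > 0:
            self.spares -= 1            # spare becomes a virtual replacement
        else:
            self.needs_replacement = True

ssd = ClassicRaidSsd(active_chips=24, spare_chips=2)
ssd.chip_failed()
ssd.chip_failed()
ssd.chip_failed()                       # third failure: no spares left
print(ssd.needs_replacement)            # -> True
```

Note that in this classic scheme a whole chip is consumed per failure, however small the fault - which is the inefficiency the VSR approach below is designed to avoid.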
VSR technology allows the number of chips to
vary among stripes, so bad chips can simply be bypassed using a smaller stripe
size. Additionally, VSR provides greater stripe size granularity, so a stripe
could exclude a small part of a chip rather than having to exclude an
entire chip if only part of it failed - a "plane error". With VSR technology, TMS says its SSD products will continue operating longer in the field.
Dan Scheel, President of Texas Memory Systems, explained why their technology increases reliability:
"...Consider a hypothetical
SSD made up of 25 individual flash chips. If a plane failure occurs that
disables 1/8 of one chip, a traditional RAID system would remove a full 4% of
the raw Flash capacity. TMS VSR technology bypasses the failure and only reduces
the raw flash capacity by 0.5%, an 8x improvement. TMS tests show that plane
failures are the 2nd most common kind of flash device failures, so it is very
important to be able to handle them without wasting working flash."
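The arithmetic in that example checks out, as a quick calculation with the quoted numbers confirms:

```python
# Checking the arithmetic in the quote above: 25 chips, one plane failure
# disabling 1/8 of a single chip.

chips = 25
traditional_loss = 1 / chips               # whole chip removed from raw capacity
vsr_loss = (1 / 8) / chips                 # only the failed plane removed

print(f"traditional RAID: {traditional_loss:.1%}")              # 4.0%
print(f"VSR:              {vsr_loss:.1%}")                      # 0.5%
print(f"improvement:      {traditional_loss / vsr_loss:.0f}x")  # 8x
```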
comments:- by wasting less capacity than simpler RAID solutions - more usable capacity remains available for traditional over provisioning.
This extra capacity comes from the over provisioning budget - a figure which varies according to each SSD design (as discussed in my recent flash iceberg syndrome article) - but is 30% for TMS.
a book - Inside NAND Flash
Editor:- November 17, 2010 - Forward Insights (an SSD analyst company) is one of the contributors to a new book called Inside NAND Flash Memories.
The publishers say that SSD designers must understand flash technology in order to exploit its benefits and counter its weaknesses. The new book is a comprehensive guide to the NAND world - from circuit design (analog and digital) to...
SSD Data Recovery
It's hard enough understanding the design of any single SSD. And there are so many different designs in the market.
Have you ever wondered what it looks like at the
other end of the SSD supply chain - when a user has a damaged SSD which
contains priceless data with no usable backup?