"The
SSD market isn't a democracy. All SSDs are not created equal.
Not even when they have exactly the same memory chips inside." | |
|
by
Zsolt Kerekes,
editor - November 26,
2010 |
This is a non-technical introduction to the thinking behind bad block management in flash SSDs - which is just one of the many vital functions performed by an SSD controller.
A
lot of reader emails I get show this concept is not widely understood - even
by those who are experienced with
hard disk and
other storage
technologies.
I've learned about this by talking to people in the
industry. The exact details and algorithms used are proprietary secrets and
sometimes covered by patents. But the principles are the same in all SSDs.
In flash devices 2% to 10% of blocks may be error prone or unusable when the device is new. And after that, data in "good blocks" can later be corrupted by charge leakage, disturbance from writes in adjacent parts of the chip, wear-out and variability in the tolerances of the R/W process in MLC SSDs.
Living with these realities and producing reliable storage devices is part of the black magic of the SSD controller - which uses architecture, data integrity, endurance management and other tricks to ensure reliability.
The
explanation below is
based
on an email I sent to a reader in November 2010.
Controllers remap data every time they write to a block - because they try to even out the total writes done on any physical block. When they get unacceptable errors from a block, it's assigned to a dead pool.
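As a rough illustration of that idea, here is a minimal Python sketch of a controller's block mapping logic. It is not any vendor's real algorithm - the block counts, the error threshold and the method names are all invented for the example.

# A minimal sketch, not any vendor's real algorithm: every write is remapped
# to the least-worn spare block, and blocks with too many errors are retired
# into a "dead pool" and never used again.
class FlashTranslationLayer:
    def __init__(self, physical_blocks, error_threshold=3):
        self.mapping = {}                          # logical block -> physical block
        self.free = list(range(physical_blocks))   # spare (over-provisioned) blocks
        self.erase_counts = [0] * physical_blocks
        self.error_counts = [0] * physical_blocks
        self.dead_pool = set()                     # blocks marked "do not use"
        self.error_threshold = error_threshold

    def write(self, logical_block):
        """Each write lands on the least-worn spare block - never in place."""
        if not self.free:
            raise RuntimeError("no usable blocks left - the SSD fails")
        self.free.sort(key=lambda b: self.erase_counts[b])
        target = self.free.pop(0)
        self.erase_counts[target] += 1
        old = self.mapping.get(logical_block)
        self.mapping[logical_block] = target
        if old is not None:
            self._recycle(old)

    def report_read_errors(self, physical_block, corrected_bits):
        """Called when ECC had to correct bits on a read of this block."""
        self.error_counts[physical_block] += corrected_bits

    def _recycle(self, block):
        """Unacceptably error-prone blocks join the dead pool; others are reused."""
        if self.error_counts[block] >= self.error_threshold:
            self.dead_pool.add(block)
        else:
            self.free.append(block)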
For every type of flash chip and each process stepping and
each manufacturer - the SSD designer needs to know the percentage of dead
blocks which they are likely to get during the life of the SSD. (Typically using
a design life of 5 years.)
Successfully working around these defects
also depends on the strength of error coding - and how the blocks are mapped
on the solid state disk.
Using a RAID approach and a population of thousands of flash chips in a rackmount SSD like those made by Violin - gives a higher percentage of blocks which can fail and still leave the SSD usable - because data is striped across blocks.
On the other hand - in consumer SSDs with fewer chips and lower capacity - the striping options are more limited.
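To make the striping point concrete, here is a toy Python sketch (the data and chip counts are arbitrary): the contents of one failed block in a stripe can be rebuilt by XOR-ing the surviving blocks with a parity block, which is why a large, heavily striped array can tolerate a higher fraction of bad blocks.

# Toy example of parity striping - data values are arbitrary.
from functools import reduce

def xor_blocks(blocks):
    """XOR corresponding bytes across several blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data_blocks = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # blocks on separate chips
parity = xor_blocks(data_blocks)                        # stored on another chip

# Suppose the chip holding block 1 goes bad: rebuild it from the survivors.
rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert rebuilt == data_blocks[1]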
The design process results in a bad block budget - for example 4% to 10% - of dead blocks which the SSD can find and yet still operate. Bad blocks are mapped as "do not use" and known good blocks are substituted instead. This budget (which is due to media defects) is in addition to the budget which is calculated for attrition of blocks due to wear-out.
The percentage of bad blocks which can be accommodated is a product marketing decision. The spare blocks come from over-provisioning inside the SSD and using capacity which is invisible to the host.
If the bad blocks exceed the budgeted number for any reason - the SSD fails.
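The budget arithmetic looks something like the sketch below. All the figures are invented for illustration - they don't come from any real SSD datasheet.

# Illustrative figures only - not taken from any real SSD datasheet.
ADVERTISED_GB  = 240     # capacity visible to the host
RAW_FLASH_GB   = 256     # physical flash inside the SSD
DEFECT_BUDGET  = 0.04    # blocks allowed to be bad due to media defects (4%)
WEAROUT_BUDGET = 0.02    # further blocks expected to die over a 5 year life (2%)

spare_fraction = (RAW_FLASH_GB - ADVERTISED_GB) / RAW_FLASH_GB
total_budget   = DEFECT_BUDGET + WEAROUT_BUDGET

# The designer must leave enough invisible spare capacity to cover both budgets.
assert total_budget <= spare_fraction, "design error: budget exceeds spare capacity"

def ssd_state(observed_bad_fraction):
    """Once bad blocks exceed the budgeted spares, the drive can no longer remap."""
    return "operational" if observed_bad_fraction <= total_budget else "failed"

print(ssd_state(0.05))   # within budget -> operational
print(ssd_state(0.08))   # budget exceeded -> failed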
In the SSD market one of the reasons that some SSDs may have failed early was that SSD designers - who knew too little about what they were doing - used flash chips from sources other than those qualified by the controller manufacturer. That threw away the built-in safety margin. Another problem can arise when the original flash chip manufacturer changes something in their process - which doesn't affect the parameters they are testing for - but does change the way the devices look from the data integrity point of view. That too can tip the balance outside the margins designed into the controller.
Another risk of SSD failures comes from virgin SSD designers who don't
know enough about the variance of parameters in the flash chip population. If
they choose the bad block budget numbers based on too small a sample - and
don't allow enough margin - the controller runs out of spare blocks to assign
and dies.
SSDs are only as good as the people who design them and make them. There can be orders of magnitude difference in operational outcomes - even when different SSD makers are using exactly the same memory chips.
References
Most of what I know about this topic comes
from dialogs with SSD companies over a period of many years (2003 to 2013).
Special thanks to many individuals in these companies:-
Adtron,
M-Systems,
SandForce,
STEC,
Texas Memory Systems,
Violin Memory and
SiliconSystems
For
those who want to read more about bad blocks in flash SSDs - try these
articles.
A detailed overview of flash management techniques (pdf) - gives an overview of flash media management and how good data integrity is the result of many different overlapping processes.
Bad Block Management in NAND Flash Memories (pdf) - gives you some idea of the internal support in flash chips for data integrity. This is the lowest level in a data integrity hierarchy which is mostly managed by the SSD controller.
Surviving SSD sudden power loss

Why should you care what happens in an SSD when the power goes down?

This important design feature - which barely rates a mention in most SSD datasheets and press releases - has a strong impact on SSD data integrity and operational reliability.

This article will help you understand why some SSDs (which work perfectly well in one type of application) might fail in others... even when the changes in the operational environment appear to be negligible.