the importance of being earnest about 3DXPoint and other SSD memoryfication heresies
by Zsolt Kerekes, editor - StorageSearch.com - June 5, 2018
When thinking back about the top level differences between raw data storage media in the 1990s one easy way to differentiate them was by latency. So an ordered list from fastest access time to slowest would run something like:- SRAM, DRAM, flash (those were the main memories in those days), then winchester disks (the magnetic hard drives / disks we nowadays call HDDs), optical drives and finally tape. And if you were sorting this list according to the cost per byte stored then no surprise it would read about the same.
Like all such lists this is a simplification.
Optical drives of
various flavors fought hard to be recognized as viable alternatives which could
sometimes be cheaper or faster than hard drives - which sounded more plausible
when drives were commonly moved from place to place as part of the data recovery
plan in the days before fast internet brought us the cloud. And there were
also many long battles in the early 2000s between
hard drives and tape
to determine which type of magnetic media delivered the lowest cost of archive.
That's the kind of thing which used to be the subject of storage news pages like
this.
The main lesson from being at the sharp end of such discussions
in storage
history is that the tidy ordered
family trees which we see written by the inheritors of such technology
wars do not sufficiently capture the confusion and strength of arguments which
led to them.
That's because other issues which we take for granted
later (like - how fast do we actually need the data? and what's the consequence
of not getting it when we need it?) change over time as part of the evolution of
computing.
And even when everyone is approximately agreed on a general
future direction - such as towards more solid state storage - the differences
in approach can seem like ocean wide chasms.
As part of my reporting
on the new era of SSDs and talking to many evangelists in the SSD market I came
up with a phrase - the
SSD heresies - to describe how fierce these genuinely held
differences in belief could be - even when designers were contemplating
solutions to similar perceived product gap problems.
It's no surprise then that the enterprise memoryfication market has advocates pulling towards different priorities, as the memory systems IP soil is fertile with opportunities arising from new product gaps created by the mainstream adoption of SSDs - while also benefiting from newly redefined value roles for older media types too.
The battleground for converts is a proactive
cloud economy which is willing and able to measure and leverage the (lowest or
highest) random asset value of entire populations of drives and will move
towards exploiting valuable incremental differences with the currency of new
software designs.
has the jury reached a verdict on flash tiered
as RAM?
In 2015 - the opening salvo of SCM DIMM wars - it seemed plausible that flash tiered as RAM might pose an existential threat to growth in the DRAM market. The argument offered at the time by companies like Diablo Technologies was that a DIMM based solution which could transparently replace 80% or so of DRAM with tiered flash (while delivering similar and sometimes higher application performance - due to the affordability of bigger "RAM") would be a market changer because flash had much higher capacity than DRAM at lower cost. History (so far) has shown us that such a transition didn't happen as predicted - even when the price and availability of DRAM escalated to the pain levels caused by the memory shortages of 2016/7.
Knowing as we now do that users in the market didn't all rush in droves to adopt the new flash DIMMs tiered as RAM - the evidence suggests a reinterpretation of the technology is due. And I think it would go as follows:-
- flash tiered as RAM in DIMM form factors (from a cloud use perspective) is an incremental rather than a disruptive technology.
The application benefits (when they occurred) were typically a small improvement (maybe 20 to 30%) compared to tiering flash as RAM in other form factors such as PCIe SSDs or SATA SSDs. So the risk of switching to single source premium devices in DIMMs wasn't worthwhile compared to using "generic" SSDs in cheaper form factors.
- software plays a big part in new hardware adoption.
But proving
that it works takes years.
Memory products interface with more
types of software than storage products. Therefore proving that a new memory
defined software can be trusted requires either a very long time (for general
solutions) or a narrower captive application set.
The software
approval and verification time to reach critical mass for general adoption by
users is longer than the lifetime of a single memory product generation.
That makes it difficult for a single memory product startup with its own unique
software requirements to reach a stable funding level unless it has a cash cow
niche application.
- the RAM market itself is changing - so the ideal direction of change for users is memory solutions which can deliver application outcomes in consistently shorter times while analyzing bigger datasets.
Speed itself has an intrinsic value. And my blog - are we ready for infinitely faster RAM? - explains why there was a limited appetite for memory accelerators (much faster than DRAM) in the past - and why this appears to be changing significantly.
Intel vs Micron - emerging differences in assessing the near term strategic importance of recently commercialized nvms
As I hinted above we shouldn't be surprised that the SSD design heresies (what's the best way to design an SSD system - given all the permutations of memory, interface, software and controller IP) have - like a rolling stone gathering up sticky new moss - inevitably drifted into the memory systems design heresies.
On a note of
SSD jargon - re the
evolving change of use in
what's an SSD?
- for me - as everything involving memory systems design nowadays is intricately
linked to controller design and software and architecture - I still think the
term "SSD" covers it. Unless "memoryfication" catches on.
SSD has the virtue of being short. (Rob Peglar, President at Advanced Computation and Storage LLC, on seeing this said on linkedin - "I think you just invented a new word" - but actually - due to its convenience as a shortcut spanning a wide slice of memory architecture trends - I've been using it since 2017.)
Going back to emerging differences of opinion re
memoryfication futures - my point is that whatever any particular manufacturer
may tell you about the overwhelming superiority of their own approach to memory
product design (and whether they're a fabless IP startup or a memory T-Rex)
the memory is still just a part of a data system - and in the rich memory sea we
now have - other design approaches to memory soup may do the same job just as
well. (This is exactly the same advice as the first bullet point - don't
believe everything SSD companies tell you about the past, present or future of
the SSD market - in my 2012 article -
Enterprise
SSDs - the Survive and Thrive Guide.)
A recent example is the difference in strategic outlook for memories between Intel (infatuated with 3DXpoint) and Micron (still in love with DRAM) which has been aired in public statements about policy, investment directions etc.
The blog
-
XP
Dreams: Intel And Micron Diverge by analyst William Tidwell on
SeekingAlpha.com - examines the "stark differences between the two
companies". And among other things he says - "Intel has never ceased
aggressively hyping the technology... projecting up to $8 billion in XP DIMM
revenue in 2021. Micron, on the other hand, has been almost completely silent,
revealing little other than its branding and its confidence that the new
memory has great potential." ... ...read
the article
This evoked a reaction from Sang-Yun Lee, the founder of BeSang (a let's-make-memory-chips-better IP company), who asked on linkedin - "Why Intel is so obsessed with 3DXP?..."
I replied like
this...
"Agreed that 3DXP doesnt deliver many benefits now. But it
could be used as an incubator technology which enables software developers to
explore new systems level optimisations which play around with closer
integration of processors and low latency big local memory having fewer caches.
New
memory
defined software platforms will create big market opportunities as
significant as Wintel, Unix, http were before. The new software doesnt need to
be Intel memory based (could be done better today with other combinations of
memory). But 3DXP provides ISVs a convenient reference point which is good
enough to make such experiments easy to try. Intel needs just one of these
experiments to succeed to save its processor future.
Big gamble? -
sure. 3DXP is a honey trap for new software. Maybe not the best memory
technology today but the software industry can still remember how sweet Intels
past roadmaps used to taste.
That's despite Intel having been absent
at the conception or birth of the
enterprise PCIe SSD
accelerator market which was the first transformative step in the
memoryfication of the enterprise."
See also:-
who d'you call
for the SSD crystal ball?,
why did we get
into such a mess with SSD software? (2012),
hostage to the
fortunes of SSD (2013),
the enterprise
SSD story - why's the plot so complicated? (2015)
I love ratios as they have always provided a simple way to communicate to readers the design choices in products which tell a lot to other experts in that field.
re RATIOs in SSD architecture (this was the home page blog in June 2018)
new thinking in SSD controller techniques
reveals "layer aware" properties exploitable in 3D nand flash
Editor:- June 30, 2018 - A new twist using
RAID ideas in
SSD controllers has
surfaced recently in a research paper -
Improving
3D NAND Flash Memory Lifetime by Tolerating Early Retention Loss and Process
Variation (pdf) by Yixin Luo and Saugata Ghose (Carnegie Mellon
University), Yu Cai (SK Hynix), Erich F. Haratsch (Seagate Technology) and
Onur Mutlu (ETH Zürich) - which was presented at the recent
SIGMETRICS conference
June 18-22, 2018.
The authors say that in tall 3D nand (30 layers and upwards) the raw error rate in blocks in the middle layers is significantly worse (6x) compared to the top layer. Therefore to enable more
reliable and
faster SSDs using 3D nand for enterprise applications they propose a new type
of RAID which pairs together the best predicted half of a RAID word with the
worst predicted half from another chip in the same SSD.
This new RAID
concept starts to be feasible in a very small population of chips - unlike
traditional 2D nand schemes which need more chips to be installed in the SSD.
The
new RAID is called Layer-Interleaved RAID (LI-RAID) - which the authors
say "improves reliability by changing how pages are grouped under the RAID
error recovery technique. LI-RAID uses information about layer-to-layer process
variation to reduce the likelihood that the RAID recovery of a group could fail
significantly earlier during the flash lifetime than the recovery of other
groups." ...
read the article (pdf)
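To make the grouping idea more concrete, here's a minimal sketch (in Python - my own illustration, not the authors' code) of how a controller might build layer-interleaved RAID groups from a per-layer error rate prediction, so that no group is made up entirely of the weaker middle layers. All the names and numbers below are hypothetical.

# A minimal sketch (my own illustration, not the authors' code) of the
# layer-interleaved grouping idea: pages from layers predicted to have a
# high raw bit error rate are grouped with pages from more reliable layers
# on other chips, so no RAID group is built entirely from "weak" middle layers.

def li_raid_groups(num_chips, predicted_rber_by_layer):
    """predicted_rber_by_layer: per-layer error estimates (e.g. from
    characterizing a small sample of blocks in each layer).
    Returns RAID groups as lists of (chip, layer) page locations."""
    num_layers = len(predicted_rber_by_layer)
    # Rank layers from most reliable to least reliable.
    layers_best_first = sorted(range(num_layers),
                               key=lambda layer: predicted_rber_by_layer[layer])
    offset = max(1, num_layers // num_chips)
    groups = []
    for stripe in range(num_layers):
        group = []
        for chip in range(num_chips):
            # Each chip contributes a page from a different reliability rank,
            # so strong and weak layers are mixed within every group.
            rank = (stripe + chip * offset) % num_layers
            group.append((chip, layers_best_first[rank]))
        groups.append(group)
    return groups

if __name__ == "__main__":
    # Toy numbers only - middle layers given roughly 6x the error rate of the top layer.
    rber = [1, 2, 4, 6, 6, 5, 3, 2]
    for group in li_raid_groups(num_chips=4, predicted_rber_by_layer=rber):
        print(group)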
Editor's comments:- the new RAID is just one of many gems in this research paper. Others include the discovery that remanence in 3D nand includes a significant short term charge loss (in the first few minutes after writes), and also that an endurance based characterization of a small part of each chip can be used to predict an optimized layer dependent threshold read voltage for all the layers in the chip. I've discussed the
significance of adding the concept of "layers" to "number of raw
chips" to the thinking in SSD controller design in my recent
home
page blog.
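And as a rough feel for that second gem - the layer dependent read voltage idea - the sketch below (again my own simplification, with hypothetical interfaces and toy numbers rather than anything from the paper) characterizes a few sample blocks in each layer and keeps the read threshold offset which gives the lowest measured raw bit error rate for that layer.

# Illustrative sketch only - hypothetical interfaces, not a real controller API.
# Characterize a few sample blocks per layer, then keep the read threshold
# offset with the lowest measured raw bit error rate for that layer.

CANDIDATE_OFFSETS = [-3, -2, -1, 0, 1, 2, 3]   # read-retry steps, vendor specific

def calibrate_layer_offsets(num_layers, sample_blocks, measure_rber):
    """measure_rber(layer, block, offset) is a hypothetical callback which
    reads a sample block at the given threshold offset and returns its raw
    bit error rate. Returns a {layer: best_offset} table for the chip."""
    table = {}
    for layer in range(num_layers):
        best_offset, best_rber = 0, float("inf")
        for offset in CANDIDATE_OFFSETS:
            # Average the error rate over the small per-layer sample.
            rber = sum(measure_rber(layer, block, offset)
                       for block in range(sample_blocks)) / sample_blocks
            if rber < best_rber:
                best_offset, best_rber = offset, rber
        table[layer] = best_offset
    return table

if __name__ == "__main__":
    import random
    def fake_rber(layer, block, offset):        # toy measurement model
        ideal = (layer % 4) - 2                 # pretend each layer prefers a different offset
        return abs(offset - ideal) * 0.01 + random.random() * 0.001
    print(calibrate_layer_offsets(num_layers=8, sample_blocks=3, measure_rber=fake_rber))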
despite over $1 billion / quarter in storage revenue
Micron remains a confident DRAM company at the core
Editor:-
June 24, 2018 - Micron
disclosed some useful metrics and opinions about the SSD and memory market -
related to its experience in the quarter ended May 31, 2018 - in its recent
earnings
conference call (transcript on seekingalpha.com).
- 71% of Micron's revenue in the quarter came from DRAM. DRAM revenue grew
56% yoy.
- Storage Business Unit revenue (mostly SSDs and managed nand) was $1.1
billion.
- 3DXPoint sales were "very little".
Micron seems
confident that demand for memory products will continue to grow faster than in
past
memory business cycles due to new usage factors (the memoryfication of
everything factor).
Sanjay Mehrotra,
President
& CEO Micron - said - "...AI driven, AI training driven compute
workloads have like 2x the amount of DRAM and 6x the amount of SSD. So, these
trends are really secular in nature. We are at the very, very beginning. And
same way in mobile in terms of our low power DRAM where we have very strong
position, DRAM contents requirements are going, continuing, to
increase." ...read
the article
comparing new embedded memory characteristics
free
overview from Objective Analysis
Editor:- June 20, 2018 -
New
Memories for Efficient Computing (pdf) - is a free white paper by Jim Handy - Founder - Objective Analysis which
summarizes and compares the technology status (cell size, R/W, endurance,
retention, temperature and manufacturability) of all the main embedded memory
types which are competing for design wins with
DRAM, SRAM and
flash in the
memoryfication market today.
Among other things Jim notes this...
"Another important consideration is the scalability of the technology.
Certain emerging memory technologies, particu-larly FRAM and PCM, have proven
challenging to scale. FRAM has not been successfully scaled below 90nm and PCM's
"On" resistance increases as the cell size decreases, making the
technology more noise sensitive as the process shrinks, although PCM researchers
successfully successfully developed a 5nm cell over a decade ago.." ...
read the article (pdf)
Editor's comments:- Throw away your
dusty old text books and scrub the
old web
bookmarks. Jim Handy's free 2018 memory selector guide lists all the
memories whose names you can't quite remember.
Churchill said his
staff kept mixing up Iran and Iraq in WW2 so he insisted on them being called
Persia and Iraq in memos.
Likewise you may find FRAM, ReRAM, MRAM, NRAM, PCM etc fading in and out of memorability in your organic brainspace, having waited nearly 20 years for them to become really emerged - which they finally did in 2017 - thanks in part to the price of flash and DRAM having moved backwards in time and upwards in $/bit by 2-4 years compared to earlier expectations, as a result of business decisions by big memory suppliers during the self inflicted memory shortages.
new report lists malware attack vectors for memory in
processors
Editor:- June 14, 2018 -
Security
Issues for Processors with Memory is a new report (90 pages, $975) by Memory Strategies International
with ramifications (I had to use that word) for the memoryfication of processors
market.
The report includes a comprehensive list of the dimensions in which security can be attacked and an outline of design mitigation directions.
Among other things the scope includes:- "Issues of volatile
vs. non-volatile memory for cache and main memory involve consideration of
security hazards. Cryptography in multicore coprocessor systems are an issue.
Security of data on network buses is critical for military, medical and
financial systems with remedies suggested for replay attacks..." ...see more
about this report
See also:-
is data
remanence in persistent memory a new risk factor?,
optimizing
CPUs for use with SSD architectures,
SSD security,
PIM, in-situ processing
and other SSD jargon
in-memory cache as a cloud service - beta from GridGain
Editor:-
June 12, 2018 - GridGain
Systems today
announced
the beta release and free trials of GridGain
Cloud - an in-memory cache-as-a-service that allows users to rapidly deploy
a distributed in-memory cache and access it using ANSI-99 SQL, key-value or REST
APIs. The result is in-memory computing performance in the cloud, which can be
massively scaled out and can be deployed in minutes for caching applications.
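For a feel of what the key-value side of a cache-as-a-service looks like to an application, here's a minimal sketch using the open source Apache Ignite thin client (which GridGain's platform is built on) - the endpoint, cache name and keys are placeholders, and a GridGain Cloud deployment would have its own connection and security details.

# Minimal key-value sketch using the open source Apache Ignite thin client
# (pip install pyignite). The endpoint, cache name and key are placeholders -
# a cloud deployment supplies its own address and credentials.
from pyignite import Client

client = Client()
client.connect('127.0.0.1', 10800)                   # placeholder endpoint

cache = client.get_or_create_cache('session-cache')  # hypothetical cache name
cache.put('user:42', 'last_page=/checkout')          # key-value access
print(cache.get('user:42'))

client.close()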
See also:- SSD
empowerment in cloud
DRAM costs lifted server revenues in Q1 - says Dell'Oro
Editor:-
June 12, 2018 - The top 4 Cloud Service Providers - Google, Amazon, Microsoft,
and Facebook consumed most of the 920,000 white box servers shipped in Q1 2018
according to a
report
by Dell'Oro Group who also
attribute higher average server selling prices to the DRAM price factor.
Editor's comments:- in recent years CSPs and other internet scale actors have switched roles. Having been early adopters of SSD technologies since the early 2000s - and impatient at waiting for big brand datasaurs to understand their requirements - these big users are now at the forefront of designing new architectures to increase the efficiency of storage and to push the boundaries of memory systems performance.
See also:-
who does storage market
research?
dogs can sniff out USB drives and phones
Editor:-
June 11, 2018 - Police dogs have been trained to find hidden flash drives -
according to a recent
story
in the Verge.
See also:-
consumer SSD guides,
data recovery,
fast erase SSDs
Memblaze launches new PPR enhanced 2.5" NVMe SSDs
Editor:-
June 8, 2018 - it seems like a long time since I heard from Memblaze. Today they
announced
new dual port products aimed at the long established 2.5" PCIe SSD
market. (This form factor first headlined in SSD news pages and related events
in 2012).
Like many past products in this category from other manufacturers - a key feature is the balance between raw data access performance and power consumption - the "performance-to-power ratio".
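The ratio itself is simple arithmetic - the small sketch below shows the kind of calculation involved, using placeholder datasheet style numbers (not Memblaze figures).

# Placeholder datasheet-style numbers (not Memblaze figures) showing the
# arithmetic behind a performance-to-power comparison.
drives = {
    "drive A": {"random_read_iops": 700_000, "active_watts": 14.0},
    "drive B": {"random_read_iops": 550_000, "active_watts": 9.0},
}

for name, d in drives.items():
    kiops_per_watt = d["random_read_iops"] / d["active_watts"] / 1000
    print(f"{name}: {kiops_per_watt:.1f} K IOPS per watt")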
Excelero accelerates Ceph
Editor:- June 6, 2018 -
What would you do if you could find a way to reduce the latency of fault
tolerant distributed storage on commodity hardware by an order of magnitude?
Keep quiet about it and don't tell your competitors - would be a
common answer.
Instead one of Excelero's
customers was happy to share their finding re
Ceph platforms in a joint
press
release today.
After researching NVMe-oF options the Germany based customer - teuto.net - tried iSCSI appliance-based storage solutions, then vetoed them as limiting seamless growth and increasing costs. It also vetoed Dell EMC ScaleIO, which didn't support NVMe-oF and was costly.
Using Excelero's software enabled a 10x reduction in Ceph
latency.
Flexxon's industrial SD cards show sophistication of a market
once seen as simple
Editor:- June 2, 2018 - Flexxon recently
announced
a new family of industrial SD cards for use in automotive and medical markets.
Interesting to see that the range of internal flash memories within this
single (superficially fairly simple standard) family includes:- SLC, pSLC (2D
and 3D), MLC, and TLC (which is 3D of course).
This shows how
sophisticated and nuanced the embedded market has become at analyzing value and
selecting the operating parameters for different use cases.
see
also:- tell the buyer
there's no such thing as a simple standard industrial SSD
"GridGain is to
memory
defined software - what
Texas Memory Systems
was to SSD accelerators on the SAN, and
Fusion-io was to
server based SSD accelerators - a long term innovator and pioneer. So when you
see educational articles like this you know there's real authority."
Zsolt Kerekes
- editor - StorageSearch.com
- commenting on linkedin (June 21, 2018) about a new article -
Memory-Centric
Architectures: What's Next for In-Memory Computing written by Abe Kleinfeld,
President & CEO at GridGain
Systems - and published on The New Stack.
Editor's comments:- among the things I liked (apart from the whole article) were the examples of customer metrics using IMC.
For example:- Abe mentioned
this...
"Workday uses its in-memory computing platform to process
about 189 million transactions per day, with a peak of about 289 million
per day. For comparison, Twitter does about 500 million tweets per day."
...read
the article
What I like about Abe Kleinfeld's market wake up call articles about IMC is that they show the proven power of using this type of technology.
In the early days of the mission critical SSD market
customers who got massive computing gains from using SSD acceleration preferred
to keep quiet about it - which could be frustrating for the pioneering vendors
who had educated them, analyzed their
bottlenecks
and installed their impossibly faster systems. Users didn't want competitors
(or enemies) to learn what had been done.
See also:-
SSD education,
why use SSDs?
(2003 to 2005)
Memory
Defined Software - a new market in the making
There's a new software idea that's been
experimented on in the AI skunkworks in the cloud and as patentable secret
enhancements in next generation embedded processor designs. This new concept and
exciting new market (for the VCs reading this) will be more significant than a
new OS and will mark a break in the way that the enterprise thinks about
software.
You've had plenty of warning about the new chips but
memoryfication doesn't stop with faster storage. The idea didn't have a name
when I started writing about it. But what it should be called is obvious.
Memory
Defined Software doesn't have to work at being backwards compatible
because the legacy storage industry will import and export to it if they want
to play in data's future.
See more about this in my blog -
introducing -
Memory Defined Software. (Sometimes you can change the world with software
which breaks all the rules - if you can find the right platform to run it on.) ...read the article
To be or not to be? Mice or mouseless? - that is the question.
Editor:- June 18, 2018 - If you trawl the
archives of Shakespeare's scribblings (even the fake plays and musicals) I'm
pretty sure he didn't have anything to say about the
role of mice as icons
on a data storage web site. Although he did have a lot to say about life,
changes, revolutions, dynasties and successions.
So why the question?
- mice or mouseless?
StorageSearch.com
is for sale.
I'm retiring - and I'm looking for a new owner for the
site who will value my readers.
I will stop updating StorageSearch.com on December 25, 2018.
And I'll freeze the site after that date - pending the formal closing of the sales process. Mice or mouseless will be one of the branding questions to be determined by the new owner in 2019 - whoever they may be.
As
part of this plan I have also told advertisers that the web ad model (which has
worked so well since 1996) is now EOL. This means the site will be offered for
sale without any ties. ...read more about this
If you could go back in
time and take with you - in the DeLorean - a factory full of modern
memory chips and SSDs (along with backwards compatible adapters) what real
impact would that have?
are we ready for
infinitely faster RAM?
after AFAs
- what's next?
Throughout the
history of
the data storage market we've always expected the capacity of enterprise user
memory systems to be much smaller than the capacity of all the other attached
storage in the same data processing environment.
A
classic blog on StorageSearch.com
- cloud
adapted memory systems - asks (among other things) if this will always be
true.
Like many of you - I've been thinking a lot about the
evolution of memory technologies and data architectures in the past year. I
wasn't sure when would be the best time to share my thoughts about this one.
But the timing seems right now. ...read the article
why should you measure the performance of a RAMdisk emulation running in "flash virtualized as RAM" hardware?
here's the reason