So-called "emerging memories" - some of which had gotten to be teenagers before they quit their dark dens and emerged as data industry citizens - have this year (2017) been at the heart of claims by systems oriented memoryfication startups that they could change the world of storage and memory arrays as much as SSDs changed the landscape before.
trajectory of SSD market's onward rebound after 2017
The memory chip count ceiling - around which the SSD controller IP is optimized - predetermines the efficiency of achieving system-wide goals like cost, performance and reliability.
Size matters in SSD controller architecture
NVDIMM tiered memory evolution
Editor:- February 7, 2017 - "Cache based NVDIMM architectures will be the predominant interface overtaking NVMe within the next 5-10 years in the race for performance" - is the concluding message of a recent presentation by Doug Fink, Director of Product Marketing - Xitore - Next Generation Persistent Memory Evolution - beyond the NVDIMM-N (pdf)
Among other things Doug's slides echo a theme discussed before - which is that new memory media (PCM, ReRAM, 3D XPoint) will have to compete in price and performance terms with flash based alternatives and that this will slow down the adoption of the alternative non-volatile memories.
Editor's
comments:- Xitore (like others in the
SCM DIMM wars
market) is working on NVDIMM form factor based solutions and in this and
an earlier
paper
they provide a useful summary of the classifications in this module
category.
However, the wider market picture is that the retiring and retiering DRAM story cuts across form factors - with many other feasible permutations of implementation possible.
So - whereas the NVDIMM is a seductively convenient form factor for systems architects to think around - the competitive market for big memory will use anything from SSDs on a chip up to (and including) populations of entire fast rackmount SSD boxes as part of such tiered solutions - if the economics, scale, interface fabric and software make the cost, performance and time to market sums emerge in a viable zone of business risk and doability.
see also:- SSD news, storage market research, RAM ain't what it used to be
"At the technology
level, the systems we are building through continued evolution are not advancing
fast enough to keep up with new workloads and use cases. The reality is that
the machines we have today were architected 5 years ago, and ML/DL/AI uses in
business are just coming to light,
so the
industry missed a need."
From the blog - Envisioning Memory Centric Architecture by Robert Hormuth, VP/Fellow and Server CTO - Dell EMC (January 26, 2017)
Some SSD vendors do get to a threshold revenue level - despite these online deficiencies - because their sales people work hard and their VCs are rich.
But most SSD companies will fail to get to the next level of sustainable business growth - the level at which the customer finds you, and not the other way around - unless they invest more in their online SSD communications assets.
what do I need to know about any new rackmount SSD?
after AFAs - what's the next box?
cloud adapted memory systems
by Zsolt Kerekes, editor - StorageSearch.com - January 24, 2017
Last year I had the idea of writing an April 1st blog on the theme of cloud adapted memory systems.
The core idea was to have been a spoof press release about a rackmount memory system for enterprise users which could connect into the fabric of their applications and which had been optimized to support cloud services as the next (slowest) external level of latency.
The
product architecture was a multi-tiered memory systems box in which all the
integrated memory resources could be dynamically configured to behave like
RAM or SSD storage or persistent memory - depending on the vintage and preferences of the user application software.
An underlying
assumption in my spoof article was that as you move up the latency ladder and
move into the slower domains beyond this box - the next level is also likely to
be another memory systems box or the cloud.
From the perspective of
grounded networked user systems (by which I mean user systems which do not
form a native part of the public cloud infrastructure) the cloud (in all its
forms - public, private or micro-tiered and local) has replaced the hard drive array and tape library as the slowest and cheapest data storage which your software might encounter. Everything else is memory.
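The latency ladder described above can be sketched as a small illustrative model. The tier names and latency figures below are my own rough order-of-magnitude assumptions for a grounded user system, not measurements or product data - the point is simply that the cloud now sits where the hard drive array and tape library used to sit.

```python
# Illustrative latency ladder for a grounded user system, slowest tier last.
# All latency figures are ballpark order-of-magnitude assumptions for
# illustration only - not benchmarks.
latency_ladder = [
    ("on-chip SRAM cache",          1e-9),   # ~1 ns
    ("local DRAM",                  1e-7),   # ~100 ns
    ("persistent memory / NVDIMM",  1e-6),   # ~1 us
    ("local NVMe flash SSD",        1e-4),   # ~100 us
    ("rackmount memory system",     1e-3),   # ~1 ms across the fabric
    ("the cloud",                   5e-2),   # ~tens of ms - replaces HDD/tape
]

def slowest_tier(ladder):
    """Return the name of the highest-latency tier.
    Everything faster than it is, in effect, memory."""
    return max(ladder, key=lambda tier: tier[1])[0]

print(slowest_tier(latency_ladder))  # -> the cloud
```

In this sketch the "slowest virtual peripheral" falls out of the data rather than being hard-coded - which matches the frame-of-reference point made later in the blog: move the observer inside the cloud and the ladder (and its slowest rung) changes.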
In this scenario there's no role for user software which was written around a hard drive access model. Indeed, as long term readers already know, the mission of identifying and removing all such "HDD driven" (prefetch, cache, and pack it all up) embedded software activities has - for the past 10 years - been a secret SSD software weapon used by many leading companies to improve the speed and utilization of their integrated solid state storage systems.
Although, for economic
reasons, users might still encounter hard drives in the
cloud, or a micro
cloud, or a hybrid
storage appliance, nevertheless from the perspective of planning new
systems for users - the key strategic device for enterprise data performance is
the memory system.
raw chip memory... how much as SSD? how much
as memory?
This raises the question:- what proportion of the raw
semiconductor memory capacity ought to be usable as storage (SSD) or
usable as memory (RAM -
as in "random access memory" which operates with the software
like DRAM but which could be implemented by other technologies).
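As a thought experiment, that SSD-versus-RAM question can be modeled as a simple adjustable partition of the raw capacity - a software slider over one pool of semiconductor memory. The function and field names below are hypothetical, purely for illustration of the ratio idea:

```python
def partition_raw_capacity(raw_capacity_gb, ram_fraction):
    """Split one pool of raw semiconductor memory between a RAM-like
    personality and an SSD-like (storage) personality.
    ram_fraction is the slider: 0.0 = all storage, 1.0 = all memory.
    A purely illustrative model - real products would also have to
    account for overprovisioning, metadata and datapath constraints."""
    if not 0.0 <= ram_fraction <= 1.0:
        raise ValueError("ram_fraction must be between 0 and 1")
    as_ram = raw_capacity_gb * ram_fraction
    as_ssd = raw_capacity_gb - as_ram
    return {"as_ram_gb": as_ram, "as_ssd_gb": as_ssd}

# Example: a 10 TB raw pool with a quarter presented as big RAM.
print(partition_raw_capacity(10_240, 0.25))
# -> {'as_ram_gb': 2560.0, 'as_ssd_gb': 7680.0}
```

The interesting product question is not the arithmetic, of course, but whether the installed datapaths and software would let a user move that slider after deployment.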
Ratios
of one thing to another have often been useful indicators of changing
expectations in the storage market - because they are simple to grasp - even
when the associated technologies are not.
Despite the attachment constraints of legacy interface types (same chip datapath, DDR-X, PCIe, Gen-Z, SAS, IB, GbE, photonic etc) I anticipate that emulating SSD arrays and/or big RAM (these two choices determine the "personality" of the installed memory resources in a way we can understand today) could one day (with appropriate datapaths) be as easy to adjust as the ratios of flash memory to hard drives which we saw being promised in a clever "try before you buy" customer experience and business development tool in 2014 - the flash juice strength slider mix - from Tegile Systems - which they used to woo impecuniously minded, hybrid array inclined users closer towards the benefits of more expensive (to buy) all flash arrays.
The more I thought about it the more I realized that as an April 1st type of article this cloud adapted memory systems blog just wouldn't work. It was too close to the kind of products we're already seeing in the market.
But
as a thought provoking feature it got me thinking about some related issues. See
if any of these strike a chord with you.
expectations for memory
storage systems
In the past we've always expected the data
capacity of memory systems (mainly
DRAM) to be much smaller
than the capacity of all the other attached storage in the same data
processing environment. The rationale for this being the economies (dare I say
cost) of access
time, data density and electrical power - which were traditionally implemented by many different types of storage media (solid state, magnetic and optical) each having their own unique characteristics.
In a modern data system - even one
which is entirely solid state - the arguments for tiered products are the
same as they have
always been because "faster" usually means "runs hotter".
But this new world of "memory systems everywhere" opens the possibility that random read access times (across a significant range of applications data) are similar even if the random write time (including verify and play it again Sam) aspects of that data cycle remain variable.
But would
enterprise systems be more efficient (and run faster and at lower cost) if all
the software was rewritten to assume that memory was large (and could be
persistent) whereas storage (initially supported to emulate legacy
applications and grow revenue for such systems) was small?
For a
longer discussion of such issues see -
where are we
heading with memory intensive systems?
frames of relativity -
where is the cloud?
Earlier in this blog when describing the
relative access times of the memory systems box compared to data in the cloud
I was assuming that the frame of reference was from the perspective of the
user's system (which is located outside the cloud). That's why I said the cloud
would replace the hard drive as the slowest virtual peripheral. Of course if
you're thinking about systems architecture from the angle of designing
infrastructure components in the cloud - then that "slowness" isn't
generally true. And you will still be designing some boxes which support
physical hard drives (until a cheaper option comes along or until you can
monetize the seldom accessed data in a better way).
software's role
in acceleration - worth the wait
As with the SSD market in the
past so too with the memory systems market there will be bigger and faster
adoption of new technologies when there is more software speaking the same
language. Having products which interoperate with legacy software is business
plan "A" and will fund some interesting business stories. But
getting to the next stage of the memory systems market where the installed
memory base (of randomly accessible memory) begins to creep up to the size of
the installed capacity of SSD storage - will require a lot of new software which
can leverage the memory assets with fewer backward glances.
You might say we've already got SSD software solutions which can repurpose flash into useful roles as big memory - and pioneering products have already shown the way.
So why do we need any new hardware
at all?
We've got the new memories coming anyway.
Some of
them will stick in easy to identify places. Others have yet to find sustainable
new roles. What are those roles? I'll be dealing with those issues in a future
blog here - which I've tentatively called - the survivor's guide to all
semiconductor memory and the diminishing role of form factors.
See you
then.
PS - Although I didn't publish my spoof article about "cloud adaptive memory" (which was to have been its original title back in early 2016) - I did spend a lot of time thinking about the consequences of those ideas. And they clearly influenced the choice of the serious articles and news coverage which I did apply myself to, as you may have seen.
To steer your way to
future markets sometimes you have to consider ideas which at first seem like a
ludicrous stretch from reality and follow them through for a while as if they
were real before you can recognize that the truths which emerge from
analyzing such notions can be useful.
Previous examples of spoof
articles which were useful forerunners of reality discussed issues like why
SSDs would replace HDDs (as cheap bulk storage) even if HDDs were free (in the
article
towards the petabyte
SSD), and the complexities of signal processing in flash level discrimination (and data integrity) - which we now call adaptive DSP (here's a link to the 2008 spoof article).
For
the philosophy behind this approach see my article -
Boundaries Analysis in SSD Market Forecasting.
PPS - In 2014 I
discussed the idea of unified storage (SAN and NAS) being the old fashioned "gentlemen's
club" way in an interview with Frankie
Roohparvar (who at that time was CEO of
Skyera and is now
Chief Strategy Officer at Xitore).
I mischievously sounded him out on my expectation of being able
to add in the capability of emulating big persistent memory into the
new dynasty
unified solid state data box feature set. For that story see -
Skyera's new skyHawk
FS in archived
news.
a storage architecture guide
are you ready to rethink RAM?
playing the enterprise SSD box riddle game
hidden and missing segments of opportunities for rackmount flash
why do you need a supported RAM disk emulation in your new "flash as RAM" solution?
If you're one of those who has suffered from the memory shortages it may seem unfair that despite their miscalculations and over-optimism the very companies which caused the shortages of memory and higher prices - the major manufacturers of nand flash and DRAM - have been among the greatest beneficiaries.
consequences of the 2017 memory shortages
Many of the important and sometimes mysterious behavioral aspects of SSDs which predetermine their application limitations and usable market roles can only be understood when you look at how well the designer has dealt with managing the symmetries and asymmetries which are implicit in the underlying technologies contained within the SSD.
how fast can your SSD run backwards?
Can you trust market reports and the handed down wisdom from analysts, bloggers and so-called industry experts?
heck no! - here's why
If you spend a lot of your time analyzing the performance characteristics and limitations of flash SSDs - this article will help you to easily predict the characteristics of any new SSDs you encounter - by leveraging the knowledge you already have.
flash SSD performance characteristics and limitations