CPUs in SSDs...
whatever made a good SSD (and controller) in the past will no longer
be adequate in the future
|(Editor:- January 26, 2017 -
This is part of what I said to a reader this week about optimizing CPUs
for use in SSDs.)|
The characteristics of CPUs used within COTS SSDs vary widely.
One direction of influence comes from the anticipated market for the SSD - this is the aspect which is easiest to understand.
So that results in different preferences for enterprise SSDs which have high solo performance (like Mangstor) compared to 2.5" SSDs deployed in arrays.
And power consumption can be a key factor too: controllers which are optimized for low power consumption are very different in their choice of CPU - and necessarily in their flash algorithms too - because they can't depend on the kind of power budget which makes enterprise endurance management code easier to design.
Unfortunately I think that analyzing what happened in the past in the SSD controller / SSD processor market isn't a reliable predictor for future trends.
An influence which has been trickling down from lessons learned in
the array market is the powerful system level benefit of passing intelligent
control from a viewpoint which is outside the ken of the SSD controller located
in the flash storage form factor.
And partly due to that applications awareness - and likely to tear apart many controller business plans - are the contradictory requirements between custom and standard products. (Described in more detail in my 2015 SSD ideas blog - SSD directional push-me pull-yous.)
And - as always with the SSD market - different companies can take very different approaches to how they pick the permutations which deliver their ideal SSD.
Added to that I think a new emerging factor in memory systems will be whether the CPUs are able to deliver applications level benefits by integrating nvm or persistent memory within the same die as the processor itself.
That's partly a process challenge but also a massive architectural and business gamble.
In the past it has been obvious that some SSDs
incorporated nvm registers or other small persistent storage memory (apart from
the external flash) to deliver
data integrity features which didn't need capacitor holdup.
What is less clear is the direction of travel with tiered memory on the processor die. When it comes to chip space budget - is it worth trading cores to enable bigger (slower) persistent memory upstream of conventional cache?
This was already a difficult trade-off to judge when external memory was assumed to be only DRAM and being fed from HDD storage buckets. The new latency buckets of SSD storage and bigger tiered
semiconductor main memory change the latency of the data bucket chain and the
ability to perform in-situ memory processing may change CPU architecture too.
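To make that "latency bucket chain" idea concrete, here is a toy model of my own (every tier name, latency figure, and hit ratio below is an invented illustration, not a number from this article): each tier serves some fraction of the accesses that reach it, and the weighted sum gives the effective latency of the whole chain.

```python
# Illustrative sketch only: effective access latency for a tiered "bucket
# chain". All tier latencies and hit ratios are invented example numbers.
TIERS = [
    # (name, latency in seconds, fraction of remaining accesses served here)
    ("on-die cache",        1e-9,  0.90),
    ("DRAM",                1e-7,  0.90),
    ("persistent memory",   1e-6,  0.80),
    ("SSD storage",         1e-4,  0.99),
    ("cloud / cold bucket", 1e-1,  1.00),   # final tier absorbs everything left
]

def effective_latency(tiers):
    """Average latency when each tier serves its hit fraction of what's left."""
    remaining = 1.0   # fraction of accesses not yet served
    total = 0.0       # latency-weighted sum
    for name, latency, hit in tiers:
        served = remaining * hit
        total += served * latency
        remaining -= served
    return total

print(f"effective latency: {effective_latency(TIERS):.2e} s")
```

The point of the sketch is that inserting or re-timing one bucket (say, an SSD tier between DRAM and the cloud) changes the weighted sum of the whole chain - which is the sense in which new tiers "change the latency of the data bucket chain".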
For some applications that might be a good trade: better integration at lower latency with the CPU and the memory system, and merging of CPU and SSD functionality. But this would be a risky experiment for a component vendor who doesn't have a systems level marketing channel to sell the enhanced merged product.
The only clear thing is that whatever made a good SSD in the past will no longer be adequate in the future.
More flexibility will be key.
It's not just the CPU making the SSD work better. The SSD makes the CPU work better too. We've already seen SSD and CPU equivalence ideas and SSD and memory systems equivalence ideas - but the scope for innovative improvement is still massive.
after AFAs - what's the next box?
cloud adapted memory systems
editor - StorageSearch.com
- January 24, 2017
|Last year I had the idea of writing an April 1st blog on the theme of cloud adapted memory systems.
The core idea was to have been a spoof press release
about a rackmount memory system for enterprise users which can connect into
the fabric of their applications which has been optimized to support cloud
services as the next slowest external level of latency.
The product architecture was a multi-tiered memory systems box in which all the
integrated memory resources could be dynamically configured to behave like
RAM or SSD storage or persistent memory - depending on the vintage and
preferences of the user applications software.
The assumption in my spoof article was that as you move up the latency ladder and
move into the slower domains beyond this box - the next level is also likely to
be another memory systems box or the cloud.
From the perspective of
grounded networked user systems (by which I mean user systems which do not
form a native part of the public cloud infrastructure) the cloud (in all its
forms - public, private or micro-tiered and local) has replaced the hard drive
array and tape library as
the slowest and cheapest data storage devices which your data software
might encounter. Everything else is memory.
In this scenario there's
no role for user software which was written around a hard drive access
model. Indeed as long term readers already know the mission of identifying
all such "HDD driven" (prefetch, cache, and pack it all up) embedded
software activities has - for the past 10 years - been a secret
SSD software weapon
used by many leading companies to improve the speed and utilization of
their integrated solid state storage systems.
Although, for economic
reasons, users might still encounter hard drives in the
cloud, or a micro
cloud, or a hybrid
storage appliance, nevertheless from the perspective of planning new
systems for users - the key strategic device for enterprise data performance is
the memory system.
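The "HDD driven (prefetch, cache, and pack it all up)" pattern mentioned above can be contrasted with a memory access model in a short sketch of my own (this is not code from the article - just two common ways of reading the same byte):

```python
# Illustrative contrast: software written around an HDD access model batches
# I/O into big sequential reads and works on a buffered copy, while
# memory-model software just addresses the data in place.
import mmap
import os
import tempfile

# Create a small sample file to stand in for "storage" (4 KiB of known bytes).
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 16)

# HDD-era pattern: prefetch a large block into a RAM buffer, then use the copy.
with open(path, "rb") as f:
    buf = f.read(4096)                # one big sequential read
    hdd_style_value = buf[1000]

# Memory-model pattern: map the data and index the byte you want directly.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        mem_style_value = m[1000]

os.remove(path)
assert hdd_style_value == mem_style_value
```

Both reads return the same byte; the difference is that the first pattern was shaped by seek times and block transfers, which is exactly the embedded behavior that SSD and memory-centric software can strip out.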
raw chip memory... how much as SSD? how much as RAM?
This raises the question:- what proportion of the raw
semiconductor memory capacity ought to be usable as storage (SSD) or
usable as memory (RAM -
as in "random access memory" which operates with the software
like DRAM but which could be implemented by other technologies).
Ratios of one thing to another have often been useful indicators of changing
expectations in the storage market - because they are simple to grasp - even
when the associated technologies are not.
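As a hypothetical sketch of that adjustable ratio (all names and numbers here are my own invention, not a real product interface), a pool of raw semiconductor memory could expose a single "personality" fraction which splits its capacity between an SSD role and a RAM role:

```python
# Hypothetical sketch: a raw memory pool whose capacity is partitioned
# between SSD (storage) and RAM personalities by one adjustable fraction.
class MemoryPool:
    def __init__(self, raw_capacity_gb):
        self.raw = raw_capacity_gb   # total raw semiconductor memory
        self.as_ssd_gb = 0.0         # capacity presented as storage
        self.as_ram_gb = 0.0         # capacity presented as memory

    def set_ratio(self, ssd_fraction):
        """Repartition the pool: ssd_fraction as storage, the rest as RAM."""
        if not 0.0 <= ssd_fraction <= 1.0:
            raise ValueError("fraction must be between 0 and 1")
        self.as_ssd_gb = self.raw * ssd_fraction
        self.as_ram_gb = self.raw - self.as_ssd_gb

pool = MemoryPool(1024)
pool.set_ratio(0.75)      # a legacy-heavy mix: mostly emulated SSD storage
print(pool.as_ssd_gb, pool.as_ram_gb)   # 768.0 256.0
```

The slider is trivial as arithmetic; the hard part, as the next paragraph notes, is the datapaths and interfaces which would let installed memory actually change personality this easily.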
Despite the attachment
constraints of legacy interface types (same chip datapath, DDR-X, PCIe, SAS,
IB, GbE, photonic etc.) I anticipate that emulating SSD arrays and / or big
RAM (these two choices determine the "personality" of the installed
memory resources in a way we can understand today) could one day (with
appropriate datapaths) be as easy to adjust as the ratios of
flash memory to
hard drives which we
saw being promised in a clever "try before you buy" customer
experience and business development tool in 2014 -
the flash juice
strength slider mix - from Tegile Systems - which
they used to woo impecuniously minded hybrid array inclined users closer
towards the benefits of more expensive to buy all flash arrays.
The more I thought about it the more I realized that as an April 1st type of article this cloud adapted memory systems blog just wouldn't work. It was already too close to the kind of products we're seeing in the market.
But as a thought provoking feature it got me thinking about some related issues. See
if any of these strike a chord with you.
expectations for memory
In the past we've always expected the data
capacity of memory systems (mainly
DRAM) to be much smaller
than the capacity of all the other attached storage in the same data
processing environment. The rationale for this being the economies (dare I say
cost) of access
time, data density and electrical power - which were traditionally implemented
by many different types of storage media (solid state, magnetic and
optical) each having
their own unique characteristics.
In a modern data system - even one
which is entirely solid state - the arguments for tiered products are the
same as they have
always been because "faster" usually means "runs hotter".
But this new world of "memory systems everywhere" opens the possibility that random read access times (across a significant range of applications data) are similar even if the random write time (including play it again Sam) aspects of that data cycle remain variable.
Could enterprise systems be more efficient (and run faster and at lower cost) if all the software was rewritten to assume that memory was large (and could be persistent) whereas storage (initially supported to emulate legacy applications and grow revenue for such systems) was small?
For a longer discussion of such issues see -
where are we
heading with memory intensive systems?
frames of relativity -
where is the cloud?
Earlier in this blog when describing the
relative access times of the memory systems box compared to data in the cloud
I was assuming that the frame of reference was from the perspective of the
user's system (which is located outside the cloud). That's why I said the cloud
would replace the hard drive as the slowest virtual peripheral. Of course if
you're thinking about systems architecture from the angle of designing
infrastructure components in the cloud - then that "slowness" isn't
generally true. And you will still be designing some boxes which support
physical hard drives (until a cheaper option comes along or until you can
monetize the seldom accessed data in a better way).
in acceleration - worth the wait
As with the SSD market in the
past so too with the memory systems market there will be bigger and faster
adoption of new technologies when there is more software speaking the same
language. Having products which interoperate with legacy software is business
plan "A" and will fund some interesting business stories. But
getting to the next stage of the memory systems market where the installed
memory base (of randomly accessible memory) begins to creep up to the size of
the installed capacity of SSD storage - will require a lot of new software which
can leverage the memory assets with fewer backward glances.
You could say we've already got software solutions which can repurpose flash into useful roles as big memory - so why do we need any new hardware at all? And we've got the new memories coming anyway. Some of them will stick in easy to identify places. Others have yet to find sustainable new roles. What are those roles?
I'll be dealing with those issues in my next blog here - which I've tentatively
called - the survivor's guide to all semiconductor memory and the diminishing
role of form factors.
See you then.
PS - Although I didn't publish my spoof article about "cloud adaptive memory" (which was to have been its original title) back in early 2016 - I did spend a lot of time thinking about the consequences of those ideas. And they clearly influenced
the choice of the serious articles and news coverage which I did apply myself to
as you may have seen.
To steer your way to future markets sometimes
you have to consider ideas which at first seem like a ludicrous stretch from
reality and follow them through for a while as if they were real before you
can recognize that the truths which emerge from analyzing such notions can be put to practical use.
Previous examples of spoof articles which were useful
forerunners of reality discussed issues like why SSDs would replace HDDs (as
cheap bulk storage) even if HDDs were free (in the article
towards the petabyte
SSD), and the complexities of signal processing in flash level
discrimination (and data integrity) - which we now call
(here's a link to the 2008 spoof article). For the philosophy behind this approach see my article -
Analysis in SSD Market Forecasting .
PPS - In 2014 I
discussed the idea of unified storage (SAN and NAS) being the old fashioned "gentlemen's
club" way in an interview with Frankie
Roohparvar (who at that time was CEO of
Skyera and is now
Chief Strategy Officer at Xitore).
I mischievously sounded him out on my expectation of being able
to add in the capability of emulating big persistent memory into the
unified solid state data box feature set. For that story see -
Skyera's new skyHawk
FS in archived
storage architecture guide
playing the enterprise
SSD box riddle game
|Many of the important and
sometimes mysterious behavioral aspects of SSDs which predetermine their
application limitations and usable market roles can only be understood when you
look at how well the designer has dealt with managing the symmetries and
asymmetries which are implicit in the underlying technologies which are
contained within the SSD.|
|how fast can your SSD run backwards?|
|Can you trust market reports and the handed down wisdom from analysts, bloggers and so-called experts?|
|heck no!|
|If you spend a lot of your
time analyzing the performance characteristics and limitations of flash SSDs -
this article will help you to easily predict the characteristics of any new SSDs
you encounter - by leveraging the knowledge you already have.|
|flash SSD performance
characteristics and limitations|
|The memory chip count
ceiling around which the SSD controller IP is optimized - predetermines the
efficiency of achieving system-wide goals like cost, performance and reliability.|
|size matters in
SSD controller architecture|
| Are you whiteboarding
alternative server based SSD / SCM / SDS architectures? It's messy keeping
track of those different options isn't it? Take a look at an easy to remember
hex based shorthand which can aptly describe any SSD accelerated server blade.|
|what's in a number?|