PCIe SSDs versus memory channel SSDs
are these really different markets? (slight return)
by
Zsolt Kerekes,
editor - May 2, 2014
18 months ago I
published a blog called -
Boundaries
Analysis in SSD Market Forecasting - which is a technique I've been using
for many years to help me judge how new and future SSD technologies can settle
into new market uses - and whether they might create new sustainable markets
which fit in alongside other competitive solutions.
And that
co-existence type of eureka moment - tells you a different kind of story
compared to the simpler "this will replace that - because it's cheaper and
better" kind of wishful thinking which accompanies many new technology
announcements.
When I wrote the original boundaries analysis article -
the examples I used in it were already quite old (to me) - and I was just using
them to demonstrate how you can analyze the business viability of new SSD market
segments many years in the future without having to know in detail how any of
the specific technology problems will be solved.
(Assuming that they
could be solved if it was deemed to be worthwhile - is enough in this
context.)
Anyway I returned to that subject in a conversation I had
this week about memory channel SSDs versus PCIe SSDs.
In some respects
I was returning to some of the core judgments which I published a year ago in
my blog
Memory
Channel Storage SSDs - will the new concept fly? - should you book a seat yet?
But
that was written before there were any real products based on this technology -
so some of my comments were based on guesses.
Since then - we've seen
the technology being shipped and adopted by
IBM (who worked with MCS's creator Diablo for years - and helped shape the
architecture and wrote a book about its uses).
And there's an excellent
FAQs page which tells
us almost everything we might wish to know about the technical boundaries of
this product.
What about the marketing boundaries?
Here are
some questions from that angle.
- if memory channel SSDs cost nothing - would there still be a market for
PCIe SSDs?
and conversely
- if PCIe SSDs cost nothing - would there still be a market for MCS?
Let's
discuss these hypothetical questions in order.
If memory channel SSDs
cost nothing - would there still be a market for PCIe SSDs?
Yes - for 2
reasons.
- PCIe SSDs fit more readily into a PCIe fabric connection - from card to
card and also between racks. This can be used to provide high availability or
scalability or both.
MCS is currently limited to short line lengths.
That's part of what endows it with better latency - but is a restriction too.
So
there would still be some uses for PCIe SSDs if MCS was free (or cheaper).
- And another reason is that PCIe SSDs can be designed to be hot pluggable -
whereas MCS (like DRAM DIMMs) doesn't support hot pluggability.
So if
you have a storage rack which looks like an array of
2.5" PCIe SSDs
- you can incrementally add new modules or replace faulty modules while the
system remains powered.
Those incremental RAS features - at the drive
level - are critical for some markets. So that's another reason why PCIe SSDs
would still have a big market even if memory channel SSDs cost the same or less
(for equivalent capacity). And how about my other question?
If
PCIe SSDs cost nothing - would there still be a market for MCS?
Yes.
The
main reason is latency.
In currently marketed products - MCS typically offers 5x to 10x lower latency
(which can be relied on) than typical PCIe
SSDs.
So if ISVs know they can rely on a different class of
characteristics - and if they think the market opportunity is big enough to make
it worth their while - they can differentiate their product offerings by
offering new or differentiated apps functionality in MCS configured servers
- which the ISVs can't support in PCIe SSD based servers.
In the same
way that people talk about the economic advantages of
tiering between
different speeds of storage box (in
classical hard drive
SAN based architectures) - so too, when you move inside the apps server box, you
can get efficiency advantages by being able to tier between different speeds
and classes of SSDs.
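To make that in-server tiering idea concrete - here's a minimal sketch in Python of a placement policy which routes hot data to the fastest tier which still has room. The tier names, latency figures and capacities below are invented assumptions for illustration only - they aren't vendor specifications.

```python
# A toy model of tiering between SSD speed classes inside one server.
# All names, latencies and capacities are invented for illustration.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_us: float   # assumed typical read latency (microseconds)
    capacity_gb: int
    used_gb: int = 0

    def has_room(self, size_gb: int) -> bool:
        return self.used_gb + size_gb <= self.capacity_gb

# Ordered fastest first: MCS in DIMM slots, then PCIe SSD, then SATA SSD.
tiers = [
    Tier("MCS DIMM SSD", latency_us=5, capacity_gb=400),
    Tier("PCIe SSD", latency_us=50, capacity_gb=1600),
    Tier("SATA SSD", latency_us=100, capacity_gb=4000),
]

def place(size_gb: int, hot: bool) -> Tier:
    """Hot data goes to the fastest tier with room - cold data the reverse."""
    for tier in (tiers if hot else list(reversed(tiers))):
        if tier.has_room(size_gb):
            tier.used_gb += size_gb
            return tier
    raise RuntimeError("all tiers full - add another device (or server)")

print(place(100, hot=True).name)    # MCS DIMM SSD
print(place(500, hot=True).name)    # PCIe SSD - the MCS tier is now too full
print(place(2000, hot=False).name)  # SATA SSD
```

The point of the toy model is simply that having an extra, faster class inside the box gives the placement policy somewhere better to put the hottest data - without a trip across any external fabric.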
It was after that kind of analysis that I
concluded that memory channel SSDs really are a different type of product which
will find their own places in the market - and shouldn't just be regarded as an
alternative to some faster types of PCIe SSD.
And that's why they have
their own distinct column in my architecture matrix proposal for
SSDserver rank.
(Which was the topic of last month's home page blog).
Here's a sanity
check.
There's still a lot of software yet to be written by ISVs
before you'll see some of the differences between PCIe SSDs and MCS appear in
sharper focus. But the business cases are real enough to make it happen.
Update
- added a new boundary condition question
A few weeks after
writing the above I realized I had missed out an important technical
boundary condition which gives even better insights into the MCS vs PCIe SSDs
question.
So here it is...
Whether you're a systems architect
- or a sales person (who has both types of products in their catalog) - is there
another hard technical boundary where one of these looks better than
the other - and which I haven't mentioned before?
Remember - when it
comes to boundary analysis - you are allowed to imagine extreme assumptions
which don't exist in reality - but if they did - would help to clarify your thinking
by cutting out distracting conceptual baggage.
What if (and this
is a big IF) both SSD types had the same flash controller management scheme
and the same flash memory?
I'm going to make this easier by
starting with the assumption that you can buy MCS SSDs or PCIe SSDs which have
identical offload controller management for flash and which use the same
generation and type of flash memory. (This isn't a market reality at the moment
- which is why some of the benchmark comparison papers you see are flawed
and reach conclusions which aren't supported by the data.)
In that case
- where would you draw the line?
I think the boundary condition comes
when you have an application which needs low latency and high IOPS - but in
which typically your servers run out of PCIe slots - so in effect you use more
servers so that you can deploy more PCIe SSDs.
So the boundary
condition is for fully packed SSD servers.
And the boundary
condition centers around when you have more free DIMM slots than free PCIe slots
in the server.
It's nearly as simple as that.
Because if you
can add more MCS SSDs in DIMM slots in a server than you can add PCIe SSDs
in PCIe slots - then even if you assume the latency difference between the
interfaces is minimal - you get a considerable latency advantage by staying in
the same server and adding another MCS device.
And in big systems
which use many similar servers - each time you can use another free DIMM slot in
a server - instead of expanding out to another server (or PCIe expansion rack) -
then you save the cost of another box as well.
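As a back of the envelope illustration of that box count saving - here's a minimal sketch of the arithmetic. All the slot and device counts are invented assumptions, not measured figures from any real server.

```python
import math

# Toy box-count model - the slot and device counts are invented examples.
FREE_PCIE_SLOTS = 4    # PCIe slots left per server after NICs and HBAs
FREE_DIMM_SLOTS = 8    # DIMM sockets per server not needed for DRAM

fast_ssds_needed = 40  # devices the application's IOPS budget calls for

# PCIe-only model: when the PCIe slots are full - you buy another server.
pcie_only_servers = math.ceil(fast_ssds_needed / FREE_PCIE_SLOTS)

# Mixed model: fill the PCIe slots - then keep going in the DIMM slots.
mixed_servers = math.ceil(fast_ssds_needed / (FREE_PCIE_SLOTS + FREE_DIMM_SLOTS))

print(pcie_only_servers)  # 10 boxes with PCIe SSDs alone
print(mixed_servers)      # 4 boxes when MCS devices use the DIMM slots too
```

With these made-up numbers the mixed model needs less than half the boxes - and each avoided box saves its power, rack space and support costs too.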
That's a real system
capability difference.
And you don't get a taste of this difference by
analyzing benchmarks which simply show small numbers of PCIe SSDs compared to
MCS SSDs. It's when the PCIe SSD expansion model breaks down - because you've
run out of slots - that you get the biggest benefits of MCS.
And in
intermediate cases (when you have free slots of both types) - sometimes one or
sometimes the other SSD form factor will be the best.
But another
design consideration in the intermediate cases is scalability. Because you can
scale to higher performance in a single server with MCS than you can with PCIe
SSDs (at the same or similar latency) without breaking out of the box.
Did I really say - it's as
simple as that?
Of course - it's not that simple. Even in this
simplified and imagined case of identical flash controllers in each type of
product. Because the amount of flash capacity in the server - and not just
the raw performance - also comes into the mix. But if we start looking at
micro-tiering inside servers - we'll get off the original track.
So my
new technical boundary condition can be summarized as this.
If you have
the type of project in which you often wish you could add many more fast low
latency PCIe SSDs into each of your servers but are limited by free slots (and
if you also prefer to minimize the total SSD server box count) then
another option to look at is the availability of free DDR3 DRAM DIMM
slots and the use of MCS SSDs instead.
See also: memory channel SSD news
[product images: PCIe SSDs vs memory channel SSDs]
with an update re SanDisk's agreement to buy
Fusion-io
The article above - PCIe SSDs versus
memory channel SSDs: are these really different markets? (revisited) - was the
home page blog on StorageSearch.com
throughout most of May
2014.
As you know from
storage history
and as you've heard me say in
other SSD
articles - the proposition that one new SSD thing can replace one old SSD
thing is rarely as simple as the advocates of the new thing say.
Getting closer to home - when PCIe SSDs appeared in the
enterprise market in
2007 - the
main company advocating that technology (Fusion-io) initially
asserted that PCIe SSDs
would replace FC SAN rackmount
SSDs.
That message was really a way of getting market attention -
from the kind of people who were using the main SSD acceleration product types
at that time.
As we know 7 years later - SAN SSDs are still here.
But
PCIe SSDs did indeed become a new established part of the
enterprise SSD landscape.
But it's more complicated than that. With the right SSD software - you can
emulate a legacy SAN on PCIe SSD accelerated servers. And you can build a SAN
SSD - using PCIe SSDs inside.
In this article - you'll see why
memory channel SSDs won't simply replace PCIe SSDs.
But it's also
intended to help you figure out - at this early stage of the MCS market - where
you might choose to use memory channel SSDs - if you are the type of person who
has been using the nearest equivalent product type - which is PCIe SSDs.
related
links
update re SanDisk's agreement to buy Fusion-io
In
June 2014 -
SanDisk announced it had agreed to acquire Fusion-io.
I discussed -
What will
SanDisk really get from Fusion-io? - in an article on the
main SSD news page here on
StorageSearch.com.
That can
be summarized in this way - "the ability to get more enterprise petabytes out
of the same raw flash chips - by shipping them through better architecture -
is a more significant business factor in the flash memory market today than the
ability to do another cell geometry shrink - or to add a few more toppings on
the 3D pizza."
But in the context of the page you're reading now -
PCIe vs MCS - I think it's appropriate for me to say more about the memory
channel SSD angle.
Firstly - if you look at the product images at the
top of this column - let's consider my choice of graphics.
The
PCIe SSD image.
It was my deliberate choice to use a picture of a PCIe
SSD from Fusion-io to represent the PCIe SSD side of the argument - because
although there are over 50 other enterprise PCIe SSD oems - it was Fusion-io
which created and established this form factor. And at the time of writing this
- Fusion-io is still the leading vendor in the market.
Another
assumption of mine (pre merger) was that - if the MCS market develops into a
sizable market - then due to its performance and latency scalability - it would
be high end PCIe SSD vendors - which would be most affected. (Although not for
several years due to the different maturity levels and ecosystems of the
respective software offerings.)
The
memory channel SSD image.
I deliberately chose an image of SanDisk's
ULLtraDIMM to represent this side of the market. Because that's the first
implementation of this type of low latency TeraDIMM class of product.
However,
it's important to bear in mind that it's not SanDisk but
Diablo - which pioneered
this concept - and it's Diablo which supplies the RAM side of the hardware
interface and the underlying software stacks.
So unless SanDisk
acquires Diablo too - it's likely that we will see other non exclusive
implementations of MCS which use different controllers and different
non volatile memories
coming from other SSD companies - and not just SanDisk.
And in any case
the memory channel SSD concept itself is bigger than Diablo too.
Other companies like Micron,
Viking, Innodisk etc - who
have DDR3 compatible flash SSD form factors (but which don't currently use the
DDR3 as the data interface - but simply use it to supply power) might decide to
enter the market with competing products too. If we assume for now that
SanDisk does go ahead with acquiring Fusion-io - but doesn't acquire Diablo -
then some natural questions arise...
- could SanDisk design a CPU-less DDR3 SSD using the
same architecture
concepts as a PCIe SSD from Fusion-io?
And if they did - would that
be worthwhile?
I think technically it could be done.
And in
theory this would reduce the power consumption of the DIMMs compared to the
current (hot) design. But it would take collaboration to do this - and the most
useful way would be to bypass the higher levels of Diablo's software stack.
(Which would fragment Diablo's market and make their other future MCS products
less attractive.)
The result? You'd effectively get a Fusion-io
software compatible SSD in a bunch of DDR3 slots.
Would it be
worthwhile?
It depends on who makes the server. Maybe...
- could some of Fusion-io's array level virtualization software be used in a
useful way in a server which also has the current design of ULLtraDIMMs in it?
This
is a scenario I thought about in my very first discussion with
SMART about the
MCS concept.
We know from other studies that when you use Fusion-io
PCIe SSDs in servers which also have large
memory resident databases -
then the overall performance gets better by having the incremental memory
resident assets.
But I'm not confident enough to state whether you
would get a significant performance benefit by having PCIe SSDs implementing
part of the storage space - while also having 1st generation ULLtraDIMMs
implementing a large virtual memory space in the same server.
But
under new ownership - I guess it will be easy enough to try that experiment and
to create a list of the types of applications and setups in which it could
amplify the benefits of using each of the 2 technologies.