leading the way to the new storage frontier
SSD news since 1998
Storage Class Memory - one idea many different approaches; flash endurance - better scope than previously believed; NVMf, NVMe and NVDIMM variations...
what were the big SSD ideas which emerged in 2016?

The memory chip count ceiling around which the SSD controller IP is optimized - predetermines the efficiency of achieving system-wide goals like cost, performance and reliability.
Size matters in SSD controller architecture

NVDIMM tiered memory evolution
Editor:- February 7, 2017 - "Cache based NVDIMM architectures will be the predominant interface overtaking NVMe within the next 5-10 years in the race for performance" - is the concluding message of a recent presentation by Doug Fink, Director of Product Marketing - Xitore - Next Generation Persistent Memory Evolution - beyond the NVDIMM-N (pdf)


Among other things Doug's slides echo a theme discussed before - which is that new memory media (PCM, ReRAM, 3D XPoint) will have to compete in price and performance terms with flash based alternatives and this will slow down the adoption of these alternative nvms.

Editor's comments:- Xitore (like others in the SCM DIMM wars market) is working on NVDIMM form factor based solutions and in this and an earlier paper they provide a useful summary of the classifications in this module category.

However, the wider market picture is that the retiring and retiering DRAM story cuts across form factors with many other permutations of feasible implementation possible.

So - whereas the NVDIMM is a seductively convenient form factor for systems architects to think around - the competitive market for big memory will use anything from SSDs on a chip up to (and including) populations of entire fast rackmount SSD boxes as part of such tiered solutions - if the economics, scale, interface fabric and software make the cost, performance and time to market sums emerge in a viable zone of business risk and doability.



"At the technology level, the systems we are building through continued evolution are not advancing fast enough to keep up with new workloads and use cases. The reality is that the machines we have today were architected 5 years ago, and ML/DL/AI uses in business are just coming to light, so the industry missed a need."
From the blog - Envisioning Memory Centric Architecture by Robert Hormuth, VP/Fellow and Server CTO - Dell EMC (January 26, 2017)

Why would any sane SSD company in recent years change its business plan from industrial flash controllers to HPC flash arrays?
a winter's tale of SSD market influences

A conversation I had with Kevin Wagner at Diablo Technologies (in February 2017) began with talking about the benchmarks they have been sharing related to their Memory1 (128GB flash as RAM DIMMs) when running large scale analytics software.

But it finished somewhere entirely unexpected.
controllernomics and user risk reward ratios with "flash as RAM" in big memory

Some SSD vendors do get to a threshold revenue level - despite deficiencies in their online presence - because their sales people work hard and their VCs are rich.

But most SSD companies will fail to get to the next level of sustainable business growth - which is where the customer finds you, and not the other way around - unless they invest more in their online SSD communications assets.
what do I need to know about any new rackmount SSD?

A reader asked me to explain why high volume semiconductor memory makers get into the situation of oversupply and lossy pricing.
a guide to semiconductor memory boom-bust cycles

after AFAs - what's the next box?

cloud adapted memory systems

by Zsolt Kerekes, editor - January 24, 2017
Last year I had the idea of writing an April 1st blog on the theme of cloud adapted memory systems.

The core idea was to have been a spoof press release about a rackmount memory system for enterprise users which connects into the fabric of their applications and which is optimized to treat cloud services as the next slowest external level of latency.

The product architecture was a multi-tiered memory systems box in which all the integrated memory resources could be dynamically configured to behave like RAM or SSD storage or persistent memory - depending on the vintage and preferences of the user applications software.

An underlying assumption in my spoof article was that as you move up the latency ladder and move into the slower domains beyond this box - the next level is also likely to be another memory systems box or the cloud.

From the perspective of grounded networked user systems (by which I mean user systems which do not form a native part of the public cloud infrastructure) the cloud (in all its forms - public, private or micro-tiered and local) has replaced the hard drive array and tape library as being the slowest and cheapest data storage devices which your data software might encounter. Everything else is memory.
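The latency ladder described above - with the cloud replacing the hard drive array as the slowest "virtual peripheral", and everything faster being memory - can be sketched in a few lines. The tier names and latency figures below are hypothetical placeholders chosen to illustrate the idea, not measured values.

```python
# A toy latency ladder, from the perspective of a grounded networked user
# system. Tier names and nanosecond figures are illustrative assumptions.
TIERS = [
    ("on-chip SRAM cache",        1e0),
    ("local DRAM",                1e2),
    ("NVDIMM persistent memory",  1e3),
    ("rackmount memory box",      1e4),
    ("networked flash SSD array", 1e5),
    ("cloud storage",             1e7),   # the slowest "virtual peripheral"
]

def slowest_tier_within(budget_ns):
    """Return the slowest (and by implication cheapest) tier whose
    latency still fits inside the given access time budget."""
    candidates = [(name, lat) for name, lat in TIERS if lat <= budget_ns]
    return max(candidates, key=lambda t: t[1])[0] if candidates else None

print(slowest_tier_within(5e5))   # a 500 microsecond budget -> flash SSD array
print(slowest_tier_within(1e8))   # a generous budget -> cloud storage
```

The point of the sketch: once every tier above the cloud is some flavor of memory, data placement becomes a question of picking the cheapest tier that still meets the latency budget - not a question of storage versus memory.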

In this scenario there's no role for user software which was written around a hard drive access model. Indeed, as long term readers already know, the mission of identifying and removing all such "HDD driven" (prefetch, cache, and pack it all up) embedded software activities has - for the past 10 years - been a secret SSD software weapon used by many leading companies to improve the speed and utilization of their integrated solid state storage systems.

Although, for economic reasons, users might still encounter hard drives in the cloud, or a micro cloud, or a hybrid storage appliance, nevertheless from the perspective of planning new systems for users - the key strategic device for enterprise data performance is the memory system.

raw chip memory... how much as SSD? how much as memory?

This raises the question:- what proportion of the raw semiconductor memory capacity ought to be usable as storage (SSD) or usable as memory (RAM - as in "random access memory" which operates with the software like DRAM but which could be implemented by other technologies).

Ratios of one thing to another have often been useful indicators of changing expectations in the storage market - because they are simple to grasp - even when the associated technologies are not.

Despite the attachment constraints of legacy interface types (same chip datapath, DDR-x, PCIe, SAS, IB, GbE, photonic etc) I anticipate that emulating SSD arrays and / or big RAM (these two choices determine the "personality" of the installed memory resources in a way we can understand today) could one day - with appropriate datapaths - be as easy to adjust as the ratio of flash memory to hard drives. We saw that promised in a clever "try before you buy" customer experience and business development tool in 2014 - the flash juice strength slider mix from Tegile Systems - which they used to woo impecuniously minded, hybrid array inclined users closer towards the benefits of more expensive to buy all flash arrays.
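A minimal sketch of what such a "personality slider" might look like - assuming, hypothetically, that a box's raw memory capacity could be repartitioned at will between an SSD personality and a RAM personality, in the spirit of Tegile's flash/HDD mix slider:

```python
# Toy model of a configurable memory systems box. The class and its
# numbers are illustrative assumptions, not a real product interface.
class MemoryBox:
    def __init__(self, raw_capacity_tb):
        self.raw = raw_capacity_tb
        self.ram_fraction = 0.0          # start fully presented as SSD storage

    def set_slider(self, ram_fraction):
        """Move the personality slider: 0.0 = all SSD, 1.0 = all big RAM."""
        if not 0.0 <= ram_fraction <= 1.0:
            raise ValueError("slider must be between 0 and 1")
        self.ram_fraction = ram_fraction

    @property
    def as_ram_tb(self):
        return self.raw * self.ram_fraction

    @property
    def as_ssd_tb(self):
        return self.raw * (1.0 - self.ram_fraction)

box = MemoryBox(100)       # 100 TB of raw semiconductor memory
box.set_slider(0.25)       # present a quarter of it as big RAM
print(box.as_ram_tb, box.as_ssd_tb)   # 25.0 TB as RAM, 75.0 TB as SSD
```

The interesting design question the sketch dodges is the one the article raises: whether such ratios would be set once at installation, or adjusted dynamically by the applications themselves.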

The more I thought about it the more I realized that, as an April 1st type of article, this cloud adapted memory systems blog just wouldn't work. It was too close to the kind of products we're already seeing in the market.

But as a thought provoking feature it got me thinking about some related issues. See if any of these strike a chord with you.

expectations for memory storage systems

In the past we've always expected the data capacity of memory systems (mainly DRAM) to be much smaller than the capacity of all the other attached storage in the same data processing environment. The rationale for this was the economics (dare I say cost) of access time, data density and electrical power - traditionally delivered by many different types of storage media (solid state, magnetic and optical), each having its own unique characteristics.

In a modern data system - even one which is entirely solid state - the arguments for tiered products are the same as they have always been because "faster" usually means "runs hotter". But this new world of "memory systems everywhere" opens the possibility that random read access times (across a significant range of applications data) are similar even if the random write times (including verify and play it again Sam) for that data remain variable.
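The read/write asymmetry above is easy to put numbers on. The figures below are hypothetical, chosen only to show how a variable write path (with verify and retry - "play it again Sam") stretches average access time as the write share of a workload grows:

```python
# Illustrative arithmetic for read/write asymmetry. All latency figures
# and the retry probability are assumptions, not characterized devices.
def effective_latency_us(read_us, write_us, retry_prob, write_share):
    """Expected access latency for a mixed workload, where each write
    may incur one probabilistic verify-and-retry pass."""
    expected_write = write_us * (1 + retry_prob)
    return (1 - write_share) * read_us + write_share * expected_write

# 10% writes: the asymmetry barely shows in the average
print(effective_latency_us(1.0, 20.0, 0.1, 0.10))   # 3.1 microseconds
# 50% writes: the write path dominates
print(effective_latency_us(1.0, 20.0, 0.1, 0.50))   # 11.5 microseconds
```

Which is one reason why uniform read latency alone doesn't settle whether a memory tier can masquerade as RAM for a given application.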

But would enterprise systems be more efficient (and run faster and at lower cost) if all the software was rewritten to assume that memory was large (and could be persistent) whereas storage (initially supported to emulate legacy applications and grow revenue for such systems) was small?

For a longer discussion of such issues see - where are we heading with memory intensive systems?

frames of relativity - where is the cloud?

Earlier in this blog when describing the relative access times of the memory systems box compared to data in the cloud I was assuming that the frame of reference was from the perspective of the user's system (which is located outside the cloud). That's why I said the cloud would replace the hard drive as the slowest virtual peripheral. Of course if you're thinking about systems architecture from the angle of designing infrastructure components in the cloud - then that "slowness" isn't generally true. And you will still be designing some boxes which support physical hard drives (until a cheaper option comes along or until you can monetize the seldom accessed data in a better way).

software's role in acceleration - worth the wait

As with the SSD market in the past, so too with the memory systems market: there will be bigger and faster adoption of new technologies when there is more software speaking the same language. Having products which interoperate with legacy software is business plan "A" and will fund some interesting business stories. But getting to the next stage of the memory systems market - where the installed base of randomly accessible memory begins to creep up to the size of the installed capacity of SSD storage - will require a lot of new software which can leverage the memory assets with fewer backward glances.

You might say we've already got pioneering SSD software solutions which can repurpose flash into useful roles as big memory. So why do we need any new hardware at all?

We've got the new memories coming anyway.

Some of them will stick in easy to identify places. Others have yet to find sustainable new roles. What are those roles? I'll be dealing with those issues in a future blog here - which I've tentatively called - the survivor's guide to all semiconductor memory and the diminishing role of form factors.

See you then.

PS - Although I didn't publish my spoof article about "cloud adaptive memory" (which was to have been its original title back in early 2016), I did spend a lot of time thinking about the consequences of those ideas. And they clearly influenced the choice of the serious articles and news coverage which I did apply myself to, as you may have seen.

To steer your way to future markets sometimes you have to consider ideas which at first seem like a ludicrous stretch from reality and follow them through for a while as if they were real before you can recognize that the truths which emerge from analyzing such notions can be useful.

Previous examples of spoof articles which were useful forerunners of reality discussed issues like why SSDs would replace HDDs (as cheap bulk storage) even if HDDs were free (in the article towards the petabyte SSD), and the complexities of signal processing in flash level discrimination (and data integrity) - which we now call adaptive DSP (here's a link to the 2008 spoof article).

For the philosophy behind this approach see my article - Boundaries Analysis in SSD Market Forecasting .

PPS - In 2014 I discussed the idea of unified storage (SAN and NAS) being the old fashioned "gentlemen's club" way in an interview with Frankie Roohparvar (who at that time was CEO of Skyera and is now Chief Strategy Officer at Xitore).

I mischievously sounded him out on my expectation of being able to add in the capability of emulating big persistent memory into the new dynasty unified solid state data box feature set. For that story see - Skyera's new skyHawk FS in archived news.

a storage architecture guide

are you ready to rethink RAM?

playing the enterprise SSD box riddle game

hidden and missing segments of opportunities for rackmount flash

why do you need a supported RAM disk emulation in your new "flash as RAM" solution?

showing the way ahead for all SSD and memory systems
1 big market lesson and 4 shining technology companies

Lightning, tachIOn, WarpDrive ... etc
Inanimate Power, Speed and Strength Metaphors in SSD brands

What we've got now is a new SSD market melting pot in which all performance related storage is made from memories and the dividing line between storage and memory is also more fluid than before.
where are we heading with memory intensive systems?

is data remanence in NVDIMMs a new risk factor?
maybe the risk was already there before with DRAM

Some suppliers will quote you higher DWPD even if nothing changes in the BOM.
what's the state of DWPD?

DRAM's reputation for speed is like the old story about the 15K hard drives (more of the same is not always quickest nor best)
latency loving reasons for fading out DRAM

Is more always better?
The ups and downs of capacitor hold up in 2.5" flash SSDs

Reliability is more than just MTBF... and unlike Quality - it's not free.
the SSD reliability papers - classic collection

In SSD land - rules are made to be broken.
7 tips to survive and thrive in enterprise SSD

There's a genuine characterization problem for the SCM industry which is:- what are the most useful metrics to judge tiered memory systems by?
is it realistic to talk about memory IOPS?

Many of the important and sometimes mysterious behavioral aspects of SSDs which predetermine their application limitations and usable market roles can only be understood when you look at how well the designer has dealt with managing the symmetries and asymmetries which are implicit in the underlying technologies which are contained within the SSD.
how fast can your SSD run backwards?

The enterprise SSD story...

why's the plot so complicated?

and was there ever a missed opportunity in the past to simplify it?
the elusive golden age of enterprise SSDs

How committed (really) are these companies
to the military SSD business?
a not so simple list of military SSD companies

Can you trust market reports and the handed down wisdom from analysts, bloggers and so-called industry experts?
heck no! - here's why

Why do SSD revenue forecasts by enterprise vendors so often fail to anticipate crashes in demand from their existing customers?
meet Ken and the enterprise SSD software event horizon

the past (and future) of HDD vs SSD sophistry
How will the hard drive market fare...
in a solid state storage world?

Compared to EMC...

ours is better
can you take these AFA companies seriously?

Now we're seeing new trends in pricing flash arrays which don't even pretend that you can analyze and predict the benefits using technical models.
Exiting the Astrological Age of Enterprise SSD Pricing

90% of the enterprise SSD companies which you know have no good reasons to survive.
market consolidation - why? how? when?

Why buy SSDs?
6 user value propositions for buying SSDs

If you spend a lot of your time analyzing the performance characteristics and limitations of flash SSDs - this article will help you to easily predict the characteristics of any new SSDs you encounter - by leveraging the knowledge you already have.
flash SSD performance characteristics and limitations

Are you whiteboarding alternative server based SSD / SCM / SDS architectures? It's messy keeping track of those different options, isn't it? Take a look at an easy to remember hex based shorthand which can aptly describe any SSD accelerated server blade.
what's in a number? - SSDserver rank