
CPUs in SSDs...

whatever made a good SSD (and controller) in the past will no longer be adequate in the future
(Editor:- January 26, 2017 - This is part of what I said to a reader this week about optimizing CPUs for use in SSDs.)

The characteristics of CPUs used within COTS SSDs vary widely.

One direction of influence comes from the anticipated market.

And this is the aspect which is easiest to understand.

That results in different CPU preferences for enterprise SSDs designed around high solo performance (such as Mangstor's) compared to 2.5" SSDs deployed in arrays.

And power consumption can be a key factor in industrial SSDs.

Hyperstone's controllers, which are optimized for low power consumption, are very different in their choice of CPU - and necessarily in their flash algorithms too - because they can't depend on the type of RAM cache which makes enterprise endurance management code easier to design.

But unfortunately I think that analyzing what happened in the past in the SSD controller / SSD processor market isn't a reliable predictor for future controllers.

An influence which has been trickling down from lessons learned in the array market is the powerful system level benefit of exercising intelligent control from a viewpoint which lies outside the ken of the SSD controller located in the flash storage form factor.

And partly due to that applications awareness - and likely to tear apart many controller business plans - are the contradictory requirements of custom versus standard products (described in more detail in my 2015 SSD ideas blog - SSD directional push-me pull-yous).

And - as always with the SSD market - different companies can take very different approaches to picking the permutations which deliver their ideal SSD.

Added to that, I think a new emerging factor in memory systems will be whether the CPUs are able to deliver applications level benefits by integrating nvm or persistent memory within the same die as the processor itself.

That's partly a process challenge but also a massive architectural and business gamble.

In the past it has been obvious that some SSDs incorporated nvm registers or other small persistent storage memory (apart from the external flash) to deliver power fail data integrity features which didn't need capacitor holdup.

What is less clear is the direction of travel with tiered memory on the CPU.

When it comes to chip space budget - is it worth trading cores to enable bigger (slower) persistent memory upstream of conventional cache?

This was already complicated when external memory was assumed to be only DRAM, fed from HDD storage buckets. The new latency buckets of SSD storage and bigger tiered semiconductor main memory change the latency of the data bucket chain, and the ability to perform in-situ memory processing may change CPU architecture too.
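To make the "latency bucket chain" concrete, here's a minimal Python sketch which estimates the mean access time of a tiered hierarchy from per-tier hit rates. The tier names and latency figures are illustrative round numbers I've assumed for the sketch - not measured data for any real product:

```python
# Illustrative latency tiers (seconds) - assumed round numbers,
# not measured figures for any particular product.
TIERS = [
    ("on-die cache",        1e-9),
    ("DRAM",                1e-7),
    ("persistent memory",   1e-6),
    ("PCIe SSD",            1e-4),
    ("cloud / HDD bucket",  1e-2),
]

def average_latency(hit_rates):
    """Mean access time when each tier (except the last) satisfies a
    given fraction of the requests which reach it; the slowest tier
    at the end of the bucket chain absorbs everything left over."""
    remaining, total = 1.0, 0.0
    for (name, latency), hit in zip(TIERS[:-1], hit_rates):
        served = remaining * hit
        total += served * latency
        remaining -= served
    total += remaining * TIERS[-1][1]  # leftovers hit the slowest bucket
    return total

# Raising the persistent memory hit rate shrinks how much traffic
# falls through to the slow buckets at the end of the chain.
print(average_latency([0.9, 0.9, 0.5, 0.9]))
print(average_latency([0.9, 0.9, 0.9, 0.9]))
```

The point of the toy model is that shifting even a small fraction of traffic out of the slowest bucket dominates the average - which is why changing what sits at the end of the chain matters more than speeding up the fast tiers.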

For some applications that might be a good trade. Better integration at lower latency with the CPU and the memory system. And merging of CPU and SSD functionality. But this would be a risky experiment for a component vendor who doesn't have a systems level marketing channel to sell the enhanced merged feature set.

The only clear thing is that whatever made a good SSD in the past will no longer be adequate in the future.

More flexibility will be key.

It's not just the CPU making the SSD work better. The SSD makes the CPU work better too.

SSD-CPU equivalence and SSD and memory systems equivalence aren't new ideas - but the scope for innovative improvement is still massive.



after AFAs - what's the next box?

cloud adapted memory systems

by Zsolt Kerekes, editor - January 24, 2017
Last year I had the idea of writing an April 1st blog on the theme of cloud adapted memory systems.

The core idea was to have been a spoof press release about a rackmount memory system for enterprise users which connects into the fabric of their applications and which has been optimized to support cloud services as the next slowest external level of latency.

The product architecture was a multi-tiered memory systems box in which all the integrated memory resources could be dynamically configured to behave like RAM or SSD storage or persistent memory - depending on the vintage and preferences of the user applications software.

An underlying assumption in my spoof article was that as you move up the latency ladder and move into the slower domains beyond this box - the next level is also likely to be another memory systems box or the cloud.

From the perspective of grounded networked user systems (by which I mean user systems which do not form a native part of the public cloud infrastructure) the cloud (in all its forms - public, private or micro-tiered and local) has replaced the hard drive array and tape library as being the slowest and cheapest data storage devices which your data software might encounter. Everything else is memory.

In this scenario there's no role for user software which was written around a hard drive access model. Indeed as long term readers already know the mission of identifying and removing all such "HDD driven" (prefetch, cache, and pack it all up) embedded software activities has - for the past 10 years - been a secret SSD software weapon used by many leading companies to improve the speed and utilization of their integrated solid state storage systems.

Although, for economic reasons, users might still encounter hard drives in the cloud, or a micro cloud, or a hybrid storage appliance, nevertheless from the perspective of planning new systems for users - the key strategic device for enterprise data performance is the memory system.

raw chip memory... how much as SSD? how much as memory?

This raises the question:- what proportion of the raw semiconductor memory capacity ought to be usable as storage (SSD) or usable as memory (RAM - as in "random access memory" which operates with the software like DRAM but which could be implemented by other technologies).

Ratios of one thing to another have often been useful indicators of changing expectations in the storage market - because they are simple to grasp - even when the associated technologies are not.

Despite the attachment constraints of legacy interface types (same chip datapath, DDR-X, PCIe, SAS, IB, GbE, photonic etc) I anticipate that emulating SSD arrays and / or big RAM (these two choices determine the "personality" of the installed memory resources in a way we can understand today) could one day - with appropriate datapaths - be as easy to adjust as the ratio of flash memory to hard drives. We saw that ratio being promoted as a clever "try before you buy" customer experience and business development tool in 2014 - the flash juice strength slider mix from Tegile Systems - which they used to woo impecuniously minded hybrid array inclined users closer towards the benefits of more expensive to buy all flash arrays.
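As a thought experiment, the personality slider can be sketched in a few lines of Python. The class, names and capacities here are hypothetical - purely to illustrate the adjustable ratio, not any real product's API:

```python
class MemoryBox:
    """Hypothetical rackmount memory system whose raw semiconductor
    capacity can be repartitioned between RAM-like and SSD-like
    personalities - the 'flash juice slider' applied to memory."""

    def __init__(self, raw_capacity_tb):
        self.raw = raw_capacity_tb
        self.ram_fraction = 0.5  # default 50/50 personality split

    def set_ram_fraction(self, fraction):
        """Slide the mix: 0.0 = all SSD personality, 1.0 = all big RAM."""
        if not 0.0 <= fraction <= 1.0:
            raise ValueError("fraction must be between 0 and 1")
        self.ram_fraction = fraction

    @property
    def ram_tb(self):
        return self.raw * self.ram_fraction

    @property
    def ssd_tb(self):
        return self.raw * (1.0 - self.ram_fraction)

box = MemoryBox(raw_capacity_tb=100)
box.set_ram_fraction(0.2)      # mostly SSD personality for legacy software
print(box.ram_tb, box.ssd_tb)  # 20.0 80.0
```

The interesting engineering, of course, is everything this sketch hides: whether the datapaths and controllers beneath can actually honor whichever personality the slider requests.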

The more I thought about it the more I realized that as an April 1st type of article this cloud adapted memory systems blog just wouldn't work. It was too close to the kind of products we're already seeing in the market.

But as a thought provoking feature it got me thinking about some related issues. See if any of these strike a chord with you.

expectations for memory storage systems

In the past we've always expected the data capacity of memory systems (mainly DRAM) to be much smaller than the capacity of all the other attached storage in the same data processing environment. The rationale for this being the economies (dare I say cost) of access time, data density and electrical power - which were traditionally implemented by many different types of storage media (solid state, magnetic and optical) each having its own unique characteristics.

In a modern data system - even one which is entirely solid state - the arguments for tiered products are the same as they have always been because "faster" usually means "runs hotter". But this new world of "memory systems everywhere" opens the possibility that random read access times (across a significant range of applications data) are similar even if the random write time (including verify and play it again Sam) aspects of that data cycle remain variable.

But would enterprise systems be more efficient (and run faster and at lower cost) if all the software was rewritten to assume that memory was large (and could be persistent) whereas storage (initially supported to emulate legacy applications and grow revenue for such systems) was small?

For a longer discussion of such issues see - where are we heading with memory intensive systems?

frames of relativity - where is the cloud?

Earlier in this blog when describing the relative access times of the memory systems box compared to data in the cloud I was assuming that the frame of reference was from the perspective of the user's system (which is located outside the cloud). That's why I said the cloud would replace the hard drive as the slowest virtual peripheral. Of course if you're thinking about systems architecture from the angle of designing infrastructure components in the cloud - then that "slowness" isn't generally true. And you will still be designing some boxes which support physical hard drives (until a cheaper option comes along or until you can monetize the seldom accessed data in a better way).

software's role in acceleration - worth the wait

As with the SSD market in the past so too with the memory systems market there will be bigger and faster adoption of new technologies when there is more software speaking the same language. Having products which interoperate with legacy software is business plan "A" and will fund some interesting business stories. But getting to the next stage of the memory systems market - where the installed base of randomly accessible memory begins to creep up to the size of the installed capacity of SSD storage - will require a lot of new software which can leverage the memory assets with fewer backward glances.

You might say we've already got software solutions which can repurpose flash into useful roles as big memory so why do we need any new hardware at all?

We've got the new memories coming anyway. Some of them will stick in easy to identify places. Others have yet to find sustainable new roles. What are those roles? I'll be dealing with those issues in my next blog here - which I've tentatively called - the survivor's guide to all semiconductor memory and the diminishing role of form factors.

See you then.

PS - Although I didn't publish my spoof article about "cloud adaptive memory" (which was to have been its original title back in early 2016) I did spend a lot of time thinking about the consequences of those ideas. And they clearly influenced the choice of the serious articles and news coverage which I did apply myself to, as you may have seen.

To steer your way to future markets sometimes you have to consider ideas which at first seem like a ludicrous stretch from reality and follow them through for a while as if they were real before you can recognize that the truths which emerge from analyzing such notions can be useful.

Previous examples of spoof articles which were useful forerunners of reality discussed issues like why SSDs would replace HDDs (as cheap bulk storage) even if HDDs were free (in the article towards the petabyte SSD), and the complexities of signal processing in flash level discrimination (and data integrity) - which we now call adaptive DSP (here's a link to the 2008 spoof article).

For the philosophy behind this approach see my article - Boundaries Analysis in SSD Market Forecasting .

PPS - In 2014 I discussed the idea of unified storage (SAN and NAS) being the old fashioned "gentlemen's club" way in an interview with Frankie Roohparvar (who at that time was CEO of Skyera and is now Chief Strategy Officer at Xitore).

I mischievously sounded him out on my expectation of being able to add in the capability of emulating big persistent memory into the new dynasty unified solid state data box feature set. For that story see - Skyera's new skyHawk FS in archived news.

a storage architecture guide

are you ready to rethink RAM?

playing the enterprise SSD box riddle game


Why would any sane SSD company in recent years change its business plan from industrial flash controllers to HPC flash arrays?
a winter's tale of SSD market influences

showing the way ahead for all SSD and memory systems
1 big market lesson and 4 shining technology companies

Lightning, tachIOn, WarpDrive... etc
Inanimate Power, Speed and Strength Metaphors in SSD brands

What we've got now is a new SSD market melting pot in which all performance related storage is made from memories and the dividing line between storage and memory is also more fluid than before.
where are we heading with memory intensive systems?

is data remanence in NVDIMMs a new risk factor?
maybe the risk was already there before with DRAM

Some suppliers will quote you higher DWPD even if nothing changes in the BOM.
what's the state of DWPD?

DRAM's reputation for speed is like the old story about the 15K hard drives (more of the same is not always quickest nor best)
latency loving reasons for fading out DRAM

Is more always better?
The ups and downs of capacitor hold up in 2.5" flash SSDs

Reliability is more than just MTBF... and unlike Quality - it's not free.
the SSD reliability papers - classic collection

In SSD land - rules are made to be broken.
7 tips to survive and thrive in enterprise SSD

There's a genuine characterization problem for the SCM industry which is:- what are the most useful metrics to judge tiered memory systems by?
is it realistic to talk about memory IOPS?

Many of the important and sometimes mysterious behavioral aspects of SSDs which predetermine their application limitations and usable market roles can only be understood when you look at how well the designer has dealt with managing the symmetries and asymmetries which are implicit in the underlying technologies which are contained within the SSD.
how fast can your SSD run backwards?

The enterprise SSD story...

why's the plot so complicated?

and was there ever a missed opportunity in the past to simplify it?
the elusive golden age of enterprise SSDs

How committed (really) are these companies
to the military SSD business?
a not so simple list of military SSD companies

Can you trust market reports and the handed down wisdom from analysts, bloggers and so-called industry experts?
heck no! - here's why

Why do SSD revenue forecasts by enterprise vendors so often fail to anticipate crashes in demand from their existing customers?
meet Ken and the enterprise SSD software event horizon

the past (and future) of HDD vs SSD sophistry
How will the hard drive market fare...
in a solid state storage world?

Compared to EMC...

ours is better
can you take these AFA companies seriously?

Now we're seeing new trends in pricing flash arrays which don't even pretend that you can analyze and predict the benefits using technical models.
Exiting the Astrological Age of Enterprise SSD Pricing

90% of the enterprise SSD companies which you know have no good reasons to survive.
market consolidation - why? how? when?

Why buy SSDs?
6 user value propositions for buying SSDs

If you spend a lot of your time analyzing the performance characteristics and limitations of flash SSDs - this article will help you to easily predict the characteristics of any new SSDs you encounter - by leveraging the knowledge you already have.
flash SSD performance characteristics and limitations

The memory chip count ceiling around which the SSD controller IP is optimized - predetermines the efficiency of achieving system-wide goals like cost, performance and reliability.
size matters in SSD controller architecture

Are you whiteboarding alternative server based SSD / SCM / SDS architectures? It's messy keeping track of those different options isn't it? Take a look at an easy to remember hex based shorthand which can aptly describe any SSD accelerated server blade.
what's in a number? - SSDserver rank