
the enterprise SSD story

why's the plot so complicated?

and was there ever a best time to simplify it?

by Zsolt Kerekes, editor - June 17, 2015
Is there any single most useful thing I can tell you, extracted from all the thousands of incidental things and trivia I've learned from the SSD market?

In answering readers' questions about the market and in constructing major articles - I've sometimes asked myself - how do I know this stuff?

Part of the explanation which I gave in an earlier article is that if you spend decades thinking a lot about a single subject - and while doing that talk to most of the experts in that subject - and also to a lot of other people who have their own reasons to be interested - then something does rub off and stick.

As an educational program for understanding the SSD market - I don't recommend it to anyone else.

It's not scalable and not amenable to replication. And there are faster and more efficient ways for you to learn most of what you need to know. Which I hope includes trawling sites like this one.

So what have I learned? And what's the single most useful thing?

From a technology point of view - the technology is still changing.

And how SSDs interact with other parts of your data processing assets still has many adaptations and evolutions to get through before things can settle down.

The reason the adaptation continues to be so complicated is this.

In the 1970s when large scale integrated silicon technology began to disrupt the computer market with devices like the microprocessor and DRAM - they gave birth to a new generation of software companies. When using the earliest microprocessors you had to write all your own software. But whatever you did with the newest CPUs and memories was invariably cheaper than what had been done before. By the mid to late 1980s - these systems were faster too - and were starting to replace enterprise servers - and to rely on a complex ecosystem of software bigger than anything which had been seen before.

I was lucky in my observation point in those days to have explored the benefits of applying permutations of multiple processors, embryonic RAID and solid state storage as performance accelerators in real-time environments which included 3G databases and Unix platforms.

It was a decade later - when reporting on the benefits of enterprise SSDs here on this site in the late 1990s and early 2000s - that I realized that somehow the insights about SSDs as applications accelerators - were not widely appreciated.

The next few paragraphs provide a summary of that early accidental eureka moment for me - from my linkedin page and memory.

start of some bio stuff - related to this article

The SSD-CPU equivalence idea didn't seem like a big deal to me at the time (about 1988) as storage was just one of many bottlenecks my customers needed to solve.

When I was recruited into a startup called Databasix in the late 1980s - the founders could already see that new opportunities were being created to do new things based on AI and database technology - but that doing useful things in real time would only be possible by massive parallelization of CPUs (which was not a commodity at that time) and parallelization of disks (RAID for throughput) and faster storage for the latency critical parts (solid state storage).

So I and my team spent all our time playing with the fastest technology toys we could borrow (evaluate) or buy or modify and figuring out what the nature of bottlenecks were and how best to solve them in a virtualized way if possible - because many of these toys were short lived and almost every customer project changed a lot from the time it was conceived to the time it was delivered. And the only way to make it work as a business would be to make most of the core software reusable.

Anyway - then as now - there were only a small number of users who needed such crazy advanced platforms.

Finding the performance fanatics with budgets was the "business model." Our sales people found some. Which was amazing in itself as that was before the days of internet marketing. We learned a lot in a short space of years by providing platforms to researchers in the industrial, military, seismic research and related markets and also an innovative and long lived production broadcast program routing system for a major broadcaster too. It was a lot of work. We wrote all our own drivers for everything. And our objective was always to reach wire speed of whatever device we connected.

end of the bio stuff - (I'm not looking for another job BTW)

A big new hypothetical question for SSD market thinkers?

All of which - above - is the long way of introducing a question which has occurred to me often recently.

Was there ever an ideal earlier time and opportunity to inject SSD DNA into enterprise computing architecture (and OSes) - one which could have prevented the buildup of complex integration, and the associated rip-and-replace and bypass surgery of so many layers of systems software, which have inevitably followed?

I'm not sure.

Very few people realized the value of doing this even in 1988. And in those days if you tried to integrate SSD acceleration yourself - it involved a lot of analysis and rewriting OS kernels to make it work in a useful way.

Certainly the software industry didn't do it.

And before then there wouldn't have been a clear need.

And after 1988 (and throughout the subsequent decade) as the enterprise became fascinated and then bewitched by RISC/Unix most CPU designers were hell bent on a course to just add more cores and DRAM and wider memory busses and faster clocks as the simplest way to keep getting faster.

So looking back now - it's clear that the standard models for computer hardware and software architecture in the silicon chip age had evolved for over 40 years without the concept of another economically useful latency layer between hard drives and DRAM.
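To put rough numbers on that missing layer, here's an illustrative sketch. The latency figures are order-of-magnitude assumptions for the respective technology classes, not measurements of any particular device:

```python
# Rough order-of-magnitude access latencies (illustrative assumptions,
# not vendor measurements).
LATENCY_NS = {
    "DRAM": 100,               # ~100 nanoseconds
    "flash SSD": 100_000,      # ~100 microseconds
    "hard drive": 10_000_000,  # ~10 milliseconds
}

# The gap an SSD layer fills: hard drives are about 5 orders of
# magnitude slower than DRAM, while SSDs sit roughly in the middle.
dram_to_hdd = LATENCY_NS["hard drive"] / LATENCY_NS["DRAM"]
dram_to_ssd = LATENCY_NS["flash SSD"] / LATENCY_NS["DRAM"]
ssd_to_hdd = LATENCY_NS["hard drive"] / LATENCY_NS["flash SSD"]

print(f"HDD is {dram_to_hdd:,.0f}x slower than DRAM")
print(f"SSD is {dram_to_ssd:,.0f}x slower than DRAM, "
      f"but {ssd_to_hdd:,.0f}x faster than HDD")
```

A gap of several orders of magnitude between adjacent tiers is exactly the kind of space where a new economically useful layer can live - which is the point the 40-year-old standard models had no concept of.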

That didn't stop SSD pioneers from trying to break in.

But it was through cracks and rare business opportunities in which nearly every SSD sale required a huge customer learning curve.

It was as late as 2008 to 2009 that SSD accelerators were offered as a standard option by all mainstream enterprise server vendors - and it was from that time onwards that we saw the birth of a true SSD software market.

You can see why the enterprise SSD market is the most bewildering part of the enterprise market.

It's not about flash memory.

SSDs are changing a market (data processing) which was designed without any original conception of SSDs being there in the first place.

In the modern era - SSDs began sneaking into purchase orders whenever the gaps (what was possible with SSDs compared to what was possible without) looked easy enough and lucrative enough to edge into.

And the future of the story (as I've said many times before) is that future data architectures will be managed by reference to their different interconnected storage assets (SSD) rather than (as in the past) by reference to their servers.

And all the critical costs and management decisions will be about how to control the cost of the SSDs.

Note I don't use the term "flash" here deliberately - because the enterprise SSD story started before flash was an enterprise technology and will most likely persist till after too.

Anyway - somewhere at the beginning of this blog I hinted that I might say something useful about all the stuff I've learned from talking to so many founders of SSD companies and visionaries in the market.

Surprisingly the one enduring and universally useful true thing I've learned isn't about the technology.

(Today's hot chip management technology looks like steam punk when viewed from the future.)

No it's this...

When talking to long time SSD pioneers and serial company founders and those whose creations have transformed this industry of ours - I often ask - why are you still doing this? It can't be the money. Surely you could just retire and go fishing or start a boutique investment company.

The answer every time is something like this.

The SSD market is the most exciting place to be working right now. Why would you want to be somewhere else?

For me too - it's the challenge of figuring out where all these roads are going and what the landscape will look like when the whole territory has been discovered, staked out, built up, rustled and burned down and then rebuilt again. The unknown and the exciting, combined with tantalizingly predictable narratives and often surprising characters - that's the SSD story which keeps me going.

In conclusion there never was a best time in the past for the enterprise SSD market.

No past "Heroic Golden Age of SSD".

The best time is still now.
In January 2017 I was approached by a startup which has its own processor architecture and IP who asked what I could share with them about ways in which their product could be optimized for use in the SSD market. The question prompted me to update and aggregate some of my views on this...
optimizing CPUs for SSDs in the Post Modernist Era of SSD
the golden age of SSDs
Was there ever a Heroic Golden Age for the enterprise SSD market?

All Flash Array!

It was a nice idea for marketers while it lasted.
after AFAs - what's next?

what has it got in its flash boxes My Precious?
playing the enterprise SSD box riddle game

Every year I learn some important new ideas about SSDs.

But every year I also have to remember to forget or discard some old ideas which were vital to know before but which are no longer useful, valid or true.


The amount of fast flash storage needed to serve enterprise needs is very much less than the legacy raw HDD capacity.

But new generations of software bundled with new data architecture concepts will get better utilization from the enterprise flash you've already installed too.
meet Ken - and the enterprise SSD software event horizon

For many of them a single customer like that is bigger than their whole business plan.
what can you infer when flash array startups compare themselves to EMC?

DRAM (in 2016) has stayed stuck in the Y2K era of enterprise server latency and that's why its future will go the same way as the 15K hard drive.
latency loving reasons for fading out DRAM
in the virtual memory slider mix




Let's look at New Dynasty software.

It sounds simple enough. New Dynasty is a software environment and architecture which is planned at the outset to operate with SSDs.

But there are many ways of doing this even if you start out with the idea of only looking at standard servers and standard SSDs. Because adding SSD software into the mix brings its own multiplication factors.

What does a server node look like? How is it clustered or scaled? Is the server node part of the storage? Is the server node a building block for all the storage? Where should the storage live? How should it be tiered? And BTW - we're now more than willing to tier memory too.
Decloaking hidden segments in enterprise SSD
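The tiering questions above can be sketched as a toy promotion policy. This is a hypothetical illustration (the class, names and the 3-access threshold are all invented for the example; real tiering software tracks much richer heat metrics than a simple counter):

```python
# A toy storage tiering sketch: DRAM, flash and disk as three tiers,
# with frequently-read keys promoted from disk to flash.
# Hypothetical illustration - not any vendor's actual tiering logic.
from dataclasses import dataclass, field

@dataclass
class TieredStore:
    dram: dict = field(default_factory=dict)
    flash: dict = field(default_factory=dict)
    disk: dict = field(default_factory=dict)
    counts: dict = field(default_factory=dict)
    hot_threshold: int = 3  # assumed: promote after 3 reads

    def read(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1
        for tier in (self.dram, self.flash, self.disk):
            if key in tier:
                value = tier[key]
                break
        else:
            raise KeyError(key)
        # promote a hot key from disk up to flash
        if self.counts[key] >= self.hot_threshold and key in self.disk:
            self.flash[key] = self.disk.pop(key)
        return value

store = TieredStore()
store.disk["report"] = b"cold data"
for _ in range(3):
    store.read("report")
print("report" in store.flash)  # promoted to flash after 3 reads
```

Even this toy version surfaces the design questions in the quoted passage: which node owns each tier, whether the counter lives with the server or the storage, and whether memory itself should be one more tier in the same loop.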


First you learned about SLC (good flash).

Then you learned about MLC (naughty flash when it played in the enterprise - but good enough for the short attention span of consumers).

Then naughty MLC SSDs learned how to be good. (When strictly managed.)
sugaring flash for the enterprise

The big OS companies did nothing!

They were used to getting technology roadmaps from the processor makers - who told them exactly how much memory and what type of interfaces they would see in a 5 to 10 year lookahead timeframe. And SSDs weren't in those plans.
how did we get into this mess with SSD software?


Do you remember the movie Back to the Future?
are we ready for infinitely faster RAM?


introducing Memory Defined Software???
yes seriously - these words are in the right order


As a simplification you could say that all legacy platforms are really just inefficiently configured, archeologically interesting, subset implementations of future hyperconverged architecture.
towards SSD everywhere software