this is the home page of StorageSearch.com
1998 to 2018 - leading the way to the new storage frontier
SSD news since 1998
the fastest SSDs
RAM news
SSD symmetries
SSD history
top SSD companies - there are over 200 SSD OEMs - which ones matter?
DRAM's latency secret
Since the early 1970s there have been 3 revolutionary disruptive influences in the electronics and computing markets.
  • the microprocessor
  • the commercialization of the internet
  • the transition of digital storage to solid state drives in the modern era of SSDs
That's what I wrote in my 2012 article - comparing the SSD market today to earlier tech disruptions.

StorageSearch.com has been about thought leadership in the SSD market and was the first publication to recognize and promote the tremendous disruptive growth potential of SSDs and the memoryfication of computing architecture.

Since the 1990s our readers have been accelerating the growth of this industry and setting its direction and agenda.

That work is now done.

On December 25, 2018 the site will be frozen as a historic record of the years in which I was involved as a commentator and memoirist of our industry. In 2019 it will operate under new management and a new owner.

About the publisher (founded 1991)
.....
While it's a safe prediction that cloud scale actors (Google, Amazon, Baidu) will continue to be at the forefront of pushing the demand for greater efficiency and more cost effective performance - that doesn't tell us who's going to supply them.
Top SSD Companies in Q1 2018
..
SSD controllers
DWPD - examples
hard drives - articles
history of SSD market
storage history - editor selected stories
storage reliability - news & white papers

..
1.0" SSDs
1.8" SSDs
2.5" SSDs
3.5" SSDs

1973 - 2017 - the SSD story

2013 - SSD market changes
2014 - SSD market changes
2015 - SSD market changes
2016 - SSD market changes
2017 - SSD market changes

20K RPM HDDs? - SSD killed RPM

About the publisher - 1991 to 2018
Adaptive R/W flash IP + DSP ECC
Acquired SSD companies
Acquired storage companies
Advertising on StorageSearch.com
Analysts - SSD market
Analysts - storage market
Animal Brands in the storage market
Architecture - network storage
Articles - SSD
Auto tiering SSDs

Bad block management in flash SSDs
Benchmarks - SSD - can you trust them?
Big market picture of SSDs
Bookmarks from SSD leaders
Branding Strategies in the SSD market

Chips - storage interface
Chips - SSD on a chip & DOMs
Click rates - SSD banner ads
Cloud with SSDs inside
Consolidation trends in the enterprise flash market
Consumer SSDs
Controller chips for SSDs
Cost of SSDs

Data recovery for flash SSDs?
DIMM wars in the SSD market
Disk sanitizers
DRAM (lots of stories)
DRAM remembers
DWPD - examples from the market

Efficiency - comparing SSD designs
Encryption - impacts in notebook SSDs
Endurance - in flash SSDs
enterprise flash SSDs history
enterprise flash array market - segmentation
enterprise SSD story - plot complications
EOL SSDs - issues for buyers

FITs (failures in time) & SSDs
Fast purge / erase SSDs
Fastest SSDs
Flash Memory

Garbage Collection and other SSD jargon

Hard drives
High availability enterprise SSDs
History of data storage
History of disk to disk backup
History of the SPARC systems market
History of SSD market
Hold up capacitors in military SSDs
hybrid DIMMs
hybrid drives
hybrid storage arrays

Iceberg syndrome - SSD capacity you don't see
Imprinting the brain of the SSD
Industrial SSDs
Industry trade associations (ORGs)
InfiniBand
IOPS in flash SSDs

Jargon - flash SSD

Legacy vs New Dynasty - enterprise SSDs
Limericks about flash endurance

M.2 SSDs
Market research (all storage)
Marketing Views
Memory Channel SSDs
Memory Defined Software - yes really
Mice and storage
Military storage

News
Notebook SSDs - timeline
PATA SSDs
PCIe SSDs
Petabyte SSD roadmap
Power loss - sudden in SSDs
Power, Speed and Strength in SSD brands
PR agencies - storage and SSD
Processors in SSD controllers

Rackmount SSDs
RAID systems (incl RAIC RAISE etc)
RAM cache ratios in flash SSDs
RAM memory chips
RAM SSDs
RAM SSDs versus Flash SSDs
Reliability - SSD / storage
RPM and hard drive spin speeds

SAS SSDs
SATA SSDs
SCSI SSDs - legacy parallel
Security
Services
Software
Symmetry in SSD design

Tape libraries

Test Equipment
Top 20 SSD companies
Training
Tuning SANs with SSDs

USB storage
User Value Propositions for SSDs

VCs in SSDs
VCs in storage - 2000 to 2012
Videos - about SSDs

Zsolt Kerekes - (editor linkedin)

..
animal brands in SSD
..
The SSD market isn't scared of mice.

But mice aren't the only animals you can find in SSD brands.

There are many other examples of animal brands in SSD as you can see in this collected article.

And before the SSD market became the most important factor in the storage market there were also many animals to be found in other types of storage too.
..


StorageSearch.com is published by ACSL founded in 1991.

© 1992 to 2018 all rights reserved.

Editor's note:- I currently talk to more than 600 makers of SSDs and another 100 or so companies which are closely enmeshed in the SSD ecosphere.

Most of these SSD companies (but by no means all) are profiled here on the mouse site.

I still learn about new SSD companies every week, including many in stealth mode. If you're interested in the growing big picture of the SSD market canvas - StorageSearch will help you along the way.

Many SSD company CEOs read our site too - and say they value our thought leading SSD content - even when we say something that's not always comfortable to hear. I hope you'll find it useful too.

Privacy policies.

We never compile email lists from this web site - not for our own use nor anyone else's - and we never ask you to log in to read any of our own content on this web site. We don't do pop-ups, pop-unders or blocker ads and we don't place cookies on your computer. We've been publishing on the web since 1996 and these have always been the principles we adhere to.

are we ready for infinitely faster RAM?

(and what would it be worth)

by Zsolt Kerekes, editor - May 14, 2018
If someone could offer you a memory system which had the same storage density (bits per chip / module / box) as mainstream RAM - but which had latency and bandwidth (as measured by what the application sees) which were infinitely faster - could we use it? How much would that be worth? And how would it change markets? For the past 25 years the computer market has voted with its spending for bigger rather than faster memory - but is the market now receptive to a disruptive change in its ideas about the user value proposition of memory performance?

what's infinitely faster RAM?

I know that some of you who are reading this (and maybe it's You) are the kind of people who found companies or fund them (thanks for staying with me on this) and when you noticed the words "infinitely faster" in my title above you wondered if it was some kind of late April Fools' article. (No - I wrote something else.)

Infinitely? Really? - I know you can't put the value of infinity into a business plan (although it does come in useful sometimes for testing boundary assumptions about how markets will react to disruptive change). So let me explain my use of the term "infinitely faster RAM" in this article to mean RAM that's maybe 20x or 100x faster than what you can get today - as measured by critical bottlenecks in applications. For my purposes here I'm saying that latency is the most critical fastness factor - and while acknowledging that there aren't generally accepted methods of defining what "faster memory" means - I think it's good enough for my argument below to assume that if the black box behaves consistently as if the memory were X-times faster (or X-times sooner) than before - then that's a good enough understanding.

This also assumes we're on the same page when it comes to agreement on what RAM is - which is a shifting subject I have written about before. For my purposes - if it behaves like RAM - and can transparently replace conventional RAM (chips, modules, boxes or markets) - then that's good enough for now, without worrying about implementation details. I'm not going to speculate on the technology of the infinitely faster RAM - I'll leave that problem for someone else (maybe You). In this article I'm posing the same kind of philosophical and business what-ifs which I posed in earlier phases of the SSD and memoryfication markets - which asked: if we could get this new stuff - what new products and markets would we get? - and how would that change pre-existing markets?

No! to the infinitely fast one transistor memory cell. I'd like to make it clear I'm not interested here in the idea of so-called "ultra fast" transistors, memory cells and that ilk of research. As far as I'm concerned if you can't put many megabytes and preferably gigabytes of raw capacity into the infinitely faster RAM (at chip / module level) then it's not the kind of animal I'm talking about here.

some lessons from history - applications create markets and define acceptable latency

Up to the early 2000s the value propositions for different implementations of semiconductor RAM were graded by latency and power - and the order of precedence (DRAM, SRAM, SoC memory on chip - from slowest to fastest) hadn't changed since the dawning of their mutual market coexistence in the 1970s.

If you wanted bigger capacity - you chose DRAM. If you wanted faster latency at a board level of integration you chose SRAM - which ran hotter and was smaller in capacity. If you wanted faster than that - there was no contest. It had to be SoC (usually in the form of RAM on a true ASIC or gate array but also latterly on FPGA).

At a board level - and system level - DRAM and SRAM reached their latency limits in about 1999 and haven't got any faster since.

It didn't matter so much in the early 2000s because enterprise processors weren't getting any faster either. And the shape of applications (users doing simple stuff on the internet) meant that datacenters could get by with affordable technologies which offered higher densities and lower power (more users satisfied per box or watt or dollar) rather than giving users speed they didn't need and couldn't use. The computer industry didn't need faster memory. And when demand for more applications performance did grow - particularly in the early days of the cell phone market (and social networks) - the enterprise SSD market took up the slack adequately, as there had been plenty of latency bottlenecks built around earlier generations of (rotating) storage.

Nowadays cell phones are coinage, spies and slot machines. And they've been joined by IoT. There's so much intelligence which can be gathered about the meaning of it all. But no memory or computing platforms are fast enough to resolve everything which can be imagined by the next master plan in a timely fashion.

memory world war 1 - flash versus DRAM - in enterprise storage

I guess the first time there was a serious challenge to the role of enterprise DRAM from another memory type in the acceleration space was in the early years of enterprise flash adoption (from around 2004) - a contest which was fought out and soon won by flash arrays supplanting RAM SSDs.

If you'd asked most SSD people even as late as 2007 whether they really expected DRAM to be replaced by flash as the mainstream enterprise SSD based acceleration technology - there were arguments which could be made for either. But (as we now know) by 2012 the RAM SSD market was effectively extinct. The principal reasons that a slower (higher latency) memory - flash - could and did replace a faster memory - DRAM - in an acceleration role were:-
  • Typical user installations needed more memory capacity than could be integrated by DRAM in a single box. The latency fabric from interfaces wrapped around these SSD assets negated the latency advantage of DRAM chips compared to flash chips. (The flash chips had much higher storage capacity per chip and required much less electrical power.)
  • Most of the easy acceleration advantages of enterprise SSDs came from read requests rather than writes. That's just the way that the legacy installed software base worked. That bias in the profile of memory R/W meant that the asymmetric R/W latency of flash chips - with reads being orders of magnitude faster than writes - was not a serious obstacle to adoption.
Those acceleration lessons - initially duelled out in Fibre Channel SAN rackmount boxes - had been settled by the time the PCIe SSD market got started. They showed that a faster memory could lose out to a slower memory in an acceleration focused role.
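To picture why the fabric mattered more than the chips, here's a minimal back-of-envelope sketch in Python. All of the latency figures are illustrative assumptions chosen for order-of-magnitude effect - not measurements from any particular product.

```python
# Illustrative end-to-end latency model for a networked acceleration box.
# All figures are assumed orders of magnitude - not vendor measurements.

FABRIC_US = 200.0      # assumed interface / SAN fabric round trip (microseconds)
DRAM_CHIP_US = 0.1     # roughly 100 ns DRAM chip access
FLASH_READ_US = 50.0   # roughly 50 us flash page read

def effective_latency_us(chip_us, fabric_us=FABRIC_US):
    """What the application sees: chip latency plus the wrapper around it."""
    return chip_us + fabric_us

ram_ssd = effective_latency_us(DRAM_CHIP_US)     # about 200 us
flash_ssd = effective_latency_us(FLASH_READ_US)  # about 250 us

# The chip-level gap is ~500x, but the end-to-end gap collapses to ~1.25x -
# so flash's capacity and power advantages win the acceleration role.
print(f"RAM SSD {ram_ssd:.1f} us vs flash SSD {flash_ssd:.1f} us "
      f"= {flash_ssd / ram_ssd:.2f}x")
```

The exact numbers don't matter. The point is that once the wrapper dominates, the chip with the higher capacity and lower power wins - which is what happened when flash arrays supplanted RAM SSDs.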

But that was storage... What does that tell us - if anything - about different speeds of memory used as memory?

The early experience (2014 to 2017) of tiered memory from the DIMM wars market - in which flash can be tiered with DRAM (using form factors as diverse as DIMMs, PCIe modules and even SATA arrays) - is that there can be trade-offs in big data applications whereby trading the size of memory in the box against the native speed of the raw memory (when the fastest memory is DRAM) pays off for exactly the same reasons which pertained in flash storage accelerators. And the benefits of doing so have been mostly related to improvements in cost rather than any risk free overwhelming advantages in application latency. That's because the low hanging fruit of tiered flash speedup was mostly already harvested by the bottlenecks uncovered and bypassed by server based PCIe SSD adoption.
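As a rough way to see how trading memory size against raw memory speed can pay off, here's a minimal hit-ratio sketch in Python. The latencies and hit rates below are illustrative assumptions, not benchmark data.

```python
# Illustrative sketch: average access latency for a small all-DRAM tier that
# spills to SSD storage, versus a bigger (but natively slower) DRAM + flash
# memory tier. All numbers are assumptions for illustration only.

def avg_latency_us(hit_ratio, hit_us, miss_us):
    """Hit-ratio weighted average access latency in microseconds."""
    return hit_ratio * hit_us + (1.0 - hit_ratio) * miss_us

# Small all-DRAM tier: very fast hits, but misses go out to SSD storage.
dram_only = avg_latency_us(hit_ratio=0.70, hit_us=0.1, miss_us=100.0)

# Bigger DRAM + flash-as-memory tier: slower "hits" in flash, far fewer
# trips out to storage because more of the working set fits in the box.
tiered = avg_latency_us(hit_ratio=0.99, hit_us=5.0, miss_us=100.0)

print(f"small DRAM tier: ~{dram_only:.1f} us average")   # roughly 30 us
print(f"big DRAM+flash tier: ~{tiered:.1f} us average")  # roughly 6 us
```

In other words a bigger but natively slower memory tier can deliver a better average than a smaller, faster one - provided more of the working set stays inside the box.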

Here's one last look backwards at the lessons from history - before going on to speculate about the value of infinitely faster memory in the future.

I think that if you could go back in time and take with you a warehouse of today's fastest and highest capacity DRAM chips - along with plug compatible adapters to retrofit them into past server and storage systems - then you wouldn't change the world of applications because most performance constrained servers in the past - already had the maximum amount of DRAM installed. And if they didn't - then there were so many bottlenecks built into interfaces and software that - even with an ideally configured modern memory array taken back in time and fitted as an upgrade - you would at best typically get a 2x or 3x speedup or - very often - get no speed up at all. That's why the early adoption of enterprise SSD accelerators was slow and problematic. There were too many problems baked into the ecosystem for any one new product to make enough of a change.

today's world of enterprise memoryfication and memory defined software

Today's computing market - where SSDs are everywhere and storage latency bands have a precise value and can be controlled - is better placed to consider and use faster memory systems. It's the only direction of travel to enable faster software.

The business needs are well understood. It's easy nowadays to see a direct link between faster decisions (based on diverse internet signals picked up and analyzed by real-time AI) and measurable economic outcomes. Also new business models and markets are being created by the application of heavy duty machine learning.

The software industry has had nearly a decade to become accustomed to thinking about having meaningful choices in latency - which are determined by different types and tiers of semiconductor devices. Most of the benefits of SSDs came when enough software changed to fit in with an SSD world. Those changes were hard to make because of decades of architectural lethargy in the leadup to the modern SSD era. The next stage of software change - towards more memoryfication - is already underway - due to momentum and despite the lack of revolutionary memory technologies to take hardware to the next level.

So if you could offer a GB scale memory chip with 20x or 100x lower latency than existing mainstream DRAM - there are companies who could do useful things with it.

simplistic valuation of much faster memory accelerator systems

The most obvious market sizing example for infinitely faster memory accelerator systems is their application in dealing with temporary data.

A simplistic way of looking at this is that overwhelmingly most of the data which enters processor space is temporary data. So if you have a memory accelerator which is 100x faster than the original memory which you used before to solve this type of problem - then provided that you don't run out of new data to feed into the machine and provided that the usable memory size for any single instance of the computation can fit into the memory space provided - then you need approximately 100x fewer machines to provide the same services.

The equivalence of speed and machines (one machine which is 100x faster being equivalent in capability to 100 slower machines) is similar to the SSD-CPU equivalency model which helped to cost justify SSD accelerators in the early 2000s as server replacements rather than as expensive $/TB storage.
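Here's a toy version of that speed-for-machines equivalence model in Python. Every input (server cost, speedup, utilization) is a made-up assumption for illustration - the point is the shape of the calculation, not the numbers.

```python
# Toy sizing model: how many conventional servers does an Nx faster memory
# accelerator displace, and what is that worth? All inputs are illustrative
# assumptions, not market data.

def accelerator_value(speedup, cost_per_server, utilization=1.0):
    """Upper-bound value of one Nx-faster accelerator, priced against the
    conventional servers it can replace - valid only while the workload
    keeps it fed and each problem instance fits in its memory space."""
    return speedup * cost_per_server * utilization

SPEEDUP = 100                # "infinitely faster" ~ 100x, per the article
COST_PER_SERVER = 10_000.0   # assumed fully loaded cost of one server ($)

ceiling = accelerator_value(SPEEDUP, COST_PER_SERVER)               # $1,000,000
realistic = accelerator_value(SPEEDUP, COST_PER_SERVER, utilization=0.5)

print(f"ceiling value per accelerator: ${ceiling:,.0f}")
print(f"value at 50% utilization:      ${realistic:,.0f}")
```

The utilization term is the catch flagged above - the equivalence only holds while there's enough new data to keep the accelerator fed and the problem fits in its memory space.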

However, the memory accelerator has a greater utility than this comparison might suggest on its own because the ability to solve problems sooner, and the ability to solve more complex problems for the first time within a shorter real-time period create new value propositions by making new algorithmic engines viable for new markets and applications.

This ability to create new markets is like the dynamic energy seen in the computer market in the 1980s and 1990s - which began with microprocessors making computation cheaper but ended (around 1999) when limits were reached in how fast the GHz clock rate of any particular core could run. And the collective decision of server companies that faster was not necessary is partly to blame for the dumbing down of processor design architecture for nearly 20 years thereafter.

The good thing which emerged from that lack of investment in making commercial processors faster was that it created the fertile soil for the SSD acceleration market - which became the only game in town.

And now - with a greater understanding of memory (and of the interplay of roles and values between raw memory capacity and raw latency), and helped by an SSD rich ecosystem in which larger portions of any computing problem can be economically gathered into systems where the random access time can be arranged to be a handful of microseconds rather than tens of milliseconds - the creative juices of computing architecture have been turning to the much needed creation of new memoryfication compatible computing engines and new processor architectures.

That's what has been giving rise to the proliferation in recent years of commercial in-situ SSDs, processing in / near memory FPGA arrays and dedicated memory accelerators for machine learning and similar neural algorithms.

The early implementations which you can read about in the SSD news archives demonstrate 2 things.
  • The value of an Nx faster memory-compute accelerator can indeed be measured by at least the cost of all the previous traditional hardware which was needed to solve similar problems before. So 1 new PU (TPU etc) is indeed worth Nx conventional server CPUs / GPUs - when there is sufficient work to be done within a particular problem shape universe. Their value is application specific. (Some workloads are accelerated better than others.)
  • Modern memory accelerators don't have to resemble either dumb memory or dumb processors. Provided that they can interoperate with conventional servers and infrastructure the new memory accelerators are best viewed as black boxes whose internal details may change and adapt (just like search engine algorithms) when more data suggests future areas of improvement.
This will create an existential problem for makers of memory testers - because the future of high performance memory systems (where the money used to be) will become increasingly proprietary.

And as you can realize yourself this will create a cut-off point for manufacturers of high end server memory - because high end memory systems will inevitably look more like a custom processor market.

And as for traditional processor makers - the new memory accelerator systems don't care about and don't need their "instruction set" based backwards compatibilities and roadmaps - because the new ML / NN engine roadmaps (if they need any compatibility at all) will be "application" and "algorithm" based.... More like the kind of compatibilities you witness in successive generations of cloud APIs.

An interesting question is how many different types of memory accelerators the world needs and can support.

One view might be that the world doesn't need more than a handful (one for Google, Amazon, Apple, Baidu etc) because if the biggest benefit and visibility into design optimization only occurs at massive scale then those companies will each drive their own designs.

On the other hand if this is indeed the start of a new renaissance in computing architecture - then you could argue that there will be the usual explosion of startups hoping to serve new markets created by the new ideas in architecture. (And there may be benefits of such new ideas which occur without being colocated in the cloud.)

conclusion

Going back to my questions in the title...

Are we ready for infinitely faster RAM?

I think I've made a case for the answer being - Yes. More ready than we've ever been before.

And as to - what would it be worth?

My advice to founders of startups in infinitely faster memory accelerators is - don't sprinkle the number "infinity" about too much in your spreadsheets when guessing the market size or attaching it to the price which you think ideal customers would be prepared to pay. There are plenty of big numbers you can choose which are smaller and will still sound impressive without straining credulity.
..

Introducing Memory Defined Software? Yes seriously - these words are in the right order. This article invites you to think about a relatively new market for software which is strongly typed to new physical memory platforms and nvm-inside processors while unbound from the latency chained tyranny of memory which is virtualizable by storage.
Introducing Memory Defined Software
..
The convenience of DWPD as a way of selecting SSDs for application roles meant it quickly gained widespread adoption in enterprise, cloud and embedded markets but DWPD has limitations too.
what's the state of DWPD?
..
There's a genuine problem for the SCM
(storage class memory) industry.
How to describe performance.
is it realistic to talk about memory IOPS?
controllernomics - is that even a real word?
..
The semiconductor memory business has toggled between under supply and over supply since the 1970s.
an SSD view of past, present and future boom bust cycles in the memory market
..
As you may have guessed I talk to a lot of companies which design SSDs and SSD controllers.

I also talk to people who design processors.
optimizing CPUs in the Post Modernist Era
..
Is more always better?
The ups and downs of capacitor hold up in 2.5" MIL flash SSDs
..
Some of the winners and losers from the memory shortages in 2017 were easy to spot. But there have been new opportunities created too.
miscellaneous consequences of the 2017 memory shortages

..
Despite many revolutionary changes in memory systems design and SSD adoption in the past decade we are still not at the stage where it's possible to predict and plot the next decade as merely an incremental set of refinements of what we've got now.
Are we there yet? - 40 years of thinking about SSDs

..
Older readers will remember that the question of whether memory chips might need passports and visas to travel from one part of the world to the other (and the related question of what kind of buyer reception these coach class chip tourists would get when they arrived) was for many decades the norm.
can memory chips be made in the wrong place?

..
Data recovery from DRAM?
I thought everyone knew that

..
the dividing line between storage and memory is more fluid than ever before
where are we heading with memory intensive systems?

..
Enterprise DRAM has the same (or worse) latency now as it had in 2000. The CPU-DRAM-HDD oligopoly optimized DRAM for a different set of assumptions than we have today in the post modern SSD era.
latency loving reasons for fading out DRAM

....
I said to a leading NVDIMM company... This may be a stupid question but... have you thought of supporting a RAMdisk emulation in your new "flash tiered as RAM" solution?
what could we learn?

....
In some ways the SSD market is like that lakeside village. It's not so long ago that no one even knew where it was.
Can you tell me the best way to get to SSD Street?

..
Many of the important and sometimes mysterious behavioral aspects of SSDs which predetermine their application limitations and usable market roles can only be understood when you look at how well the designer has dealt with managing the symmetries and asymmetries which are implicit in the underlying technologies which are contained within the SSD.
how fast can your SSD run backwards?

..
The enterprise SSD story...

why's the plot so complicated?

and was there ever a missed opportunity in the past to simplify it?
the elusive golden age of enterprise SSDs

..
Why do SSD revenue forecasts by enterprise vendors so often fail to anticipate crashes in demand from their existing customers?
meet Ken and the enterprise SSD software event horizon

....
the past (and future) of HDD vs SSD sophistry
How will the hard drive market fare...
in a solid state storage world?

....
Compared to EMC...

ours is better
can you take these AFA startups seriously?

..
Now we're seeing new trends in pricing flash arrays which don't even pretend that you can analyze and predict the benefits using technical models.
Exiting the Astrological Age of Enterprise SSD Pricing


..
A couple of years ago - if you were a big company wanting to get into the SSD market by an acquisition or strategic investment then a budget somewhere between $500 million and $1 billion would have seemed like plenty.
VCs in SSDs and storage


..
Adaptive dynamic refresh to improve ECC and power consumption, tiered memory latencies and some other ideas.
Are you ready to rethink RAM?


..
90% of the enterprise SSD companies which you know have no good reasons to survive.
market consolidation - why? how? when?


..
With hundreds of patents already pending in this topic there's a high probability that the SSD vendor won't give you the details. It's enough to get the general idea.
Adaptive flash R/W and DSP ECC IP in SSDs


..
SSD Market - Easy Entry Route #1 - Buy a Company which Already Makes SSDs. (And here's a list of who bought whom.)
3 Easy Ways to Enter the SSD Market


..
"You'd think... someone should know all the answers by now. "
what do enterprise SSD users want?


..
We can't afford NOT to be in the SSD market...
Hostage to the fortunes of SSD


..
Why buy SSDs?
6 user value propositions for buying SSDs


..
"Play it again Sam - as time goes by..."
the Problem with Write IOPS - in flash SSDs


..
Why can't SSD's true believers agree upon a single coherent vision for the future of solid state storage? (They never did.)
the SSD Heresies.


..
The predictability and calm, careful approach to new technology adoption in industrial SSDs was for a long time regarded as a virtue compared to other brash markets.
say farewell to reassuringly boring industrial SSDs


..
If you spend a lot of your time analyzing the performance characteristics and limitations of flash SSDs - this article will help you to easily predict the characteristics of any new SSDs you encounter - by leveraging the knowledge you already have.
flash SSD performance characteristics and limitations


..
The memory chip count ceiling around which the SSD controller IP is optimized - predetermines the efficiency of achieving system-wide goals like cost, performance and reliability.
size matters in SSD controller architecture


..
A popular fad in selling flash SSDs is life assurance and health care claims as in - my flash SSD controller care scheme is 100x better (than all the rest).
razzle dazzling flash SSD cell care


..
These are the "Editor Proven" cheerleaders and editorial meetings fixers of the storage and SSD industry.
who's who in SSD and storage PR?