
introducing Memory Defined Software

yes seriously - these words are in the right order.

A new market for software which is strongly typed to
new physical memory platforms and nvm-inside processors
while unbound from the tyranny of memory virtualizable by storage.

by Zsolt Kerekes, editor - February 14, 2018
The existence of a market which provides independent software support for solid state storage, SSDs and tiered memory for enterprise use has a relatively short history (only about 7 to 10 years) compared to SSDs themselves. A tremendous amount has been accomplished in that time (as you can see in the SSD news archives) as the computing industry transitioned through three stages: first shoe-horning SSDs into storage software models which had originally been written for hard drives; then optimizing system software to detect and bypass the hardcoded rotating-drive delay workarounds which had been buried in every type of application software; and finally creating a new foundation of software primitives (NVMe) which began with the assumption that storage could be solid state.

So far, so good. There have been some very talented companies which have revisited storage software assumptions from inside the drive, outside in the array, in the interfaces, and in the associated stacks below, above and around - so that today you can realistically expect to get operational characteristics from solid state storage assets which are considerably better than whatever came before. And although there is still work to be done, the storage industry can congratulate itself for collectively having done a good job - despite starting out in a shambolic state of disarray because the usual suspects didn't see the SSD avalanche coming down the hill.

And it's because of the ubiquity of solid state storage assets in the enterprise and the promise shown by early generations of memory fabrics that the next phase of revolution in software is now underway - which is how we get to memory defined software.

Let's backtrack briefly to hardware - because hardware always comes first. You can write all the software you like which pretends that you're running a new computer business game on a new computing platform - but you only start getting the benefits and the thrills by doing it on the new hardware. Just over a year ago on this home page I wrote a blog - after AFAs - what's the next box? (cloud adapted memory systems) - which hinted at the kind of brew we should expect to see after the earlier pioneering percolators of the NVDIMM wars had settled their territory disputes with alternative memories, after tiered memory had found its place relative to tiered storage, and after PCIe's transition from unruly invader to settler in the territory of big memory fabrics - which until not long before had been dominated by fast versions of very old interfaces (IB and GbE). And just to warn you: like previous land grabs, PCIe's position as a convenient gateway into big memory spaces is no more sacrosanct than what came before, as Gen-Z may be a faster way to do things in future - although the lessons of InfiniBand versus Ethernet show that sometimes the new does not improve fast enough to displace what came before.

You've had plenty of warning that something is coming.

What do I mean by Memory Defined Software?

Simply this... Software which has been deliberately written to take advantage of the computational realities of memory with special characteristics in order to get behavior which was not possible before. The special characteristics may take many forms:-
  • nvm inside processors to enable instant reboot or context switches.
  • trusted persistent memory which is used as an application-dependent fast look-up or code translation / computational acceleration / interpretive resource.
  • memory which is bigger than traditional storage capacities - and which does not break when you hit it with zillions of memory intensive operations which require sub microsecond random read/modify/write/and move latencies.
  • memory with embedded in-memory processing capability (achieved by FPGA or ASIC).
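The second characteristic above - persistent memory as a fast application lookup resource - can be sketched in a few lines. This is my own illustrative example (not from the article): it assumes a byte-addressable persistent region exposed as a memory-mapped file, with an ordinary temp file standing in for a real DAX-mounted pmem device.

```python
import mmap
import os
import struct
import tempfile

REGION_SIZE = 4096

# Stand-in for a persistent-memory region; on a real system this would be
# a file on a DAX-mounted pmem filesystem, mapped byte-addressably.
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * REGION_SIZE)

def open_region(path):
    f = open(path, "r+b")
    return f, mmap.mmap(f.fileno(), REGION_SIZE)

# Writer: store a lookup entry directly in the mapped region - no storage
# stack serialization, just a memory write plus a flush.
f, region = open_region(path)
struct.pack_into("<Q", region, 0, 0xCAFEBABE)
region.flush()   # analogous to a persistence barrier (e.g. CLWB + fence)
region.close(); f.close()

# Reader (e.g. after an instant reboot): the entry is still there, readable
# at memory speed with no recovery or deserialization step.
f, region = open_region(path)
value, = struct.unpack_from("<Q", region, 0)
region.close(); f.close()
print(hex(value))
```

The point of the sketch is the shape of the code path: writes and reads are plain memory operations, and durability is a flush, not a trip through a block storage stack.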
At first some of the new memory defined software which is designed to run on new memory systems may resemble the functional characteristics of software which was developed to run on tiered memory systems which include SSDs and flash as RAM virtualization. But just as storage software evolved so that code written for flash environments could no longer run with acceptable performance on HDD arrays - so too the split between true memory defined software and software written for solid state storage installations will quickly become apparent. I think sooner rather than later - as the stimulus driving new memory code is coming from newer, faster moving users who are more nimble in their adoption of new platforms which solve data dependent problems and who don't carry the same decades-long baggage and requirement for backwards compatibility.

This is the first time you've seen me use the term "Memory Defined Software" in an article and it may not feel quite right at first, as your brain juggles its relationship to SDS (Software Defined Storage) and the very similar sounding (but entirely different) Software Defined Memory. But people have been experimenting with software and architectures based around the Memory Defined Software concept for a while to solve embedded problems. In the next stage of the memoryfication of the enterprise I think it will become clearer that this is a big new market opportunity - so I thought it's time to recognize it for what it is and give it its true name.

see also:- are we ready for infinitely faster RAM?

Controllernomics sets the limits on the quality of data systems latency seen at the server motherboard level, no matter how good the raw memory cell R/W times.
controllernomics - is that even a real word?

In some ways the SSD market is like that lakeside village. It's not so long ago that no one even knew where it was.
Can you tell me the best way to get to SSD Street?

There's a genuine problem for the SCM (storage class memory) industry. How to describe performance.
is it realistic to talk about memory IOPS?
Getting acquainted with the needs of new big data apps
Editor:- February 13, 2017 - The nature of demands on storage and big memory systems has been changing.

A new slideshare - the new storage applications by Nisha Talagala, VP Engineering at Parallel Machines provides a strategic overview of the raw characteristics of dataflows which occur in new apps which involve advanced analytics, machine learning and deep learning.

It describes how these new trends differ from legacy enterprise storage patterns and discusses the convergence of RDBMS and analytics towards continuous streams of enquiries. And it shows why and where such new demands can only be satisfied by large capacity persistent memory systems.
slideshare by Parallel Machines - memory and storage demands from new real time analytics and other new apps
Among the many interesting observations:-
  • Quality of service is different in the new apps.

    Random access is rare. Instead the data access patterns are heavily patterned and initiated by operations in some sort of array or matrix.
  • Correctness is hard to measure.

    And determinism and repeatability are not always present for streaming data - because, for example, micro-batch processing can produce different results depending on arrival time versus event time. (Computing the right answer too late is the wrong answer.)
Nisha concludes "Opportunities exist to significantly improve storage and memory for these use cases by understanding and exploiting their priorities and non-priorities for data." the article
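The "heavily patterned access" observation can be made concrete with a toy sketch (my example, not from the slides): analytics and ML workloads tend to touch data as whole-array operations whose access order is fully determined by the layout - something a big memory system can stream and prefetch - rather than as pointwise random reads.

```python
import random

random.seed(0)
rows, cols = 1000, 64
# e.g. 1000 samples of 64 features each
data = [[random.random() for _ in range(cols)] for _ in range(rows)]

# Legacy/storage-style pattern: many small independent random lookups.
random_reads = [data[random.randrange(rows)][random.randrange(cols)]
                for _ in range(100)]

# New-app pattern: one dense pass over the whole structure (here a
# matrix-vector product). The access order is dictated by the array
# layout, so a large memory system can stream it instead of seeking.
weights = [random.random() for _ in range(cols)]
scores = [sum(x * w for x, w in zip(row, weights)) for row in data]

print(len(scores))   # one score per row from a single patterned sweep
```

Both loops read the same kind of data, but only the second exposes a predictable pattern that memory hardware can exploit.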

SSD software news
where are we heading with memory intensive systems?
You can feel the Post Modernist Era of SSD in the air everywhere. What does that mean for CPUs?
optimizing CPUs for use with SSDs in the Post Modernist Era of SSD and Memory Systems
GridGain Systems - an example of Memory Defined Software
A classic example of Memory Defined Software (and one of the easiest in my list of examples to understand) is the in-memory computing software solution set from GridGain Systems. In its earliest implementations GridGain's products could be regarded as a memory resident relational database. But by 2018 GridGain's solutions had expanded their ambitions and reach to support multi-node memory, cluster snapshots, ACID transactions, backup and persistence.

Moving beyond traditional SQL in-memory architectures GridGain has comfortably expanded to interoperate with cloud-based architectures and IoT.
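The pattern GridGain exemplifies - a dataset that lives entirely in memory, with durability provided by snapshots rather than a per-write storage path - can be sketched with a toy store. To be clear, this is NOT GridGain's API; the class and method names below are invented for illustration.

```python
import json
import os
import tempfile

class MemoryStore:
    """Toy in-memory key-value store with snapshot persistence.

    Illustrative only: reads and writes are plain in-memory dict
    operations, while durability comes from periodic bulk snapshots
    rather than routing every write through a storage stack.
    """
    def __init__(self, snapshot_path):
        self.snapshot_path = snapshot_path
        self.data = {}
        if os.path.exists(snapshot_path):
            with open(snapshot_path) as f:
                self.data = json.load(f)   # recover from the last snapshot

    def put(self, key, value):
        self.data[key] = value             # memory-speed write, no I/O

    def get(self, key):
        return self.data.get(key)          # memory-speed read

    def snapshot(self):
        with open(self.snapshot_path, "w") as f:
            json.dump(self.data, f)        # durability as a bulk operation

path = os.path.join(tempfile.mkdtemp(), "snap.json")
store = MemoryStore(path)
store.put("account:1", 100)
store.snapshot()

restarted = MemoryStore(path)              # simulate a node restart
print(restarted.get("account:1"))
```

The design choice worth noticing is where the I/O lives: in a storage-defined design every put would touch the disk path; here the hot path never leaves memory.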
Micron's ACS - viewed as Memory Defined Software
An outer limits example of Memory Defined Software (but one which has profound promise for changing the boundaries of compute platforms) is the work being done by Micron to create development platforms for its in-situ FPGA-inside-memory-array accelerators - ACS datasheet (pdf).

Designing useful accelerators for new applications using FPGAs which can leverage the dataflow benefits of integrating the logic into offload intelligent memory is a large scale problem which crosses many divides of competence. Micron has been working with machine learning tool companies towards the holy grail of recompiling the nitty gritty elements of big data problems into custom engines and architectures which can be created in months rather than years of iterative design.

This effort was outlined in a blog Why Memory Matters in Machine Learning for IoT (March 2018) - by Brad Spiers - Principal Solutions Architect, Advanced Storage at Micron.

the dividing line between storage and memory is more fluid than ever before