Are you ready to rethink RAM? The revolution in use-case-aware
intelligent flash could cross over into new enterprise DRAM architecture.
editor - April 2, 2014
In recent years in the
flash memory market,
we've seen SSD controller
designers revisit and challenge the fundamental assumptions surrounding the
question - what's the best way to interact with the raw memory?
And instead of
just leaving critical timing parameters "as set" in the factory by "those
who know best" (the preset magic numbers which are designed to satisfy
worst-case usage within the memory population as a whole) - SSD designers have
leveraged the idea that "local knowledge (in a systems context) is better"
- to design new
controller schemes which change the raw capabilities of memory arrays - to
enable faster speed, or lower power, or better reliability - or a combination
of desirable features.
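As a hedged illustration of that "local knowledge" idea - here's a minimal Python sketch of a controller policy which replaces a single factory-set worst-case read timing with per-block values tuned from locally observed error rates. All the class names, timings and thresholds are hypothetical - real controllers do this in firmware and hardware, not in Python.

```python
# Hypothetical sketch: replace one factory-set "worst case" read timing
# with per-block values tuned from locally observed error rates.
# Names and numbers are illustrative, not from any real controller.

FACTORY_READ_DELAY_US = 50.0   # conservative preset safe for every block

class BlockTuner:
    def __init__(self, fast_delay_us=35.0, max_correctable_bits=8):
        self.fast_delay_us = fast_delay_us
        self.max_correctable = max_correctable_bits
        self.delay_us = {}     # per-block override of the factory preset

    def record_read(self, block, observed_bit_errors):
        """Adapt timing from local knowledge instead of the global preset."""
        if observed_bit_errors <= self.max_correctable // 2:
            # Healthy block: ECC has plenty of margin, so run it faster.
            self.delay_us[block] = self.fast_delay_us
        else:
            # Marginal block: fall back to the conservative factory timing.
            self.delay_us[block] = FACTORY_READ_DELAY_US

    def read_delay(self, block):
        return self.delay_us.get(block, FACTORY_READ_DELAY_US)

tuner = BlockTuner()
tuner.record_read(block=7, observed_bit_errors=1)   # strong block
tuner.record_read(block=9, observed_bit_errors=7)   # weak block
print(tuner.read_delay(7))  # 35.0 - tuned faster than the preset
print(tuner.read_delay(9))  # 50.0 - kept at the worst-case value
```

Untouched blocks keep the factory preset - the tuning only diverges where local measurements justify it.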
But what about designers in the RAM market? Apart from 3D, packaging and
interplane connection techniques - is there any similar revolutionary
thinking going on?
In some random reading I did recently - my attention was drawn to a
collection of papers
delivered in August 2013 at a conference called MemCon.
I hadn't read these
before - partly because I was on vacation at the time - and when that finished -
I was too busy catching up with digesting the new ideas which had been
channeled through the SSD industry's premier event - the Flash Memory Summit - which
took place shortly after MemCon.
If you're wondering why I say "FMS
is the SSD industry's premier event" - then take a look at how
many times it has been mentioned in the same breath as an advanced SSD
concept on this site alone.
In 2014 - the relative timings of these 2
events have been adjusted to create a gap which is months rather than days -
so that those poor mortals who attend both - get a chance to recuperate.
new thinking in RAM architecture
For me today - the most interesting of
all the RAM related papers on the MemCon site in my reading this week - was
one which explored from a modern SSD perspective - the idea of adapting the
refresh rates in DRAM to leverage the difference between "worst case"
and "good enough" timings. In his presentation -
Memory Scaling: a Systems Architecture Perspective (pdf) -
Onur Mutlu, Assistant
Professor of Electrical and Computer Engineering -
Carnegie Mellon University -
called this "Retention-Aware DRAM Refresh".
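To make the idea concrete - here's a small hedged sketch of retention-aware refresh binning in the spirit of that proposal: instead of refreshing every row at the worst-case interval, rows are grouped into bins by measured retention time and each bin is refreshed only as often as its weakest row needs. The bin intervals and retention numbers below are illustrative, not taken from the paper.

```python
# Hedged sketch of retention-aware DRAM refresh (binning by retention).
# All intervals and the toy row population are illustrative.

WORST_CASE_INTERVAL_MS = 64          # the traditional one-size-fits-all rate

def assign_bins(row_retention_ms, bin_intervals_ms=(64, 128, 256)):
    """Map each row to the longest safe refresh interval among the bins."""
    bins = {interval: [] for interval in bin_intervals_ms}
    for row, retention in row_retention_ms.items():
        # pick the largest bin interval that is still <= the row's retention
        safe = [i for i in bin_intervals_ms if i <= retention]
        bins[max(safe) if safe else bin_intervals_ms[0]].append(row)
    return bins

def refreshes_per_second(bins):
    """Total refresh operations per second across all bins."""
    return sum(len(rows) * 1000 / interval for interval, rows in bins.items())

# A toy population: one weak row, the rest retain data much longer.
retention = {0: 70, 1: 300, 2: 150, 3: 900, 4: 500}
bins = assign_bins(retention)
baseline = len(retention) * 1000 / WORST_CASE_INTERVAL_MS
print(refreshes_per_second(bins) < baseline)  # True - fewer refresh ops
```

Only the weak row stays on the 64ms treadmill - the rest of the population does less than half the refresh work it would under the worst-case preset.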
If you only like
to learn one new thing a month - that in itself seems like a good enough place
to stop - but it's just one of the warmup steps for the intensive workout
which follows.
If you think you may be up to that challenge - take a
look at Onur Mutlu's accompanying paper -
Memory Scaling: A Systems Architecture Perspective (pdf) - which among other things
- proposes putting SSD-like thinking into the design of DRAM and moving away from
the idea of treating "DRAM as a passive slave device."
The paper is rich with ideas such as:-
- tiering within the DRAM (using 2 level latency-segmented bitlines)
- moving blocks of data within the RAM - without using the external bus -
with a new design and topology of sense amps

And here's a quote from
Onur Mutlu's paper which I think resonates with the enterprise SSD
market:
"Our past work showed that application-unaware
design of memory controllers, and in particular memory scheduling algorithms,
leads to uncontrolled interference of applications in the memory system. Such
uncontrolled interference can lead to denial of service to some applications,
low system performance, and an inability to satisfy performance requirements,
which makes the system uncontrollable and unpredictable."
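The interference problem described in that quote can be illustrated with a toy scheduling simulation. This is not Mutlu's algorithm - just a minimal round-robin-by-application policy standing in for any application-aware scheme - and the workload, queue model and app names are invented for illustration.

```python
# Toy simulation of memory-scheduler interference: an application-unaware
# FCFS policy lets a request-heavy app delay a light app, while a simple
# round-robin-by-application policy (one illustrative "application-aware"
# scheme) bounds that delay.  Everything here is hypothetical.

from collections import deque

def service_order_fcfs(arrivals):
    """Serve requests strictly in arrival order, ignoring which app sent them."""
    return [app for _, app in sorted(arrivals)]

def service_order_app_aware(arrivals):
    """Alternate between per-application queues (round-robin)."""
    queues = {}
    for t, app in sorted(arrivals):
        queues.setdefault(app, deque()).append(t)
    order = []
    while any(queues.values()):
        for app in list(queues):
            if queues[app]:
                queues[app].popleft()
                order.append(app)
    return order

def first_completion(order, app):
    """Position (1-based) at which the app's first request is served."""
    return order.index(app) + 1

# "hog" floods the controller early; "light" sends one request slightly later.
arrivals = [(t / 10, "hog") for t in range(8)] + [(0.75, "light")]
fcfs = first_completion(service_order_fcfs(arrivals), "light")
aware = first_completion(service_order_app_aware(arrivals), "light")
print(fcfs, aware)  # light waits behind the whole hog burst under FCFS
```

Under FCFS the light app's single request is served 9th - behind the hog's entire burst - while the application-aware policy serves it 2nd. That gap is the "uncontrolled interference" the quote is talking about.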
So will we see more flash-controller-like functions inside future DRAMs?
That depends on the system cost-benefits. (And whether such schemes can be
implemented in commercially scaled semiconductor layouts - rather than the kind
of conceptual lines drawn to connect virtual blocks on a whiteboard.)
A role for embedded controllers is already an integral part of the
Hybrid Memory Cube (launched in
October 2011) -
but Carnegie Mellon's ideas about new memory designs cross the frontier line
which artificially separates different chips even within an HMC architecture.
There can sometimes be money to be made from some of these
blue sky academic research ideas.
Yesterday for example (April 1,
2014) - Marvell reported
that a US court this week determined that the company should pay $1.54
billion to Carnegie Mellon University for allegedly infringing patents related
to hard drive technology.
Whatever the final outcome of any appeals process - it's keeping
some smart lawyers in work.