Are you ready to rethink RAM?
The revolution in use-case-aware intelligent flash could cross over into new enterprise DRAM architecture.
by Zsolt Kerekes, editor - April 2, 2014
In recent years in the flash memory market we've seen SSD controller designers revisit and challenge a fundamental assumption - what's the best way to interact with the raw memory?
Instead of just leaving critical timing parameters "as set" in the factory by "those who know best" (preset magic numbers designed to satisfy worst-case usage across the memory population as a whole) - SSD designers have leveraged the idea that "local knowledge (in a systems context) is better" - to design new adaptive R/W controller schemes which change the raw capabilities of memory arrays - to enable faster speed, or lower power, or better reliability - or a combination of desirable features.
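To make that concrete, here's a minimal sketch of what an adaptive read policy can look like (my own illustration, not any vendor's firmware - the raw_read and ecc_ok hooks are hypothetical stand-ins for the NAND interface and the ECC decoder): instead of always reading with the factory-set worst-case threshold, the controller remembers which read level last worked for each block and only steps outward when the decoder complains.

```python
# Minimal sketch of an adaptive read policy - not any vendor's firmware.
# raw_read() and ecc_ok() are hypothetical hooks standing in for the real
# NAND interface and the ECC decoder.

FACTORY_DEFAULT = 0                # the preset worst-case "magic number"
RETRY_OFFSETS = [0, -1, 1, -2, 2]  # hypothetical read-threshold offsets

class AdaptiveReader:
    def __init__(self):
        self.best_offset = {}      # per-block "local knowledge"

    def read_page(self, block, page, raw_read, ecc_ok):
        start = self.best_offset.get(block, FACTORY_DEFAULT)
        # try the offset that worked last time first, then the rest
        candidates = [start] + [o for o in RETRY_OFFSETS if o != start]
        for offset in candidates:
            data = raw_read(block, page, offset)
            if ecc_ok(data):
                self.best_offset[block] = offset   # remember what worked
                return data
        raise IOError("uncorrectable page - retry offsets exhausted")
```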
But what about designers in the RAM market?
Apart from 3D, packaging and interplane connection techniques - is there any similar revolutionary thinking going on there?
Yes.
In some random reading I did recently - my attention was drawn to a collection of papers delivered in August 2013 at a conference called MemCon.
I hadn't read these before - partly because I was on vacation at the time - and when that finished I was too busy digesting the new ideas which had been channeled through the SSD industry's premier event - the Flash Memory Summit - which took place shortly after MemCon.
If you're wondering why I say "FMS is the SSD industry's premier event" - then take a look at how many times it has been mentioned in the same breath as an advanced SSD concept on this site alone.
In 2014 - the relative timings of these 2 events have been adjusted to create a gap which is months rather than days - so that those poor mortals who attend both get a chance to recuperate.
the
new thinking in RAM architecture
For me today - the most interesting of all the RAM-related papers on the MemCon site in my reading this week - was one which explored, from a modern SSD perspective, the idea of adapting the refresh rates in DRAM to leverage the difference between "worst case" and "good enough" timings. In his presentation - Memory Scaling - a Systems Architecture Perspective (pdf) - Onur Mutlu, Assistant Professor of Electrical and Computer Engineering at Carnegie Mellon University, called this "Retention-Aware DRAM Refresh".
If you only like
to learn one new thing a month - that in itself seems like a good enough place
to stop - but it's just one of the warmup steps for the intensive workout
which follows.
If you think you may be up to that challenge - take a look at Onur Mutlu's accompanying paper - Memory Scaling: A Systems Architecture Perspective (pdf) - which among other things proposes putting SSD-like thinking into the design of DRAM and moving away from the idea of treating "DRAM as a passive slave device."
This
paper is rich with ideas such as:-
- tiering within the DRAM (using 2-level latency-segmented bitlines)
- moving blocks of data within the RAM - without using the external bus -
with a new design and topology of sense amps
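Here's a rough toy model of why both ideas matter (purely illustrative numbers and a gross simplification of the mechanisms described in the paper): a near bitline segment gives hot rows a faster access, and a row copy done inside the array avoids dragging every cache line across the external bus and back.

```python
# Toy cost model - illustrative numbers, not measured silicon.

FLAT_ACCESS_NS = 50    # a single latency for every row (conventional DRAM)
NEAR_ACCESS_NS = 30    # rows placed in the near bitline segment (assumption)
FAR_ACCESS_NS = 60     # rows placed in the far bitline segment (assumption)

def tiered_access_ns(row_is_hot):
    # the controller places frequently used rows in the near segment
    return NEAR_ACCESS_NS if row_is_hot else FAR_ACCESS_NS

def in_array_copy_ns():
    # two back-to-back row activations inside the bank; no data crosses the bus
    return 2 * NEAR_ACCESS_NS

def over_bus_copy_ns(row_bytes=8192, line_bytes=64, line_transfer_ns=10):
    # conventional copy: read every cache line out to the CPU and write it back
    return 2 * (row_bytes // line_bytes) * line_transfer_ns

print("hot-row access:", tiered_access_ns(True), "ns vs flat", FLAT_ACCESS_NS, "ns")
print("row copy:", in_array_copy_ns(), "ns in-array vs", over_bus_copy_ns(), "ns over the bus")
```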
And here's a quote from
Onur Mutlu's paper which I think resonates with the enterprise SSD
experience:-
"Our past work showed that application-unaware
design of memory controllers, and in particular memory scheduling algorithms,
leads to uncontrolled interference of applications in the memory system. Such
uncontrolled interference can lead to denial of service to some applications,
low system performance, and an inability to satisfy performance requirements,
which makes the system uncontrollable and unpredictable."
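A minimal sketch of the contrast the quote is drawing (made-up slowdown numbers and a deliberately crude policy - not the scheduling algorithm from the research): an application-aware scheduler picks the next DRAM request using how badly each application is already being slowed down, so one memory-hungry neighbour can't starve everyone else.

```python
# Minimal sketch of application-aware request scheduling - not the paper's algorithm.

pending = [
    {"app": "streaming_hog", "addr": 0x1000},
    {"app": "latency_sensitive", "addr": 0x2000},
    {"app": "streaming_hog", "addr": 0x3000},
]

# hypothetical runtime estimates: slowdown = shared-system time / run-alone time
slowdown = {"streaming_hog": 1.1, "latency_sensitive": 3.7}

def next_request(pending, slowdown):
    """Serve the most-slowed-down application first, instead of pure arrival order."""
    return max(pending, key=lambda req: slowdown[req["app"]])

print(next_request(pending, slowdown))   # the latency_sensitive request goes first
```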
So -
will we see more flash-controller-like functions inside future DRAMs?
That
depends on the system cost-benefits. (And whether such schemes can be
implemented in commercially scaled semiconductor layouts - rather than the kind
of conceptual lines drawn to connect virtual blocks on a whiteboard.)
Support for embedded controllers is already an integral part of the Hybrid Memory Cube (launched in October 2011) - but Carnegie Mellon's ideas about new memory designs cross the frontier line which artificially separates different chips even within an HMC architecture.
where's
the money?
There can sometimes be money to be made from some of these blue sky academic research ideas.
Yesterday for example (April 1, 2014) - Marvell Semiconductor announced that a US court this week determined that the company should pay $1.54 billion to Carnegie Mellon University for allegedly infringing patents related to hard drive technology. ...read more
Whatever the final outcome of any appeals process - it's keeping some smart lawyers in work.
We're already seeing signs of clear fragmentation in the memory fabric market... - re A3CUBE (SSD news - January 10, 2017)
"All the calculations
which have traditionally been done related to storage capacity and performance
give you the wrong expectations of what you can expect to get from memory
systems enhanced architecture - if you can find a way to embed data packing
transparently into the memory system." |
4 shining
companies which made me stop and think in 2016 | | |
big idea #3 - retiring and retiering enterprise DRAM - which includes a new value proposition for enterprise flash SSDs (flash as RAM) and presages a rebalancing of server memories. DRAM will shrink as a percentage of the physical RAM - which will also make it easier for emerging alternative memory types to be adopted by hardware architects and by systems software too. - from What were the big SSD ideas of 2015?
a guide to data compression techniques and where to use them - for designers of SSDs and memory systems
Editor:- May 26, 2015 - Inside the SSD controller brain - the compressibility of data is one of the tools which can go into the mix of optimizing performance, endurance and competitive cost.
A recent paper - A Survey Of Architectural Approaches for Data Compression in Cache and Main Memory Systems - by Sparsh Mittal and Jeffrey S. Vetter in IEEE Transactions on Parallel and Distributed Systems - reviews the published techniques available and places their relevance in the context of real and future memory types and applications. The survey covers applications from embedded systems up to supercomputers.
In addition to being a useful resource directory of related papers the article gives you a brief description of many compression techniques, where you might use them and what benefits you might expect.
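As a flavour of what's inside, here's a hedged sketch of one family of techniques such surveys cover - base+delta style compression of a cache line (toy parameters of my own, not any particular product's format): many lines hold values clustered near a common base, so one base plus small deltas is enough, and lines which don't fit the pattern are simply stored uncompressed.

```python
# Hedged sketch of base+delta compression for a cache line - toy parameters only.

def compress_line(words, delta_bits=8):
    """Store the line as one base word plus small deltas, if the deltas fit."""
    base = words[0]
    limit = 1 << (delta_bits - 1)
    deltas = [w - base for w in words]
    if all(-limit <= d < limit for d in deltas):
        return {"compressed": True, "base": base, "deltas": deltas}
    return {"compressed": False, "raw": list(words)}   # fall back to raw storage

def decompress_line(entry):
    if entry["compressed"]:
        return [entry["base"] + d for d in entry["deltas"]]
    return entry["raw"]

line = [0x1000, 0x1004, 0x1008, 0x100C]   # e.g. pointers into the same structure
packed = compress_line(line)
assert decompress_line(packed) == line
print("compressible:", packed["compressed"], "deltas:", packed.get("deltas"))
```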
See also:- list of articles and books by Sparsh Mittal - which among other things covers caching techniques, reliability impacts and energy saving possibilities in a wide range of server architectures.
in-situ processing in flash array obviates need for big RAM in big data - MIT research findings
Editor:- July 14, 2015 - Flash SSDs with in-situ processing in regular RAM-cached servers can deliver nearly the same apps performance as fat RAM servers (but at much lower cost and lower electrical power).
That's one inference from a recent story - Cutting cost and power consumption for big data - in MIT news - which summarized a research paper at ISCA 2015 - BlueDBM: An Appliance for Big Data Analytics.
Part of the system architecture in the research included a network of FPGAs which routed data to the flash arrays and offloaded some of the application-specific processing.
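The essence of that offload, in a hedged sketch (FlashNodeSim is purely a stand-in for an FPGA-fronted flash node - not BlueDBM's interface): ship the filter to where the data lives and only send the survivors back, instead of pulling every record into host RAM first.

```python
# Illustrative sketch of near-data filtering - the class below is a stand-in,
# not BlueDBM's API.

class FlashNodeSim:
    """Simulates a flash array fronted by an FPGA that can run simple filters."""
    def __init__(self, records):
        self.records = records

    def scan(self, predicate):
        # filtering happens here, next to the flash; only matches cross the network
        return [r for r in self.records if predicate(r)]

node = FlashNodeSim(records=range(1_000_000))
hot = node.scan(lambda r: r % 100_000 == 0)   # a handful of records, not a million
print(len(hot), "records shipped to the host instead of", 1_000_000)
```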
"This is not a replacement for DRAM," said Professor Arvind, whose group at MIT performed the new work. "But there may be many applications that can take advantage of this new style of architecture. Which companies recognize: Everybody's experimenting with different aspects of flash. We're just trying to establish another point in the design space." ...read the article