

by Zsolt Kerekes, editor

I used RAM SSDs (both SCSI based and native backplane bus based) back in the 1980s to accelerate Oracle on servers but most often in ultrafast embedded real-time platforms.

And I've been writing about RAM SSDs in my enterprise buyers guides for over 20 years. So I've seen this market in many different phases of its life.

The enterprise SSD market - which used to be 100% RAM based over 10 years ago - is now overwhelmingly dominated by flash. You can read a summary of how this transformation took place in the evolution of enterprise flash - a 10 year history.

Reports from SSD vendors suggest that RAM SSDs and RAM cache in flash SSDs probably account for no more than 1 to 2% of all enterprise SSD capacity.

But RAM SSDs are not entirely dead in new architecture designs - and will remain an essential way to unbottleneck large clusters of fast flash SSDs.

For example, a shared multiported ultrafast RAM replication resource is at the heart of A3CUBE's PCIe fabric - RONNIEE Express (launched in February 2014) - which offers a new way to leverage huge populations of PCIe SSDs and servers.

Having said that - the ratio of real RAM - even inside servers - is under attack and likely to shrink due to a new type of flash SSD - memory channel SSDs - which you can read more about in the special directory here.
When flash SSDs aren't fast enough!
RAM based SSDs are the original type of solid state disk and have been around for decades.

They rely on batteries to retain data when power is lost. Most models also include internal hard disk drives to which data is saved under battery power, so that data is not lost when the battery runs down. This hybrid technology means that RAM based SSDs are more bulky than flash counterparts and RAM SSDs are unable to operate in the same range of hostile environments.

RAM based SSDs are mostly used in enterprise server speedup applications. The fastest RAM SSDs are faster than the fastest flash SSDs. But for many server speedup applications flash SSDs are fast enough.

Unlike flash SSDs, RAM based SSDs never had restrictions on the number of write cycles. That made them more popular in enterprise acceleration applications in the past. But write endurance problems may be a thing of the past for flash.

Like hard disks - RAM SSDs have symmetric read/write IOPS. That's another big difference between RAM and flash SSDs.

The fastest flash SSDs available in 2009 had achieved parity between random read and write IOPS.

But that's not how transaction based applications work. The important differentiator here is repeated write IOPS - writes which quickly revisit the same blocks. If you compare that between RAM and flash based SSDs - the RAM SSDs are up to 100x faster - even when the datasheets suggest they look the same.
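To make this concrete, here's a toy model - my own, using made-up latency figures purely for illustration, not vendor data - of why a write stream which keeps revisiting the same blocks collapses flash write IOPS while leaving a RAM SSD untouched:

```python
# Illustrative model (hypothetical figures): why repeated writes to the
# same logical block hurt flash SSDs but not RAM SSDs.
#
# Assumptions (made up for illustration only):
#   - RAM SSD: every write costs ~15 microseconds, regardless of target.
#   - Flash SSD: a write to a fresh block costs ~50 microseconds, but a
#     rewrite of a just-written block can stall behind an erase cycle
#     costing ~2,000 microseconds.

def effective_write_iops(repeat_fraction: float, device: str) -> float:
    """Modeled write IOPS given the fraction of writes which
    immediately rewrite the same block."""
    if device == "ram":
        avg_us = 15.0  # symmetric, target-independent
    elif device == "flash":
        fresh_us, rewrite_us = 50.0, 2000.0
        avg_us = (1 - repeat_fraction) * fresh_us + repeat_fraction * rewrite_us
    else:
        raise ValueError(device)
    return 1_000_000 / avg_us

# A datasheet-style workload (no repeats) makes the two look comparable;
# a transaction-style workload (80% repeats) opens a large gap.
for rf in (0.0, 0.8):
    print(rf, effective_write_iops(rf, "ram"), effective_write_iops(rf, "flash"))
```

With no repeats the two devices look comparable on paper; at 80% repeats the modeled gap is roughly 100x - the same order as the real-world difference described above.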

On the other hand - in some enterprise applications - like IPTV servers - the random write IOPS rarely repeats in the same memory space during milli-second timeframes - and in these video server apps - flash really does perform as well as RAM - and is much cheaper.

Latency figures quoted by many flash SSD products can also look very similar to those for RAM SSDs. But low random write latency doesn't mean that the data has actually hit the flash media yet - as you'll find if you try to read back the data and rewrite to the same block.

There are also some non volatile memory products such as PRAM, FRAM and RRAM which are replacing flash in industrial applications - and which already offer 1 to 1 read/write performance. But their capacity is 2 orders of magnitude too low to be of use in server applications.

RAM SSDs cost about 3x as much as SLC flash SSDs for similar capacity in FC SAN rackmount systems (based on 2011 pricing data).

The ideal choice of SSD depends on the specific server and application environment and cost / benefit analysis.

Not everyone needs or can afford the fastest SSDs. Some environments do. Others don't.

Identifying the right choice of SSD in the right place is a complex decision - which requires a high degree of SSD education and trust in the vendor.

More articles about the problems and solutions related to accelerating enterprise server apps can be seen on the SSD ASAPs page.
how fast can your SSD run backwards?
Editor:- April 20, 2012 - today published a new article which looks at the 11 key symmetries in SSD design.

SSDs are complex devices and there's a lot of mysterious behavior which isn't fully revealed by benchmarks and vendor's product datasheets and whitepapers. Underlying all the important aspects of SSD behavior are asymmetries which arise from the intrinsic technologies and architecture inside the SSD.

Which symmetries are most important in an SSD? - That depends on your application. But knowing that these symmetries exist, what they are, and judging how your selected SSD compares will give you new insights into SSD performance, cost and reliability.

There's no such thing as - the perfect SSD - existing in the market today - but the SSD symmetry list helps you to understand where any SSD in any memory technology stands relative to the ideal. And it explains why deviations from the ideal can matter.
The new article unifies all SSD architectures and technologies in a simple to understand way.
RAM vs flash SSDs decision tipping point
Editor:- in December 2010 - I interviewed Jamon Bowen, Director of Sales Engineering for Texas Memory Systems and asked him about the use of SSDs in financial applications like banks and traders - a market which he said accounts for most of their RAM SSD sales.

The company which started in RAM SSDs over 30 years ago - now sells more flash SSDs than RAM SSDs (even though the product brand for both types of SSD is confusingly called RamSan.) Bowen said that flash is 70% of their business.

Jamon Bowen said that in many bank applications RAM SSDs are actually cheaper than flash - because of the small size of the data. TMS still sell a lot of 16GB RAM SSDs.

Production bank systems are typically shared by many hosts and get a lot of write IOPS / capacity. To achieve the same reliability and latency with flash would require over provisioning which would drive the cost up.

He suggested a simple rule of thumb for intensive IOPS bank SSDs on the SAN
  • < 128GB capacity - RAM SSDs cheaper
  • 128GB to 4TB capacity - middle ground could be either - or determined by other constraints
  • > 4TB - flash SSDs cheaper
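Bowen's rule of thumb can be written down as a sketch - the capacity thresholds are from the interview above, but the function itself is just my own illustrative encoding, not TMS guidance:

```python
# A sketch of Jamon Bowen's capacity rule of thumb for IOPS-intensive
# bank SSDs on the SAN. Thresholds come from the interview; the function
# name and structure are hypothetical.

def san_ssd_rule_of_thumb(capacity_gb: float) -> str:
    if capacity_gb < 128:
        return "RAM SSD cheaper"
    if capacity_gb <= 4096:
        return "middle ground - either, decided by other constraints"
    return "flash SSD cheaper"

print(san_ssd_rule_of_thumb(16))    # the typical 16GB bank RAM SSD still sells
print(san_ssd_rule_of_thumb(8192))
```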
Jamon Bowen said that the analysis side of operations in banks is different. That tends to have much larger data sets and is more read than write intensive. In these apps - flash SSDs are usually more economic.
a classic ad from SSD market history
Curtis solid state disks
Clipper II
5 1/4" SCSI Solid State Disks
from Curtis

(ad appeared in January 2000)


the Top SSD Companies in Q1 2014

Editor:- April 30, 2014 - today published the 28th quarterly edition of the Top SSD Companies List based on metrics in Q1 2014.

Newcomers to the list included Maxta and A3CUBE and there were significant movements in the top 10 companies.

are you ready to adapt to new ways of thinking about enterprise RAM?

Editor:- April 2, 2014 - Are you ready to rethink what you think about enterprise DRAM?

The revolution in use-case-aware intelligent flash could cross over to DRAM. These ideas are brought together in the new home page blog.

A3CUBE unveils PCIe memory fabric for
10,000 node-class PCIe SSD architectures

Editor:- February 25, 2014 - PCIe SSDs can now access a true PCIe connected shared memory fabric designed by A3CUBE - which exited stealth today with the launch of their remote shared broadcast memory network - RONNIEE Express - which provides 700ns (nanoseconds) raw latency (4 byte message) and enables message throughput - via standard PCIe - which is 8x better than InfiniBand.

Editor's comments:- I spoke to the company's luminaries recently - who say they intend to make this an affordable mainstream solution.

The idea of using PCIe as a fabric to share data at low latency and with fast throughput across a set of closely located servers isn't a new one.

The world's leading PCIe chipmaker PLX started educating designers and systems architects about these possibilities a few years ago - as a way to elegantly answer a new set of scalability problems caused by the increasing adoption of PCIe SSDs. These questions include:-
  • how do you make this expensive resource available to more servers?
  • how do you enable a simple to implement failover mechanism - so that data remains accessible in the event of either a server or SSD fault?
In the last year or so - we've seen most of the leading vendors in the enterprise PCIe SSD market leverage some of the new features in PCIe chips - to implement high availability SSDs with low latency.

But although there are many ways of doing this - the details are different for each vendor.

And - until now - if you wanted to share data at PCIe-like latency across a bunch of PCIe SSDs from different companies - located in different boxes - the simplest way to do that was to bridge across Ethernet or InfiniBand. And even though it has been technically possible with standard software packages - the integration, education and support issues - compared to legacy SAN or NAS techniques - would be extremely daunting.

That's where A3CUBE comes into the picture. Their concept is to provide a box which enables any supported PCIe device to connect to any other - at low latency and with high throughput - in an architecture which scales to many thousands of nodes.

At the heart of this is a shared broadcast memory window - of 128Mbytes - which can be viewed simultaneously by any of the attached ports.

If you've ever used shared remote memory in a supercomputer style of system design at any time in the past 20 years or so - you'll know that the critical thing is how the latency grows as you add more ports. So that was one of the questions I asked.

Here's what I was told - "The latency is related to the dimension of the packet. For example: in a real application using a range of 64-256 bytes of messages the 3D torus latency doubled after 1,000 nodes. With larger packets, the number of nodes needed to double the latency becomes greater. But the real point is that the latency of a simple p2p in a standard 10GE is reached after 29,000 nodes.

"A more clear example of the scalability of the system is this. Imagine that an application experiences a max latency of 4 us with 64 nodes, now we want to scale to 1,000 nodes the max latency that the same application experience will became 4.9 us. 0.9 us of extra latency for 936 more nodes."

Editor again:- Those are very impressive examples - and demonstrate that the "scalability" is inherent in the original product design.

A3CUBE didn't want to say publicly what the costs of the nodes and the box are at this stage. But they answered the question a different way.

Their aim is to price the architecture so that it works out cheaper to run than the legacy (pre-PCIe SSD era) alternatives - and they're hoping that server oems and fast SSD oems will find A3CUBE's way of doing this PCIe fabric scalability stuff - is the ideal way they want to go.

There's a lot more we have to learn - and a lot of testing to be done and software to be written - but for users whose nightmare questions have been - how do I easily scale up to a 10,000 PCIe SSD resource - and when I've got it - how can I simplify changing suppliers? - there's a new safety net being woven. Here are the essential details (pdf).

McObject shows in-memory database resilience in NVDIMM

Editor:- October 9, 2013 - what happens if you pull out the power plug during intensive in-memory database transactions? For those who don't want to rely on batteries - but who also need ultimate speed - this is more than just an academic question.

Recently on these pages I've been talking a lot about a new type of memory channel SSDs which are hoping to break into the application space owned by PCIe SSDs. But another solution in this area has always been DRAM with power fail features which save data to flash in the event of sudden power loss. (The only disadvantages being that the memory density and cost are constrained by the nature of DRAM.)

McObject (whose products include in-memory database software) yesterday published the results of benchmarks using AGIGA Tech's NVDIMM in which they did some unthinkable things which you would never wish to try out for yourself - like rebooting the server while it was running... The result? Everything was OK.

"The idea that there must be a tradeoff between performance and persistence/durability has become so ingrained in the database field that it is rarely questioned. This test shows that mission critical applications needn't accept latency as the price for recoverability. Developers working in a variety of application categories will view this as a breakthrough" said Steve Graves, CEO McObject.

Here's a quote from the whitepaper - Database Persistence, Without The Performance Penalty (pdf) - "In these tests eXtremeDB's inserts and updates with AGIGA's NVDIMM for main memory storage were 2x as fast as using the same IMDS with transaction logging, and approximately 5x faster for database updates (and this with the transaction log stored on RAM-disk, a solution that is (even) faster than storing the log on an SSD). The possibility of gaining so much speed while giving up nothing in terms of data durability or recoverability makes the IMDS with NVDIMM combination impossible to ignore in many application categories, including capital markets, telecom/networking, aerospace and industrial systems."

Editor's comments:- last year McObject published a paper showing the benefits of using PCIe SSDs for the transaction log too. They seem to have all angles covered for mission critical ultrafast databases that can be squeezed into memory.


Editor:- August 8, 2013 - SMART Storage Systems today announced it has begun sampling the first memory channel SSDs compatible with the interface and reference architecture created by Diablo Technologies.

SMART's first generation enterprise ULLtraDIMM SSD (ULL = ultra-low latency) can be deployed via any existing DIMM slot and provides 200GB or 400GB of enterprise class flash SSD memory with up to 1GB/s and 760MB/s of sustained read/write performance, with 5 microseconds write latency. Throughput, IOPS and memory capacity all scale with the number of ULLtraDIMMs deployed in each server.


Editor's comments:- With the current design - only one DIMM slot in each server has to be reserved for conventional DRAM. Apart from that constraint any DIMM slot can be used for either flash or DRAM as deemed necessary for the application.

For more about the potential of this technology, the thinking behind it and the competitive landscape relative to PCIe SSDs etc see my earlier articles on the Memory Channel SSDs page.

in memory database even better with FIO's flash

Editor:- November 20, 2012 - McObject recently released new benchmark results which indicate that the in-memory database company is not so unfriendly to flash SSDs as you may have thought from reading earlier company positioning papers.

It seems that a software product - which was originally designed for the DRAM-HDD world - is a good fit in the flash SSD world too - if you have the right scale of data and the right SSD.

Micron sources power holdup technology for NVDIMMs

Editor:- November 14, 2012 - Micron has signed an agreement with AgigA Tech to collaborate to develop and offer nonvolatile DIMM (NVDIMM) products using AgigA's PowerGEM (sudden power loss controller and holdup modules).

STEC discloses RAM vs flash SSD revenues

Editor:- November 7, 2012 - among other things STEC revealed yesterday in its earnings conference call that RAM SSDs were approximately 4% of its revenues in the recent quarter.

AMD will rebrand Dataram's RAMDisk software

Editor:- September 6, 2012 - Dataram today announced it will develop a version of its RAMDisk software which will be rebranded by AMD in Q4 under the name Radeon RAMDisk, targeting Windows gaming enthusiasts seeking (up to 5x) faster performance when used with enough memory. See also:- SSD software

Kaminario recommends you read SSD Symmetries article

Editor:- June 15, 2012 - I accidentally discovered today that earlier this week Gareth Taube, VP of Marketing at Kaminario published a new blog in which he recommends my article about SSD Symmetries.

Gareth says "Flexibility, such as being able to integrate multiple memory technologies into a single box (like Kaminario's K2-H), is going to be increasingly important to customers who want efficiency and customization options. This is especially true because there are many memory innovations coming on the near horizon."

Editor's comments:- when I was writing the symmetry article one of the things I had in mind to do was to put more examples in it. Then I realized that having lots of examples would simply make the article unreadable.

One of the examples I was going to use for good roadmap symmetry (but then forgot to put anywhere) was in fact Kaminario - because they can leverage off whatever Fusion-io does with flash (or other nv memory) and furthermore Kaminario can also leverage off whatever server makers do with CPUs and RAM. Roadmap symmetry is a long term consideration - important for big users who don't like supplier churn and important for VCs and investors too.

...Later:- I'm glad I wrote that bit about "roadmap symmetry" - because by a spooky coincidence - 3 days later we got the news that Kaminario's investors still love what they do.

June 18, 2012 - Kaminario today announced it has secured a $25 million series D round of funding, bringing its total funding to $65 million.

sharpen your SSD R/W grid latency weapons to 5µs

Editor:- May 9, 2012 - Kove has published some new record latency numbers for its fast RAM SSD - the XPD L2 - which has achieved continuous and sustained 5 microsecond random storage read and write when connected via 40Gb/s InfiniBand adapters from Mellanox.

Kove's system has good R/W symmetry which the company says - is not subject to periodic performance jitter or "periodicity". Even under constantly changing disk utilization, it delivers uniform, predictable, and deterministic performance.

"The Kove XPD L2... allows high performance applications to use storage as a weapon rather than accept it as a handicap," said Kove's CEO, John Overton. "We are pleased to set a new bar height for storage latency."

STEC's RAM SSDs percentage?

Editor:- February 28, 2012 - "Our DRAM-related products accounted for 3% of revenue" said Raymond Cook, CFO, STEC - in the company's Q4 2011 - earnings conference.

Fusion-io's 1 billion IOPS demo narrows latency gap between flash and RAM SSDs

Editor:- January 6, 2012 - in a historic demo yesterday showing the capabilities of its latency reducing Auto Commit Memory (ACM) extension Fusion-io announced it had exceeded 1 billion IOPS (64 byte data packets) in a configuration which used 8 HP servers each configured with 8x ioDrive2 Duo PCIe SSDs.

Editor's comments:- although we're used to thinking about SSD IOPS in terms of bigger packets - such as 4kB - instead of the very small packet size in this demo - IOPS is simply a convenient and not always reliable way of comparing the relative performance of storage products.

In real life - users don't have a choice of what size the R/W operations are which take place in their apps. They occur at all sizes (mostly smaller than 4kB) and when these R/W operations take place in traditional storage architecture systems - which internally impose their own restrictions on the minimum size of atomic data packets - that's where latencies and performance become discontinuous compared to the value of the data update due to amplification and packetization effects.

In my view - the important thing about this demo - is that the same PCIe SSD product which can perform useful work as a storage device - can also be deployed as a super scaler memory device - when it is running the appropriate software.

The difference is that with traditional storage software - you might expect that a 64x PCIe SSD system might hit 64M IOPS or some similar figure (regardless of the small size of the data packet). Instead the demo shows that apps developers can get 16x more performance in small R/W transactions if they are willing to invest the effort to make their apps work with FIO's new APIs.
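The arithmetic behind that 16x can be checked quickly - note that the ~1M IOPS per-drive baseline through a conventional storage path is my own illustrative assumption, not a Fusion-io figure:

```python
# Quick arithmetic check of the editor's 16x point. The 1M IOPS
# per-drive baseline for a traditional storage stack is a hypothetical
# round number chosen for illustration.

drives = 8 * 8                            # 8 HP servers x 8 ioDrive2 Duo PCIe SSDs
storage_path_iops = drives * 1_000_000    # what a conventional storage path might hit
demo_iops = 1_000_000_000                 # the ACM demo result (64 byte packets)

speedup = demo_iops / storage_path_iops
print(speedup)  # ~16x from using the memory-style API instead
```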

It's that order of magnitude difference which is the attraction for some markets - because it closes the gap in performance between RAM SSDs and flash SSDs. And when you can run apps 10x faster than other flash competitors at the same price - or support 10x bigger data sets than competitors using RAM SSDs - that creates new markets. See also:- Record Breaking Storage

new article on RAM SSDs

Editor:- April 22, 2011 - Long Live RAM SSD is a new article by Woody Hutsell which reflects on how the RAM SSD market - which many observers once believed would be killed by flash - has got a great future.

the Fastest SSDs
the Top 10 SSD Companies
RAM Cache Ratios in flash SSDs
Why I Tire of - "Tier Zero Storage"
RAM versus Flash SSDs - which is Best?
the new way of looking at Enterprise SSDs
Introducing the concept of RAMClouds (pdf)
when the SSD brand sends the wrong signal - RamSan and Dataram
Surviving SSD sudden power loss
Why should you care what happens in an SSD when the power goes down?

This important design feature - which barely rates a mention in most SSD datasheets and press releases - has a strong impact on SSD data integrity and operational reliability.

This article will help you understand why some SSDs which (work perfectly well in one type of application) might fail in others... even when the changes in the operational environment appear to be negligible.
SSD power down architectures and characteristics - if you thought endurance was the end of the SSD reliability story - think again.
RAM based SSDs
sometimes you just can't wait
RAM based SSD makers list
ACARD Technology
Avere Systems
Density Dynamics
Dynamic Solutions International
Real Ram Disk
Solid Access Technologies
Solid Data Systems
Texas Memory Systems
Third I/O
Violin Memory
There used to be many other RAM SSD companies at earlier stages in SSD market history - for example Cenatek, Imperial Technology and Platypus Technology - which are no longer in business.
"Across the whole enterprise - a single petabyte of SSD with new software could replace 10 to 50 petabytes of raw legacy HDD storage and still enable all the apps to run much faster..."
the enterprise SSD software event horizon
the Problem with Write IOPS in flash SSDs
Random "write IOPS" in many of the fastest flash SSDs are now similar to "read IOPS" - implying a performance symmetry which was once believed to be impossible.

So why are flash SSD IOPS such a poor predictor of application performance? And why are users still buying RAM SSDs which cost an order of magnitude more than SLC? (let alone MLC) - even when the IOPS specs look superficially similar?

This article tells you why the specs got faster - but the applications didn't.
And why competing SSDs with apparently identical benchmark results can perform completely differently.

STORAGEsearch is published by ACSL