

by Zsolt Kerekes, editor -

I used RAM SSDs (both SCSI based and native backplane bus based) back in the 1980s to accelerate Oracle on servers but most often in ultrafast embedded real-time platforms.

And I've been writing about RAM SSDs in my enterprise buyers guides for over 20 years. So I've seen this market in many different phases of its life.

The enterprise SSD market - which used to be 100% RAM based over 10 years ago - is now overwhelmingly dominated by flash. You can read a summary of how this transformation took place in the evolution of enterprise flash - a 10 year history.

Reports from SSD vendors suggest that RAM SSDs and RAM cache in flash SSDs probably account for no more than 1 to 2% of all enterprise SSD capacity.

But RAM SSDs are not entirely dead in new architecture designs - and will remain an essential way to unbottleneck large clusters of fast flash SSDs.

For example, a shared multiported ultrafast RAM replication resource is at the heart of A3CUBE's PCIe fabric - RONNIEE Express (launched in February 2014) - which offers a new way to leverage huge populations of PCIe SSDs and servers.

Having said that - the ratio of real RAM - even inside servers - is under attack and likely to shrink due to a new type of flash SSD - memory channel SSDs - which you can read more about in the special directory here.
When flash SSDs aren't fast enough!
RAM based SSDs are the original type of solid state disk and have been around for decades.

They rely on batteries to retain data when power is lost. Most models also include internal hard disk drives to which data is saved under battery power, so that data is not lost when the battery runs down. This hybrid technology means that RAM based SSDs are more bulky than flash counterparts and RAM SSDs are unable to operate in the same range of hostile environments.

RAM based SSDs are mostly used in enterprise server speedup applications. The fastest RAM SSDs are faster than the fastest flash SSDs. But for many server speedup applications flash SSDs are fast enough.

Unlike flash SSDs, RAM based SSDs never had restrictions on the number of write cycles. That made them more popular in enterprise acceleration applications in the past. But write endurance problems may be a thing of the past for flash.

Like hard disks - RAM SSDs have symmetric read/write IOPS. That's another big difference between RAM and flash SSDs.

The fastest flash SSDs available in 2009 had achieved parity between random read and write IOPS.

But that's not how transaction based applications work. The important differentiator here is repeated write IOPS - writes that hit the same blocks again and again. If you compare that between RAM and flash based SSDs - RAM SSDs are up to 100x faster - even when the datasheets suggest they look the same.

On the other hand - in some enterprise applications - like IPTV servers - the random write IOPS rarely repeats in the same memory space during millisecond timeframes - and in these video server apps flash really does perform as well as RAM - and is much cheaper.

Latency figures quoted by many flash SSD products can also look very similar to those for RAM SSDs. But low random write latency doesn't mean that the data has actually hit the flash media yet - as you'll find if you try to read back the data and rewrite to the same block.
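One way to see this asymmetry for yourself is to time synchronous rewrites of the same block. The sketch below is only illustrative - the file path, block size and iteration count are our own arbitrary choices, and on a cached filesystem you would need O_DIRECT and a raw device to measure the media itself:

```python
import os, time

BLOCK = 4096

def timed_rewrites(path, count=100):
    """Return (best, worst) latency in seconds for repeated
    synchronous 4KB writes to the same offset."""
    buf = os.urandom(BLOCK)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    try:
        latencies = []
        for _ in range(count):
            t0 = time.perf_counter()
            os.pwrite(fd, buf, 0)   # hit the same logical block again and again
            latencies.append(time.perf_counter() - t0)
        return min(latencies), max(latencies)
    finally:
        os.close(fd)

best, worst = timed_rewrites("rewrite_test.bin")
print(f"best {best*1e6:.0f}µs  worst {worst*1e6:.0f}µs")
```

On a RAM SSD best and worst stay close together; on many flash SSDs the worst case stretches out as the controller juggles erase cycles behind the same logical block.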

There are also some non volatile memory products such as PRAM, FRAM and RRAM which are replacing flash in industrial applications - and which already offer 1 to 1 read/write performance. But their capacity is 2 orders of magnitude too low to be of use in server applications.

RAM SSDs cost about 3x as much as SLC flash SSDs for similar capacity in FC SAN rackmount systems (based on 2011 pricing data).

The ideal choice of SSD depends on the specific server and application environment and cost / benefit analysis.

Not everyone needs or can afford the fastest SSDs. Some environments do. Others don't.

Identifying the right choice of SSD in the right place is a complex decision - which requires a high degree of SSD education and trust in the vendor.

More articles about the problems and solutions related to accelerating enterprise server apps can be seen on the SSD ASAPs page.
how fast can your SSD run backwards?
Editor:- April 20, 2012 - today we published a new article which looks at the 11 key symmetries in SSD design.

SSDs are complex devices and there's a lot of mysterious behavior which isn't fully revealed by benchmarks and vendor's product datasheets and whitepapers. Underlying all the important aspects of SSD behavior are asymmetries which arise from the intrinsic technologies and architecture inside the SSD.

Which symmetries are most important in an SSD? - That depends on your application. But knowing that these symmetries exist, what they are, and judging how your selected SSD compares will give you new insights into SSD performance, cost and reliability.

There's no such thing as - the perfect SSD - existing in the market today - but the SSD symmetry list helps you to understand where any SSD in any memory technology stands relative to the ideal. And it explains why deviations from the ideal can matter.
The new article unifies all SSD architectures and technologies in a simple to understand way. ...read the article
RAM vs flash SSDs decision tipping point
Editor:- in December 2010 - I interviewed Jamon Bowen, Director of Sales Engineering for Texas Memory Systems and asked him about the use of SSDs in financial applications like banks and traders - a market which he said accounts for most of their RAM SSD sales.

The company which started in RAM SSDs over 30 years ago - now sells more flash SSDs than RAM SSDs (even though the product brand for both types of SSD is confusingly called RamSan.) Bowen said that flash is 70% of their business.

Jamon Bowen said that in many bank applications RAM SSDs are actually cheaper than flash - because of the small size of the data. TMS still sell a lot of 16GB RAM SSDs.

Production bank systems are typically shared by many hosts and get a lot of write IOPS / capacity. To achieve the same reliability and latency with flash would require over provisioning which would drive the cost up.

He suggested a simple rule of thumb for intensive IOPS bank SSDs on the SAN:
  • < 128GB capacity - RAM SSDs cheaper
  • 128GB to 4TB capacity - middle ground could be either - or determined by other constraints
  • > 4TB - flash SSDs cheaper
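The rule of thumb is simple enough to write down directly. The thresholds below come from Bowen's interview; the function itself is just our way of expressing them:

```python
def cheaper_ssd_type(capacity_gb):
    """Which SSD type Bowen's rule of thumb favours for an
    IOPS intensive bank workload on the SAN."""
    if capacity_gb < 128:
        return "RAM"
    if capacity_gb <= 4096:        # 128GB to 4TB - the middle ground
        return "either"
    return "flash"

print(cheaper_ssd_type(16))        # TMS's 16GB RAM SSDs sit well inside the RAM zone
```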
Jamon Bowen said that the analysis side of operations in banks is different. That tends to have much larger data sets and is more read than write intensive. In these apps - flash SSDs are usually more economic.
a classic ad from SSD market history

Clipper II - 5 1/4" SCSI Solid State Disks - from Curtis

(ad appeared in January 2000)

Everspin enters NVMe PCIe SSD market

Editor:- March 8, 2017 - Everspin today announced it is sampling its first SSD product - an HHHL NVMe PCIe SSD with up to 4GB of ST-MRAM based on the company's own 256Mb DDR-3 memory.

The new nvNITRO ES2GB has end to end latency of 6µs and supports 2 access modes:- NVMe SSD and memory mapped IO (MMIO).

Everspin says that products for the M.2 and U.2 markets will become available later this year. So too will higher capacity models using the company's next generation Gb DDR-4 ST-MRAM.
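The MMIO access mode means the card's persistent memory can be mapped into an application's address space and touched with plain loads and stores, bypassing the block I/O stack. A minimal user-space sketch of that programming model (the device node name is hypothetical and real deployments would go through the vendor's driver; any pre-sized file behaves the same way for demonstration):

```python
import mmap, os

def mmio_write_read(path, data, size=4096):
    """Map a device (or file) and access it with plain loads and
    stores - the essence of a memory mapped IO access mode."""
    fd = os.open(path, os.O_RDWR)
    try:
        m = mmap.mmap(fd, size)
        try:
            m[:len(data)] = data            # store directly into the mapping
            return bytes(m[:len(data)])     # load it straight back
        finally:
            m.close()
    finally:
        os.close(fd)

# Against real hardware the path would be the driver's device node
# (e.g. a hypothetical "/dev/nvnitro0").
```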

Editor's comments:- Yes - you read the capacity right. That's 4GB not 4TB and certainly not 24TB.

So why would you want a PCIe SSD which offers similar capacity to a flash backed RAM SSD from DDRdrive in 2009? And the new ST-MRAM SSD card also offers worse latency, performance and capacity than a typical hybrid NVDIMM using flash backed DRAM today.

What's the application gap?

The answer I came up with is fast boot time.

If you want a small amount of low latency, randomly accessible persistent memory then ST-MRAM has the advantage (over flash backed DRAM such as you can get from Netlist etc) that the data which was saved on power down doesn't have to be restored from flash into the DRAM - because it's always there.

The boot time advantage of ST-MRAM grows with capacity. And depending on the memory architecture can be on the order of tens of seconds.
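To put rough numbers on that (the capacities and the 500MB/s flash read rate below are our own illustrative assumptions, not vendor data): a flash backed NVDIMM must stream its whole DRAM contents back from flash on power up, so restore time scales with capacity, while ST-MRAM needs no restore at all:

```python
def restore_seconds(capacity_gb, flash_read_gb_per_s=0.5):
    """Rough time a flash backed NVDIMM needs to reload its DRAM on
    power up. ST-MRAM skips this - the data is simply already there."""
    return capacity_gb / flash_read_gb_per_s

for cap_gb in (8, 32, 128):
    print(f"{cap_gb}GB module: ~{restore_seconds(cap_gb):.0f}s restore vs 0s for ST-MRAM")
```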

So - if you have a system whose reliability and accessibility and performance depends on healing and recovery processes which take into account the boot times of its persistent memory subsystems - then you either have the choice of battery backup (which occupies a large space and maintenance footprint) or a native NVRAM.

The new cards will make it easier for software developers to test persistent RAM tradeoffs in new equipment designs. They will also provide an easy way to evaluate the data integrity of the new memories.

Rambus and Xilinx partner on FPGA in DRAM array technology

Editor:- October 4, 2016 - Rambus recently announced a license agreement with Xilinx that covers Rambus' patented memory controller, SerDes and security technologies.

Rambus is also exploring the use of Xilinx FPGAs in its Smart Data Acceleration research program. The SDA - powered by an FPGA paired with 24 DIMMs - offers high DRAM memory densities and has potential uses as a CPU offload agent (in-situ memory computing).

can memory do more?

Editor:- June 17, 2016 - Should we set higher expectations for memory systems?

That's the question asked in my new blog.

All the marketing noise coming from the DIMM wars market (flash as RAM and Optane etc) obscures some important underlying strategic and philosophical questions about the future of SSD.

When all storage is memory - are there still design techniques which can push the boundaries of what we assume memory can do?

Can we think of software as a heat pump to manage the entropy of memory arrays? (Nature of the memory - not just the heat of its data.)

Should we be asking more from memory systems? ...read the blog

worst case response times in DRAM arrays

Editor:- March 8, 2016 - Do you know what the worst-case real-time response of your electronic system is?

Yes - I'm sure you do. That's why you're looking at this RAM SSDs page.

One of the interesting trends in the computer market in the past 20 years is that although general purpose enterprise servers have got better in terms of throughput - most of them are now worse when it comes to latency.

It's easy to blame the processor designers and the storage systems and those well known problems helped the SSD accelerator market grow to the level where things like PCIe SSDs and hybrid DIMMs have become part of the standard server toolset. But what about the memory?

Server memory based on DRAM isn't as good as it used to be. The details are documented in a set of papers in my new home page blog - latency loving reasons for fading out DRAM in the virtual memory slider mix.

If you're designing fast response computer systems with large amounts of data - then DRAM chips may become a smaller part of your external component mix in the future - especially after we get new types of processors being architected with SSD-aware tiering. But that's another story. ...read the article

Microsemi wins bid to acquire PMC

Editor:- November 25, 2015 - Microsemi today announced a definitive agreement to acquire PMC in a transaction valued at approximately $2.5 billion which represents a 77% premium to the closing price of PMC's stock as of Sept. 30, 2015.

"We are pleased PMC has accepted our compelling strategic offer, which clearly benefits shareholders of both Microsemi and PMC. We can now shift our focus to realizing the significant synergies identified during our comprehensive analysis," said James J. Peterson, Microsemi's chairman and CEO. "As we have previously stated, this acquisition will provide Microsemi with a leading position in high performance and scalable storage solutions, while also adding a complementary portfolio of high-value communications products."

Editor's comments:- 6 weeks ago it seemed that PMC would be acquired by a different company - Skyworks - which had offered to buy PMC for $2 billion. But within 10 days of that news - Microsemi announced an unsolicited offer which appeared at the time to be marginally higher.

The final deal (today) valued PMC at $500 million more than the original offer from Skyworks - which has never shipped a line of SSDs as far as I know. So in that respect - Microsemi - is better placed to understand and leverage PMC's strategic product lines.

sustainable roles for fast RAM SSDs
amid new memory architectures and SSD DIMM wars

Editor:- August 4, 2015 - Where do ultrafast RAM SSDs and companies like Kove fit in the market today?

That's a question I put to John Overton, CEO - Kove.

You can see what he said about applications and the relative positioning of alternative big data memory types and architecture in the article here.

A3CUBE shows shape of R/W in remote shared memory fabric

Editor:- April 14, 2015 - There was a disproportionate amount of reader interest last year in A3CUBE - which was one of those rare companies which entered the Top SSD Companies list within a single quarter of exiting stealth mode or launching their first product. At that time they hadn't shipped any production products so we had to make some guesses about how the architecture would work with different R/W demands.

With any remote memory caching system there are always some types of R/W activities which work better than others - and now we can get an idea of the headroom in A3CUBE's remote PCIe shared memory from a new slidedeck released by the company (Fortissimo Foundation - all NVMe solution - some benchmarks) which is based on a 4 server node configuration.

In this 13 slide presentation - the most interesting for me was #12 - which shows random writes. A3CUBE says "This test measures the performance of writing a file with accesses being made to random locations within the file."

The throughput range is typically 700MB/s to 8GB/s. The low end is more impressive than it first appears - when you consider that it's a 4KB record changed within a remote 64KB file. ...see the presentation

HGST rekindles concept of a PCM based PCIe SSD

Editor:- August 4, 2014 - HGST today announced it will demonstrate a PCM PCIe SSD concept at the Flash Memory Summit. HGST says the demonstration model delivers 3 million random read IOPS (512 Bytes) and a random read access latency of 1.5 microseconds.

Editor's comments:- Micron funded the world's first enterprise PCM PCIe SSD demo 3 years ago (in June 2011). The storage density of PCM resulted in an SSD which had pitifully low capacity compared to flash memory at that time - and earlier this year (in January 2014) there were reports that Micron had temporarily abandoned this idea.

Is HGST really going to wander into memory space where even the memory makers don't want to go? Or is this just a market signal that HGST isn't just looking at short term SSD product concepts?

SanDisk extends the reach of its SSD software platform

Editor:- July 8, 2014 - 2 weeks ago SanDisk announced a new enterprise software product - ZetaScale - designed to support large in-memory intensive applications.

I delayed writing about it at the time - until I learned more. But now I think it could be one of the most significant SSD software products launched in 2014 - because of the freedom it will give big memory customers (in the next 2-3 years) about how they navigate their tactical choices of populating their apps servers with low latency flash SSD hardware.

what is ZetaScale?

SanDisk says - "ZetaScale software's highly parallelized code supports high throughput for flash I/O, even for small objects, and optimizes the use of CPU cores, DRAM, and flash to maximize application throughput. Applications that have been flash-optimized through the use of ZetaScale can achieve performance levels close to in-memory DRAM performance."

ZetaScale is SSD agnostic. "ZetaScale is compatible with any brand of PCIe, SAS, SATA, DIMM or NVMe connected flash storage device, providing customers the ability to choose, avoiding hardware vendor lock-in."

I was curious to see how this new product - which is a toolkit for deploying flash with tiering to DRAM as a new memory type - fitted in with other products from SanDisk and from other vendors which also operate in this "flash as a big memory alternative to DRAM" application space.

So I asked SanDisk some questions - and got some interesting answers.
  • Where does the ZetaScale product come from?

    SanDisk - ZetaScale builds upon our Schooner acquisition technology for additional use cases and flash deployment models.

    ZetaScale allows any developer to better tune their applications for flash-based environments, no matter which vendor's hardware or interface is being leveraged. Thus, ZetaScale represents a major step forward in our vision of the flash-transformed data center—empowering software developers to scale and enhance their applications to meet today's big data and real-time analytics demands, while lowering TCO.
  • How much commonality is there between ZetaScale and FlashSoft product offerings?

    ZetaScale and FlashSoft software are complementary and orthogonal.

    FlashSoft provides direct-attached flash-based caching for NAS and SAN devices, with the goal of improving performance for unmodified applications running on a server.

    ZetaScale software provides a flash and multi-core optimization library that applications can integrate to allow them to achieve 3x the performance improvement from flash alone.

    Both ZetaScale and FlashSoft software provide their benefits in bare metal and virtualized environments.
  • Does ZetaScale support ULLtraDIMM?

    Yes. The software is compatible with any brand of PCIe, SAS, SATA, DIMM or NVMe connected flash device, enabling users to avoid vendor lock-in. However, the software does not get embedded into any SSD.
  • How would ZetaScale fit into a future SanDisk product line which also includes Fusion-io?

    SanDisk cannot comment on open M&A activity. As usual, all planning surrounding the product portfolio and roadmap will begin following the close of the acquisition.
Editor's comments:- overall I'd have to rate SanDisk's ZetaScale as one of the most significant SSD software products launched in 2014.

From a technical point of view - it's a toolkit which will enable architects of SSD apps servers with very large in-memory databases to decouple themselves from deep dives into specific low latency SSD products. Instead of gambling on whether they should exploit particular features which come with particular low latency SSDs - they can instead use ZetaScale as the lowest level of flash which their apps talk to. And that will change markets.

And although SanDisk didn't want to comment on how this would be positioned against Fusion-io's VSL - it's undeniable that in some applications it does compete today.

And I wouldn't be surprised if - a year after the acquisition (if it goes ahead) - ZetaScale turns out to be useful as a way of introducing new customers to the ioMemory hardware environment - without those customers having to make a hard commitment to the rest of Fusion-io's software.

And - looking at the memory channel SSD market - it also means that SanDisk software might be a safer standard for future customers of any DDR4 or HMC SSDs which might emerge from competitor Micron - which, unlike SanDisk, hasn't yet demonstrated any strong ambitions in the SSD software platform market.

is there a market for I'M Intelligent Memory inside SSDs?

Editor:- June 4, 2014 - Are there applications in the SSD market for DRAM chips which integrate ECC correction inside the RAM chip - and which plug into standard JEDEC sockets?

That was the question put to me this afternoon by Thorsten Wronski - whose company MEMPHIS Electronic AG distributes I'M Intelligent Memory in Europe.

Thorsten told me he's had a good reaction from the SSD companies he's spoken to - which is why he phoned.

But in a long conversation about the economics and architectures of end to end error correction in SSDs and the different ratios of RAM cache to flash in SSDs - I told him that my initial reaction was he should look at embedded applications - which depend on the reliability of a single SSD - rather than enterprise systems, in which the economic analysis for arrays points to a system wide solution rather than a point product fix.

The interesting thing is he said he's done tests on the new I'M memory as drop in replacements for unprotected memory designs - in which he accelerated the likely incidence of error events by increasing the interval between refreshes and raising the temperature.

Here's what he said.

"We assembled a standard 1GB unbuffered DIMM with 8 chips of 1Gbit ECC DRAM. Then we put this into a test board and ran RSTPro (a very strong memory test software). No error found.

Next we put the whole board into a temperature chamber at 95°C, which normally requires the refresh rate to be doubled (32ms instead of 64ms). No error found.

Finally we wrote software to change the refresh-register of the CPU on the board, so we were able to set higher values. The highest possible was 750ms, so the DRAM almost did not get any more refreshes. Still it continued working in RSTPro without a single error for 24 hours.

We tried the same with Samsung and Hynix modules, but none of them came even close to those results. Most failed at refresh-rates of 150 to 200ms, which is not bad indeed. Many more tests will follow."

Editor's comments:- the reason I mention this - is because adapting the refresh rate was one of the things mentioned in my recent blog - Are you ready to rethink RAM?

However - most of the leading SSDs in industrial markets don't have RAM caches for other reasons (to reduce physical space, power consumption and hold-up time, or because they don't need the performance). So I told Thorsten I don't see an industry wide demand inside SSDs. But some of you might already have thought of applications.

See also:- I'M ECC DRAM product brief (pdf)

are you ready to adapt to new ways of thinking about enterprise RAM?

Editor:- April 2, 2014 - Are you ready to rethink what you think about enterprise DRAM?

The revolution in use-case-aware intelligent flash could cross over to DRAM. These ideas are brought together in the new home page blog. ...read the article

A3CUBE unveils PCIe memory fabric for
10,000 node-class PCIe SSD architectures

Editor:- February 25, 2014 - PCIe SSDs can now access a true PCIe connected shared memory fabric designed by A3CUBE - which exited stealth today with the launch of their remote shared broadcast memory network - RONNIEE Express - which provides 700ns (nanoseconds) raw latency (4 byte message) and enables message throughput - via standard PCIe - which is 8x better than InfiniBand.

Editor's comments:- I spoke to the company's luminaries recently - who say they intend to make this an affordable mainstream solution.

The idea of using PCIe as a fabric to share data at low latency and with fast throughput across a set of closely located servers isn't a new one.

The world's leading PCIe chipmaker PLX started educating designers and systems architects about these possibilities a few years ago - as a way to elegantly answer a new set of scalability problems caused by the increasing adoption of PCIe SSDs. These questions include:-
  • how do you make this expensive resource available to more servers?
  • how do you enable a simple to implement failover mechanism - so that data remains accessible in the event of either a server or SSD fault?
In the last year or so - we've seen most of the leading vendors in the enterprise PCIe SSD market leverage some of the new features in PCIe chips to implement high availability SSDs with low latency.

But although there are many ways of doing this - the details are different for each vendor.

And - until now - if you wanted to share data at PCIe-like latency across a bunch of PCIe SSDs from different companies - located in different boxes - the simplest way to do that was to bridge across Ethernet or InfiniBand. And even though that has been technically possible with standard software packages - the integration, education and support issues - compared to legacy SAN or NAS techniques - would be extremely daunting.

That's where A3CUBE comes into the picture. Their concept is to provide a box which enables any supported PCIe device to connect to any other - at low latency and with high throughput - in an architecture which scales to many thousands of nodes.

At the heart of this is a shared broadcast memory window - of 128Mbytes - which can be viewed simultaneously by any of the attached ports.
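To each attached node that broadcast window looks like an ordinary mapped memory region: one node stores, the others load. As a local analogy only (RONNIEE's window is shared across machines over PCIe, and the segment name below is our own invention), POSIX shared memory shows the same programming model on a single host:

```python
from multiprocessing import shared_memory

WINDOW = 1 << 20   # small demo size - the real broadcast window is 128MB

seg = shared_memory.SharedMemory(create=True, size=WINDOW, name="ronniee_demo")
try:
    seg.buf[0:4] = b"ping"                               # one "node" stores...
    peer = shared_memory.SharedMemory(name="ronniee_demo")
    data = bytes(peer.buf[0:4])                          # ...a peer loads it back
    peer.close()
finally:
    seg.close()
    seg.unlink()

print(data)   # b'ping'
```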

If you've ever used shared remote memory in a supercomputer style of system design at any time in the past 20 years or so - you'll know that the critical thing is how the latency grows as you add more ports. So that was one of the questions I asked.

Here's what I was told - "The latency is related to the dimension of the packet. For example: in a real application using a range of 64-256 byte messages the 3D torus latency doubled after 1,000 nodes. With larger packets, the number of nodes needed to double the latency becomes greater. But the real point is that the latency of a simple p2p in a standard 10GE is reached after 29,000 nodes.

"A more clear example of the scalability of the system is this. Imagine that an application experiences a max latency of 4 us with 64 nodes, now we want to scale to 1,000 nodes the max latency that the same application experience will became 4.9 us. 0.9 us of extra latency for 936 more nodes."

Editor again:- Those are very impressive examples - and they demonstrate that the "scalability" is inherent in the original product design.
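The two quoted data points can be turned into a simple extrapolation. The linear fit below is our own assumption - A3CUBE quoted only the two points, not the scaling law:

```python
BASE_NODES, BASE_US = 64, 4.0      # "4µs max latency with 64 nodes"
REF_NODES, REF_US = 1000, 4.9      # "4.9µs at 1,000 nodes"

PER_NODE_US = (REF_US - BASE_US) / (REF_NODES - BASE_NODES)   # ~0.00096µs/node

def est_latency_us(nodes):
    """Extrapolated max application latency at a given node count."""
    return BASE_US + PER_NODE_US * (nodes - BASE_NODES)

print(f"10,000 nodes -> ~{est_latency_us(10_000):.1f}µs")
```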

A3CUBE didn't want to say publicly what the costs of the nodes and the box are at this stage. But they answered the question a different way.

Their aim is to price the architecture so that it works out cheaper to run than the legacy (pre-PCIe SSD era) alternatives - and they're hoping that server oems and fast SSD oems will find A3CUBE's way of doing this PCIe fabric scalability stuff - is the ideal way they want to go.

There's a lot more we have to learn - and a lot of testing to be done and software to be written - but for users whose nightmare questions have been - how do I easily scale up to a 10,000 PCIe SSD resource - and when I've got it - how can I simplify changing suppliers? - there's a new safety net being woven. Here are the essential details (pdf).

3.5" RAM SSDs - from OCZ?

Editor:- December 13, 2013 - It's been ages since I last heard any SSD makers talking about launching new 3.5" RAM SSDs - but the idea has resurfaced in a blog - SSD Powered Clouds – the Times They are a Changing - by Ravi Prakash, Director of Product Management - OCZ - who says such devices might be useful in write intensive applications such as a ZFS intent log (ZIL).

Ravi said he's interested in hearing from anyone who might be interested in a future RAM SSD concept - he calls Aeon - which "will deliver 140,000 sustained IOPS with 4KB blocks and media latency of less than 5 µsec in a familiar 3.5" drive form factor." ...read the article

See also:- on the subject of the name Aeon - and more like this - take a look at Inanimate Power, Speed and Strength Metaphors in SSD brands

McObject shows in-memory database resilience in NVDIMM

Editor:- October 9, 2013 - what happens if you pull out the power plug during intensive in-memory database transactions? For those who don't want to rely on batteries - but who also need ultimate speed - this is more than just an academic question.

Recently on these pages I've been talking a lot about a new type of memory channel SSDs which are hoping to break into the application space owned by PCIe SSDs. But another solution in this area has always been DRAM with power fail features which save data to flash in the event of sudden power loss. (The only disadvantages being that the memory density and cost are constrained by the nature of DRAM.)

McObject (whose products include in-memory database software) yesterday published the results of benchmarks using AGIGA Tech's NVDIMM in which they did some unthinkable things which you would never wish to try out for yourself - like rebooting the server while it was running... The result? Everything was OK.

"The idea that there must be a tradeoff between performance and persistence/durability has become so ingrained in the database field that it is rarely questioned. This test shows that mission critical applications needn't accept latency as the price for recoverability. Developers working in a variety of application categories will view this as a breakthrough" said Steve Graves, CEO McObject.

Here's a quote from the whitepaper - Database Persistence, Without The Performance Penalty (pdf) - "In these tests eXtremeDB's inserts and updates with AGIGA's NVDIMM for main memory storage were 2x as fast as using the same IMDS with transaction logging, and approximately 5x faster for database updates (and this with the transaction log stored on RAM-disk, a solution that is (even) faster than storing the log on an SSD). The possibility of gaining so much speed while giving up nothing in terms of data durability or recoverability makes the IMDS with NVDIMM combination impossible to ignore in many application categories, including capital markets, telecom/networking, aerospace and industrial systems."

Editor's comments:- last year McObject published a paper showing the benefits of using PCIe SSDs for the transaction log too. They seem to have all angles covered for mission critical ultrafast databases that can be squeezed into memory.

in memory database even better with FIO's flash

Editor:- November 20, 2012 - McObject recently released new benchmark results which indicate that the in-memory database company is not so unfriendly to flash SSDs as you may have thought from reading earlier company positioning papers.

It seems that a software product - which was originally designed for the DRAM-HDD world - is a good fit in the flash SSD world too - if you have the right scale of data and the right SSD. ...read more

Micron sources power holdup technology for NVDIMMs

Editor:- November 14, 2012 - Micron has signed an agreement with AgigA Tech to collaborate to develop and offer nonvolatile DIMM (NVDIMM) products using AgigA's PowerGEM (sudden power loss controller and holdup modules).

STEC discloses RAM vs flash SSD revenues

Editor:- November 7, 2012 - among other things STEC revealed yesterday in its earnings conference call that RAM SSDs were approximately 4% of its revenues in the recent quarter.

AMD will rebrand Dataram's RAMDisk software

Editor:- September 6, 2012 - Dataram today announced it will develop a version of its RAMDisk software which will be rebranded by AMD in Q4 under the name Radeon RAMDisk and will target Windows gaming enthusiasts seeking (up to 5x) faster performance when used with enough memory. See also:- SSD software

Kaminario recommends you read SSD Symmetries article

Editor:- June 15, 2012 - I accidentally discovered today that earlier this week Gareth Taube, VP of Marketing at Kaminario published a new blog in which he recommends my article about SSD Symmetries.

Gareth says "Flexibility, such as being able to integrate multiple memory technologies into a single box (like Kaminario's K2-H), is going to be increasingly important to customers who want efficiency and customization options. This is especially true because there are many memory innovations coming on the near horizon." Gareth's blog

Editor's comments:- when I was writing the symmetry article one of the things I had in mind to do was to put more examples in it. Then I realized that having lots of examples would simply make the article unreadable.

One of the examples I was going to use for good roadmap symmetry (but then forgot to put anywhere) was in fact Kaminario - because they can leverage off whatever Fusion-io does with flash (or other nv memory) and furthermore Kaminario can also leverage off whatever server makers do with CPUs and RAM. Roadmap symmetry is a long term consideration - important for big users who don't like supplier churn and important for VCs and investors too.

...Later:- I'm glad I wrote that bit about "roadmap symmetry" - because by a spooky coincidence - 3 days later we got the news that Kaminario's investors still love what they do.

June 18, 2012 - Kaminario today announced it has secured a $25 million series D round of funding, bringing its total funding to $65 million.

sharpen your SSD R/W grid latency weapons to 5µs

Editor:- May 9, 2012 - Kove has published some new record latency numbers for its fast RAM SSD - the XPD L2 - which has achieved sustained 5 microsecond random storage read and write latency when connected via 40Gb/s InfiniBand adapters from Mellanox.

Kove's system has good R/W symmetry and - the company says - is not subject to periodic performance jitter or "periodicity". Even under constantly changing disk utilization it delivers uniform, predictable and deterministic performance.

"The Kove XPD L2... allows high performance applications to use storage as a weapon rather than accept it as a handicap," said Kove's CEO, John Overton. "We are pleased to set a new bar height for storage latency."

new article on RAM SSDs

Editor:- April 22, 2011 - Long Live RAM SSD is a new article by Woody Hutsell which argues that the RAM SSD market - which many observers once believed would be killed by flash - still has a great future.

the Fastest SSDs
the Top 10 SSD Companies
RAM Cache Ratios in flash SSDs
Why I Tire of - "Tier Zero Storage"
RAM versus Flash SSDs - which is Best?
the new way of looking at Enterprise SSDs
Introducing the concept of RAMClouds (pdf)
when the SSD brand sends the wrong signal - RamSan and Dataram
Surviving SSD sudden power loss
Why should you care what happens in an SSD when the power goes down?

This important design feature - which barely rates a mention in most SSD datasheets and press releases - has a strong impact on SSD data integrity and operational reliability.

This article will help you understand why some SSDs (which work perfectly well in one type of application) might fail in others... even when the changes in the operational environment appear to be negligible.
SSD power down architectures and characteristics. If you thought endurance was the end of the SSD reliability story - think again. the article
RAM based SSDs
sometimes you just can't wait
The industry will learn a lot about the "goodness" of new memory tiering products by stressing them in ways which the original designers never intended.
RAM disk emulations in "flash as RAM" solutions
RAM based SSD makers list
ACARD Technology
Avere Systems
Density Dynamics
Dynamic Solutions International
Real Ram Disk
Solid Access Technologies
Solid Data Systems
Texas Memory Systems
Third I/O
Violin Memory
There used to be many other RAM SSD companies at earlier stages in SSD market history - for example Cenatek, Imperial Technology and Platypus Technology - which are no longer in business.
"Across the whole enterprise - a single petabyte of SSD with new software could replace 10 to 50 petabytes of raw legacy HDD storage and still enable all the apps to run much faster..."
the enterprise SSD software event horizon
the Problem with Write IOPS in flash SSDs
Random "write IOPS" in many of the fastest flash SSDs are now similar to "read IOPS" - implying a performance symmetry which was once believed to be impossible.

So why are flash SSD IOPS such a poor predictor of application performance? And why are users still buying RAM SSDs which cost an order of magnitude more than SLC (let alone MLC) - even when the IOPS specs look superficially similar?

This article tells you why the specs got faster - but the applications didn't.
And why competing SSDs with apparently identical benchmark results can perform completely differently. the article
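One reason the specs got faster but the applications didn't can be shown with simple arithmetic. By Little's Law, throughput = queue depth / latency: vendors quote IOPS measured at deep queues, but an application issuing one I/O at a time only ever sees 1 / latency. The latencies below are illustrative assumptions, not measurements of any product.

```python
# Little's Law applied to storage: IOPS = outstanding I/Os / per-I/O latency.
def achievable_iops(queue_depth, latency_us):
    return queue_depth * 1_000_000 / latency_us

FLASH_LATENCY_US = 320  # assumed per-I/O flash SSD latency
RAM_LATENCY_US = 5      # the 5 microsecond class of RAM SSD latency

print(achievable_iops(32, FLASH_LATENCY_US))  # deep-queue benchmark: 100,000 IOPS
print(achievable_iops(1, FLASH_LATENCY_US))   # one I/O at a time: 3,125 IOPS
print(achievable_iops(1, RAM_LATENCY_US))     # RAM SSD at queue depth 1: 200,000 IOPS
```

So two SSDs with "identical" 100K IOPS specs can differ by 60x for a latency-sensitive application that can't keep 32 I/Os in flight - which is why users still pay RAM SSD prices.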

STORAGEsearch is published by ACSL