cool runnings

Rambus to coach faster DRAM
Editor:- April 20, 2017 - Back in the early 1990s it was not uncommon to hear about specialist server companies which were using Peltier-effect heat sinks to refrigerate the fastest workstation processors so that they could be run at higher clock speeds. But this kind of extreme approach to server acceleration only provided short-term competitive gains in a single dimension.

One of the biggest bottlenecks in the past decade has been RAM architecture and DRAM implementation itself. (You can read more articles about the background to this on the DRAM resource page.)

A new angle on extending the performance of DRAM was announced recently by Rambus and Microsoft, who are collaborating on the design of prototype super-cooled DRAM systems to explore avenues of improvement in latency and density due to physics effects below -180°C.

A new article - Rambus, Microsoft Heat Up With Cold DRAM - by Junko Yoshida, Chief International Correspondent, EE Times - discusses these plans in more detail.

In the article Craig Hampel, chief scientist at Rambus, told EE Times that "Microsoft isn't alone... heavy data center users like Google, Facebook and Amazon are all in search of new memory architecture. Indeed, these tech giants, who have primarily grown their business via their technological prowess in software development, are now finding the future of their business growth severely constrained by hardware advancements." the article

Editor's comments:- At room temperature the main problem in fast DRAM systems is that the energy required for refresh cooks the chips: hotter cells lose charge faster, which creates data integrity risks, which in turn demand even more frequent refresh.

This is a limiting design factor.

It means that even if you have a miraculous packaging technique which can sandwich more chips into a box - DRAM loses out to nvm technologies which don't require refresh - when the scale of the installed capacity (and watts) in the box is high.

Because if you can't fit enough RAM into the same single box then the memory system accrues a box-hopping fabric-latency penalty which outweighs the benefits of the faster raw memory chip access times inside the original box.

If you freeze DRAM then the refresh cycle can be extended (which means you can pack more capacity in a box) but also the native transit time for data in the copper interconnects and inside the silicon gets faster too.
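To put a rough number on that refresh relaxation - here's a back-of-envelope sketch (my own illustration, using the common rule of thumb that cell retention roughly doubles for every 10°C of cooling - the exact exponent is an assumption, not a Rambus figure):

```python
def refresh_interval_ms(base_interval_ms, base_temp_c, temp_c, doubling_per_c=10.0):
    """Rule-of-thumb scaling: retention time (and so the allowable refresh
    interval) roughly doubles for every `doubling_per_c` degrees of cooling."""
    return base_interval_ms * 2 ** ((base_temp_c - temp_c) / doubling_per_c)

# Standard DDR4 uses a 64 ms refresh window at the hot end of normal operation.
room = refresh_interval_ms(64, 85, 85)    # baseline: 64 ms
cryo = refresh_interval_ms(64, 85, -180)  # at -180 C: billions of ms - refresh
                                          # effectively stops being a constraint
```

Even if the true exponent is less generous, the direction is clear: at cryogenic temperatures the refresh power and bandwidth tax on DRAM all but disappears.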

Although Rambus and Microsoft are pitching this as a progressive research exercise, I don't think that it will provide a general solution for data intensive factories.

While it's a good thing for researchers to play around and explore the limits of what can be done with all kinds of memory devices - I think that the answer to greater performance lies in new architectures rather than freezing old ones.
Are we there yet?
Editor:- April 7, 2017 - After more than 20 years of writing guides to the SSD and memory systems market I admit in a new blog on - Are we there yet? - that when I come to think about it candidly the SSD industry and my publishing output are both still very much "under construction". the article
NVMe over Fabrics - market experiences
Editor:- March 31, 2017 - The state of the NVMe SSD and fabric market and its growth expectations are conveniently summarized in a new presentation - Experiences with NVMe over Fabrics (pdf) - by Mellanox. Among other things:-
  • 40% of AFAs will be NVMe based by 2020
  • shipments of NVMe SSDs will grow to 25+ million by 2020
The idea of having a PCIe based SSD fabric which can be accessed by many servers - and which combines the latency advantages of local PCIe SSDs with the essential hooks from past low latency server interconnects (specifically RDMA) - has been many years in the telling.

There have been 3 main ingredients to this market brew:-
  • something worthwhile sharing as a resource (low latency SSD pools)
  • a convenient way of connecting to them (a large installed base of server PCIe interface chips was the essential starting point - but it took many years for industry standards to get agreed)
  • software support - which ranges from the storage stack to multi-vendor fabric support.
This paper captures current expectations of how the market will grow. the article (pdf)
Web-Feet Reports on NVM Market Shares in 2016
by Alan Niebel , CEO - Web-Feet Research - March 29, 2017
Not since 2000 have the memory suppliers been in an undersupply situation. That undersupply is the force which, for a number of reasons, resulted in increasing memory prices in 2H 2016.

NAND vendors are producing 2D (planar) NAND at full capacity, while concurrently making the costly shift in production to 3D NAND.

DRAM has been so hot that the Big Three (Samsung, SK Hynix, and Micron) are shrinking their lithography below 1x and 1y, while maintaining their production at capacity.

Other foundries, like the memory manufacturers, are also running at capacity, trying to maintain balance with IoT, M-2-M, mobile and computing demands.

Even the NOR Flash market - reversing 15+ years of market declines - has been caught in the allocation/shortage scenario.

Yet, even with all this current production, building additional capacity takes time and has been fraught with technology hurdles that slow down bit increases.

Although the NOR market is around 5% of NAND, NOR's challenges represent a microcosm of the larger Flash and memory markets.

With the stronger demand for SoC (System on Chip) to satisfy the IoT and edge terminal requirements, foundries and ODMs are shifting their wafer mix away from standalone memories.

These SoCs are also being built at lithography nodes below 40nm, where most embedded NOR Flash cannot be built. Consequently, these SoCs will need KGD and standalone serial NOR components at 512Kbit-256Mbit densities and larger to fulfill the IoT system memory requirements.

In China, GigaDevice - which supplies serial NOR - is caught in a wafer allocation squeeze, not getting enough serial Flash wafers from its foundry, which is making more SoCs (that themselves need serial NOR).

Winbond - which makes both DRAM and NOR/NAND - has been riding the higher priced DRAM wave and has limited any additional wafer allocations to NOR.

Micron is rumored to have shut down NOR production at its Singapore fab in favor of NAND, which removes some NOR wafer capacity.

Cypress, the NOR market leader, is gradually moving its emphasis from commodity standalone NOR to IoT systems memory modules, especially for the automotive market.

Finally, Macronix has regained NOR market presence by allocating more NOR wafer production, but is still facing long lead times, since demand keeps increasing against the constrained industry supply.

The net effect is that NOR and other memory prices are increasing with supply constraints, and vendors are on allocation for the foreseeable future.

In consolidating the annual shipments of each Flash memory vendor, Web-Feet Research found the 2016 Flash memory market to be $36.8 billion, an increase of 10% from 2015.

A substantial part of the increase in 2016 revenues came from the NAND Flash market, with a 10.7% growth rate, while the NOR market contracted slightly from 2015, by 1.8%.

Samsung was again the 2016 revenue market leader for all NV memories and for NAND; Cypress (Spansion) established itself as the NOR Flash and NVRAM market leader; while Macronix regained the serial NOR leadership position.

The 2016 Non Volatile Memory Market Shares by Vendor report (by Web-Feet Research) discusses the impact of the mergers and acquisitions on the memory market, qualifies the migration of planar to 3D NAND, quantifies how fast the emerging NVM are growing including STT-MRAM and XPoint as well as the reemergence of RRAM and NRAM, and presents two forecasts for serial EEPROM showing the impact (slow initial adoption) of the Internet of Things (IoT) and its aggressive scenario.

This report, CS700MS-2017, is available for $2.5K and providers of the market share data can obtain the report at a discount.

For more info about these reports contact Web-Feet Research at +1 831-869-8274.

who will make enough flash?
Editor:- March 14, 2017 - A fab capacity view of the flash industry's ability to meet demand for memory this year is presented in - Shootout At Yokkaichi - the NAND Industry at the Crossroads by William Tidwell, Semiconductor Analyst who regularly writes about such things on Seeking Alpha.

Among other things Bill discusses the state of production maturity within the major memory companies in their transitions to 3D and says:-

"Industry productivity is still low due to a condition that could be called planar overhang - that being the amount of planar capacity that must be converted as fast as possible to 3D, so the company can take advantage of the denser 3D process. Unfortunately, this conversion process from planar to 3D is basically like buying a house that has to be completely renovated and then finding out that load-bearing walls are involved - and the foundation has to be reinforced."

The article's central theme is the imminent auction of Toshiba's flash assets, the main competitors and possible bidders, winners and losers.

Along the way you get a good feel for the investment and production dynamics which will shape the next few years of this industry. the article

historical timeline of 3D NAND flash memory

will there be enough flash to replace enterprise HDD?

boom bust cycles in memory markets - lessons for SSD

nand flash memory and other SSDward leaning nvms too

"Samsung Electronics will invest about 10 trillion won (US$8.7 billion) in Hwaseong Campus in Gyeonggi Province, South Korea to build a new line to produce DRAM."
news report in Business Korea (March 15, 2017)


M.2 PCIe SSDs for secure rugged applications?
Editor:- March 20, 2017 - Do you know who makes M.2 PCIe SSDs which can operate at industrial temperatures and have security strong enough for a military application?

That's a question I was asked recently by a reader in the defense sector.

So I looked into it. He was right. They are hard to find. Nearly all the industrial M.2 SSDs are SATA and not PCIe.

The only companies which I have been able to confirm in this category (by direct contact rather than a promissory future product statement on a web page) are:-

I became interested in the technical difficulties which might explain why there are so few suppliers right now.

Here's what I think is part of the explanation.

As you add operational requirements to the datasheet - moving up from consumer to enterprise and then to industrial SSDs - you also add circuits and components which compete for physical space, electrical power and cost in the total SSD design budget. For example:-
  • use of larger flash memory cell geometry (nanometer generation and coding type - for example SLC rather than MLC, or MLC rather than TLC) to ensure data integrity over a wider range of temperature and power supply quality environments
  • use of different flash SSD controllers

    Consumer and enterprise SSDs can use controllers which use more electrical power than industrial or embedded SSDs due to the ease of fitting the design into the heat rise budget.

    Industrial designs can't afford the same wattage in their controllers - because the heat generated would reduce the reliability of the SSD at the high end of its operating temperature range (70 to 85 degrees C and sometimes 95 degrees) - or force the use of more expensive components elsewhere (to cope with the incremental heat rise).

    The tradeoffs made (typically a lower wattage controller) are why industrial SSDs tend not to use CPU intensive data integrity management schemes like adaptive DSP. And that in turn means they need to use intrinsically higher quality memory.
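As a hedged illustration of that heat rise budget - the numbers below are entirely hypothetical, not from any vendor datasheet - the arithmetic a designer faces looks roughly like this:

```python
def max_controller_watts(t_ambient_max_c, t_limit_c, theta_ca_c_per_w, other_watts):
    """Crude steady-state heat budget: internal temperature rise above ambient
    is (controller + other component power) x thermal resistance to ambient.
    Returns the controller power that just reaches the temperature limit."""
    headroom_c = t_limit_c - t_ambient_max_c
    return headroom_c / theta_ca_c_per_w - other_watts

# Hypothetical industrial SSD: 85 C max ambient, components rated to 105 C,
# 5 C/W thermal path, 1.5 W already spent on flash and support circuits.
budget = max_controller_watts(85, 105, 5.0, 1.5)  # -> 2.5 W left for the controller
```

The same controller in a consumer drive rated only to 70°C ambient would get 3 W more of headroom - which is why consumer and industrial parts diverge.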
When you add all the requirements together to make an industrial / military SSD capable of working reliably and shrink the size budget from a bigger to smaller form factor (2.5" to M.2) while at the same time asking for high performance too - it's a tough design problem to solve for the first time.

But once such products do become available from multiple sources, demand will grow (due to confidence in the equipment design community that they won't get stuck in an EOL rut from a single source dependency).

If you know of other secure erase, industrial operation M.2 PCIe SSD companies which are shipping products let me know and I'll mention them here.

I placed a query via LinkedIn but that didn't generate any other confirmed vendors.

"In the most recent quarter (ending January 31, 2017) we had more than one customer running large scale simulations and analytics replace over 20 racks (think 20 refrigerators of equipment) with a single FlashBlade (at 4U about the size of a microwave oven).

Such dramatic consolidation depends on storage software that has been designed for silicon rather than mechanical disk."
Scott Dietzen, CEO - Pure Storage - in his blog Delivering the data platform for the cloud era and the secular shift to flash memory (March 1, 2017)

Editor's comments:- this is another confirmation of the replacement ratio predictions in my (2013) blog - meet Ken - and the enterprise SSD software event horizon.

PS - Another thing which Scott Dietzen said in his new blog was...

"This year, the 8th since our founding and our 6th of selling, we expect to reach $1 billion in revenues."

Soft-Error Mitigation for PCM and STT-RAM
Editor:- February 21, 2017 - There's a vast body of knowledge about data integrity issues in nand flash memories. The underlying problems and fixes have been one of the underpinnings of SSD controller design. But what about newer emerging nvms such as PCM and STT-RAM?

You know that memories are real when you can read hard data about what goes wrong - because physics detests a perfect storage device.

A new paper - a Survey of Soft-Error Mitigation Techniques for Non-Volatile Memories (pdf) - by Sparsh Mittal, Assistant Professor at Indian Institute of Technology Hyderabad - describes the nature of soft error problems in these new memory types and shows why system level architectures will be needed to make them usable. Among other things:-
  • scrubbing in MLC PCM would be required in almost every cycle to keep the error rate at an acceptable level
  • read disturbance errors are expected to become the most severe bottleneck in STT-RAM scaling and performance
MRAM and PCM data integrity issues - the article (pdf)
He concludes:- "Given the energy inefficiency of conventional memories and the reliability issues of NVMs, it is likely that future systems will use a hybrid memory design to bring the best of NVMs and conventional memories together. For example, in an SRAM-STT-RAM hybrid cache, read-intensive blocks can be migrated to SRAM to avoid RDEs in STT-RAM, and DRAM can be used as cache to reduce write operations to PCM memory for avoiding WDEs.

"However, since conventional memories also have reliability issues, practical realization and adoption of these hybrid memory designs are expected to be as challenging as those of NVM-based memory designs. Overcoming these challenges will require concerted efforts from both academia and industry." the article (pdf)

Editor's comments:- Reading this paper left me with the confidence that I was in good hands with Sparsh Mittal's identification of the important things which need to be known.

See also:- an earlier paper by Sparsh Mittal - data compression techniques in caches and main memory

How does the memory famine impact industrial SSD service values?

Editor:- April 25, 2017 - 17 years ago I wrote a guide - Why I won't publish your press release? - and the first reason in it still applies today: it's not newsworthy.

The context which brought it to mind was a press release earlier this month from Virtium saying it was now offering 2TB (MLC) and 1TB (iMLC) in its 2.5" industrial SATA SSD range.

That kind of capacity isn't news - as we reported the first rugged TB 2.5" SSD sampling on this page more than 8 years ago - in January 2009. (And the first industrial SATA 2.5" flash SSDs were shipping 13 years ago in July 2004 BTW.)

Is there anything else you can tell me? I asked.

(Like me - readers need to have a genuine reason to be interested.)

I asked about pricing - but the company said there are too many options and didn't want to be drawn on that. I learned the SLC version is rated at 30 DWPD. But that's not a news story either.

Around about this time I was thinking about the impacts of the flash memory famine on different types of companies in the SSD supply chain. (Whether it's a good thing or bad thing - depends which company you are.)

In the 1970s and 80s, chip supply volatility - due to semiconductor company book-to-bill ratios thrashing about all over the place - was a constant reality of being in the electronics business. In recent years we had the illusion that the boom to bust days of memory were over - due to better business management and better prediction systems. So when memory prices increased and supply dates went into "unknown" that was a shock to many people.

Continuity of supply and pricing expectations are just as important for industrial equipment makers as avoiding EOL shock and having stable BOMs.

Tactical stories about memory supply and logistics had come up in some recent conversations I had with industrial SSD makers Wilk Elektronik in Poland and Longsys in China.

So I asked this...

Does Virtium (which is headquartered in the USA) get involved in giving price projection promises to its contract system customers?

Scott Phillips, VP of marketing - Virtium told me "While we can't 'promise' specific pricing into future quarters, we can provide supply-chain and associated pricing visibility to strategic customers up to one to two quarters out, based on NAND flash allocations, associated pricing and cost averaging of materials.

This at least helps customers plan. In other words, even though we are direct with suppliers, it does not completely insulate us from cost increases should suppliers decide to raise prices, but we can somewhat mitigate pricing volatility like what is found in the distribution channel currently."

There you go. Is that more interesting than just another 2.5" TB SSD story? I hope so.

now Cinderella industrial systems with "no-CPU" budgets and light wattage footprints can go to the NVMe speed-dating ball

Editor:- April 19, 2017 - A dilemma for designers of embedded systems which require high SSD performance is how can you get the benefits of enterprise class NVMe SSDs for simple applications - which integrate video for example - without at the same time escalating the wattage footprint of the entire attached micro server?

A new paper published today by IP-Maker - Allowing server-class storage in embedded applications (pdf) discusses the problem and how their new FPGA based IP enables any NVMe PCIe SSD to be used in embedded systems to provide sub-microsecond latency using "20x better power efficiency, and 20x lower cost compared to a CPU-based system."

image shows where the FPGA IP fits in the context of an embedded low power system using fast NVMe SSDs

The company says the NVMe host IP - which is now available - can be used in an FPGA connected between the PCIe root port and the cache memory, internal SRAM or external DRAM. It fully controls the NVMe protocol by setting and managing the NVMe commands. No CPU is required. It supports PCIe gen 3 x 8 interface.

Michael Guyard, Marketing Director said that - among other things - applications include:-
  • military recorders
  • portable medical imaging
  • mobile vision products - in robots and drones the article (pdf)
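To make concrete what "setting and managing the NVMe commands" involves, here's a sketch of packing one 64-byte write command submission queue entry, following the NVMe 1.x SQE layout (a simplified illustration which ignores SGLs, metadata and fused operations - not IP-Maker's implementation):

```python
import struct

NVME_CMD_WRITE = 0x01  # NVM command set: Write opcode

def build_write_sqe(cid, nsid, prp1, slba, nblocks):
    """Pack a 64-byte NVMe submission queue entry for a Write command.
    PRP2 and the metadata pointer are left zero (single-page transfer)."""
    dw0 = NVME_CMD_WRITE | (cid << 16)  # opcode in byte 0, CID in DW0[31:16]
    return struct.pack(
        "<IIQQQQQIIII",
        dw0,
        nsid,          # DW1: namespace ID
        0,             # DW2-3: reserved
        0,             # DW4-5: metadata pointer (unused)
        prp1,          # DW6-7: PRP entry 1 - data buffer physical address
        0,             # DW8-9: PRP entry 2 (unused, single page)
        slba,          # DW10-11: starting LBA
        nblocks - 1,   # DW12: number of logical blocks, 0-based
        0, 0, 0)       # DW13-15: unused here

sqe = build_write_sqe(cid=7, nsid=1, prp1=0x100000, slba=2048, nblocks=8)
```

An FPGA host block does the hardware equivalent of this packing, plus doorbell writes and completion queue polling - none of which needs a general purpose CPU.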

Editor's comments:- Now Cinderella embedded systems with low cost budgets and low wattage footprints can go to the enterprise NVMe performance ball. The new magic - in the form of the FPGA IP released today by IP Maker - has the potential to transform the demographics and class of SSDs seen in future industrial systems.

See also:- optimizing CPUs for use with SSDs, SSD glue chips

low yield at sub 20nm is root of DDR4 shortage says DRAMeXchange

Editor:- April 14, 2017 - Quality problems in DRAMs which have been sampling this year at the new sub-20nm generation from major suppliers are at the heart of the issues discussed in a new market view blog by DRAMeXchange - which concludes that the contract price of 4GB DDR4 DRAM modules will rise 12.5% entering 2Q17.

Avril Wu, research director of DRAMeXchange said - "PC-OEMs that have been negotiating their second-quarter memory contracts initially expected the market supply to expand because Samsung and Micron have begun to produce on the 18nm and the 17nm processes, respectively. However both Samsung and Micron have encountered setbacks related to sampling and yield, so the supply situation remains tight..." the article

See also:- inside SSD pricing, storage market research companies

Tegile gets another $33 million funding

Editor:- April 11, 2017 - Tegile today announced $33 million in additional funding which was led by Western Digital and current investors such as Meritech Capital, Capricorn Investment Group, and Cross Creek Capital. With this financing, Tegile has raised a total of $178 million to date.

See also:- rackmount SSDs, hybrid storage arrays, VCs in SSDs

Tachyum promises 10x faster CPUs soon

Editor:- April 7, 2017 - I was fortunate enough to have had close relationships with technologists and marketers of high end server CPUs in the 1990s, who explained to me in detail the performance limitations of CPU clock speeds and memories which would prevent CPUs getting much faster beyond the year 2000 - due to physics, and the latency lost to signal coherency when signals left silicon and hit copper pads.

That was one of the triggers which made me reconsider the significance of the earlier CPU-SSD equivalence and acceleration work I had stumbled across in the late 1980s - and write about it in these pages when I explained (in 2003) why I thought the enterprise SSD market (which at that time was worth only tens of millions of dollars) had the potential to become a much bigger $10 billion market: by looking at server replacement costs and acceleration as the user value proposition for market adoption, and disregarding irrelevant concerns about cost per gigabyte.

I was surprised these equivalencies weren't more widely known. And that's why I recognized the significance of what the pioneers of SSD accelerators on the SAN were doing in the early 2000s.

It's taken 17 years - but the clearest ever expression of the CPU GHz problem and why server architecture got stuck in that particular clock rut (for those of you who don't have the semiconductor background) appears in a recent press release from Tachyum which says (among other things)...

"The 10nm transistors in use today are much faster than the wires that connect them. But virtually all major processing chips were designed when just the opposite was true: transistors were very slow compared to the wires that connected them. That design philosophy is now baked into the industry and it is why PCs have been stuck at 3-4GHz for a decade with "incremental model year improvements" becoming the norm. Expecting processing chips designed for slow transistors and fast wires to still be a competitive design when the wires are slow and the transistors are fast, doesn't make sense."

The warm-up press release also says - "Tachyum is set to deliver increases of more than 10x in processing performance at fraction of the cost of any competing product. The company intends to release a major announcement within the next month or two." the article

Editor's comments:- Do I believe it's possible?

Yes - by discarding 2D designs of CPUs and maybe adding SSD-era memory architecture in the CPU SoC. (I'm just guessing about these solutions BTW.) But if anyone knows how - then I'm prepared to give cofounder Rado Danilak the benefit of the doubt for such ambitious claims.

Toshiba's storage sale - update from Tom Coughlin

Editor:- April 4, 2017 - As previously reported Toshiba's memory and SSD business will be spun off to generate cash to plug losses in its nuclear generating business. A new article by Tom Coughlin, President Coughlin Associates - HDD Implications of Toshiba Memory Unit Sale - looks at the ramifications (no pun intended) for the hard drive market depending on which of the 10 or so potential bidders succeeds.

Among other things Tom says - "Selling the HDD unit along with the flash unit could be one outcome... The end result could be very interesting and create very strange bed-fellows." the article

Walmart generates 2.5PB of analyzable data every hour

Editor:- April 3, 2017 - Walmart's Data Café is a private cloud which supports business decision makers in its 20,000 stores who can access over 200 streams of internal and external data, including 40 petabytes of recent transactional data, which can be modelled, manipulated and visualized.

I learned the above stats in a new case study - Big Data At Walmart: How 40+ Petabytes Improves Retail Decision-Making by business author Bernard Marr who tells us how teams from any part of the business are invited to bring their problems to the analytics experts and then see a solution appear before their eyes on the nerve centre's touch screen "smart boards". the article

Transcend's new M.2 SSD

Editor:- March 27, 2017 - Transcend today launched the MTE850 - a family of PCIe Gen 3 x4 M.2 SSDs with 512GB of 3D MLC NAND capacity and R/W speeds up to 2.5GB/s and 1.1GB/s respectively - aimed at the consumer market.

CPUs for use with SSDs in the Post Modernist Era of SSD

Editor:- March 22, 2017 - A new blog on - optimizing CPUs for use with SSDs in the Post Modernist Era of SSD and Memory Systems - was prompted by a question from a startup which is designing new processors for the SSD market. the article

Google joins investors in Avere Systems

Editor:- March 21, 2017 - Avere Systems today announced the closing of a $14 million Series E funding with participation from existing investors Menlo Ventures, Norwest Venture Partners, Lightspeed Venture Partners, Tenaya Capital and Western Digital Capital and new investor Google Inc.

The new investment brings Avere's total funds raised to $97 million, and will be used to expand the company's hybrid cloud product offerings so that more organizations can easily take advantage of the public cloud.

Editor's comments:- A year ago Avere announced it had been named "Google Cloud Platform Technology Partner of the Year" for 2015.

See also:- VCs in SSDs

NVMdurance has US patent for Adaptive Flash Tuning

Editor:- March 21, 2017 - NVMdurance today announced that it has been granted US patent 9,569,120 for Adaptive Flash Tuning.

This patent covers NVMdurance's Pathfinder and Navigator software, which discover optimal flash trim sets for the target application and implement a set of optimization techniques that constantly monitor the NAND flash health and autonomically adjust the operating parameters in real time.

Before the flash memory product goes into production, NVMdurance Pathfinder determines multiple sets of viable flash register values, using a custom-built suite of machine-learning techniques. Then, running on the flash memory controller utilized in SSDs or other storage product, NVMdurance Navigator chooses which of these predetermined sets to use for each stage of life to increase the flash memory endurance.
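A toy sketch of that life-stage switching idea (the thresholds and trim set names below are invented for illustration - NVMdurance's real, characterized parameters are proprietary):

```python
# Hypothetical: the controller holds a few pre-characterized register (trim)
# sets and switches between them as the flash accumulates program/erase wear.
LIFE_STAGES = [
    (0,     "gentle"),      # fresh flash: low-stress programming
    (3000,  "balanced"),
    (10000, "aggressive"),  # worn flash: stronger thresholds / more ECC effort
]

def pick_trim_set(pe_cycles):
    """Return the predetermined trim set for the current wear level."""
    chosen = LIFE_STAGES[0][1]
    for threshold, name in LIFE_STAGES:
        if pe_cycles >= threshold:
            chosen = name
    return chosen
```

The runtime decision is a cheap table lookup - which is exactly why the heavyweight machine learning can stay offline, back at HQ.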

Editor's comments:- The thing which makes NVMdurance's technology processes a viable business model for SSD partners is that the heavyweight processing is done back at HQ - as part of the memory characterization and controller modeling - which means that the delivery overhead in each shipped product is lightweight and the stakeholders' IP is protected.

And another thing: no one has come up with a better way to roll out a new SSD with new flash memory than encapsulating the process in such a predictable set of algorithmically bounded phases - which reduces the worst risks (of delay and misfire) that come from picking such magic numbers via the organic talent (human) alternatives.

See also:- Adaptive flash care management & DSP IP in SSDs, the limericks of SSD endurance, the 5 stage life cycle budget of extended flash endurance (pdf)

Intel is sampling 3DXpoint PCIe SSDs

Editor:- March 19, 2017 - Intel today announced that it is sampling its long awaited first enterprise SSD which uses 3DXpoint (Optane) memory and which is aimed at the HHHL PCIe SSD market.

The P4800X Series (pdf) has a PCIe 3.0 x4 NVMe interface and provides up to 375GB capacity, 500K mixed IOPS (4KB), block level R/W latency of 150/200µs (queue depth 16), and endurance of 30 DWPD for 3 years (equivalent to 18 DWPD on a 5 year adjusted basis).
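The endurance restatement is simple arithmetic - total lifetime drive writes stay fixed, so DWPD scales with the warranty period:

```python
def dwpd_adjusted(dwpd, rated_years, target_years):
    """Restate endurance over a different warranty period: total lifetime
    drive writes are fixed, so DWPD scales by rated/target years."""
    return dwpd * rated_years / target_years

# 30 DWPD rated over 3 years, restated on a 5-year basis:
dwpd_adjusted(30, 3, 5)  # -> 18.0
```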

The new drives are supported by caching / tiering software (Intel Memory Drive Technology) which collaborates with motherboard DRAM resources to transparently present the 3DXpoint capacity as an emulated RAM pool.

This is similar in concept to earlier software products in the market from various vendors which have supported flash as RAM.

As widely expected the new SSDs have worse performance and higher pricing than Intel had indicated at the first public unveiling in the summer of 2015.

A rounded perspective can be seen in a new blog Intel Announces Optane SSDs for the Enterprise - by Jim Handy - founder Objective Analysis.

Among other things Jim says "Intel has announced an SSD whose performance is close to that of NAND flash at a price that is close to that of DRAM. How did that happen?" the article

Editor's comments:- As the new P4800X is not hot pluggable, and as its main difference to previous flash SSDs from Intel is its support as a tiered memory, the most obvious role for a competitive comparison is memory channel based NVDIMM solutions - in particular the Memory1 product from Diablo, which provides 128GB of flash as RAM per DIMM socket and up to 2TB in a 2 socket server.

Density comparison - Optane PCIe and Flash DDR-4

On a density level - at a superficial glance - one HHHL Optane PCIe SSD appears to give the equivalent emulated memory of 3x DDR-4 Memory1 DIMMs.

Although this doesn't take into account how much DRAM is needed to support each type of configuration it's a good enough comparison point to start with.

But there are some areas of doubt in making such a comparison.

1 - Due to the scarcity of the new Optane products we haven't seen published benchmarks yet which could show how effectively the Optane system works (as an integrated memory and software solution). The raw latency figures (at the SSD datasheet level) don't conclusively point towards it being either a better or worse performer (than the flash based Memory1).

2 - From a customer point of view a key factor between the 2 products isn't just the level of density today. (Let's assume they're both similar right now.) If we compare the potential capacity roadmaps - 3DXpoint (as a new technology) has been difficult for Intel / Micron to get started, and we can't yet have any confidence that density will improve (from the technology difficulty angle) or indeed that new generations will even get the investment to improve at all.

In contrast - all flash based solutions get a helping hand from the entire flash industry's striving to keep improving density. So it's less risky to assume that a flash based system will probably increase in storage density during the next 3 years than any new alternative NVRAM.

It's the density in a single motherboard which makes or breaks the big memory market (as we have already seen with DRAM).

Electrical power consumption (wattage) is more important than small differences in latency.

That's because if you can get more memory into one box then you save the interbox fabric latency - which is what already makes nand flash (as RAM) faster and cheaper than DRAM at scales of tens to hundreds of terabytes.
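To see why, here are some purely illustrative numbers - all three latencies below are assumptions made up for the sake of the argument, not measured figures:

```python
# Illustrative latencies (microseconds) - assumed, not measured.
LOCAL_DRAM_US = 0.1          # DRAM access inside the box
LOCAL_FLASH_AS_RAM_US = 5.0  # flash emulating RAM inside the same box
FABRIC_HOP_US = 10.0         # round trip to another box over the fabric

remote_dram_us = LOCAL_DRAM_US + FABRIC_HOP_US  # DRAM, but in the next box
print(f"remote DRAM ~{remote_dram_us:.1f}us vs "
      f"local flash-as-RAM ~{LOCAL_FLASH_AS_RAM_US:.1f}us")
# Once the fabric hop costs more than the flash penalty, the bigger
# (cheaper) in-box memory wins - whatever it's made of.
```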

It's refreshing to see that there are so many genuinely different competing solutions being offered for the future memory fabric market.

Compatibility with SSDeverywhere software (and useful agility with the new big memory killer apps) along with rationally affordable granular value propositions for integration with the cloud will be just as important as any of the raw memory technologies we see assembled into cards and modules.

As I said at the close of 2016 - everything in the SSD market now affects everything else.

CNEX Labs has amassed $60 million for new SSD controller

VCs in SSDs
Editor:- March 15, 2017 - CNEX Labs today announced its Series C round of financing which brings total funding to date over $60 million. The company will use the funding for mass production and system integration for lead customers of its NVMe-compliant SSD controllers for hyperscale markets. The new controllers will enable full host control over data placement, I/O scheduling, and other application-specific optimizations, in both kernel and user space.

See also:- adaptive intelligence flow symmetry (1 of 11 Key Symmetries in SSD design).

BeSang says 3D Super-DRAM could fix multi-billion dollar money pit of memory industry's fab capacity roadmap

Editor:- March 15, 2017 - Just as we're starting to get used to a world view that memory fabrication capacity may not be enough to make all the memory parts needed - and that a pragmatic global optimization from the user point of view may be to plan ahead for advanced memory systems which use tiering, flash as RAM, freshly minted shiny nvms and new SSD aware software to get more storage and processing done with fewer chips (a journey which - depending who you are - begins or ends with the idea of reducing the ratio of DRAM to storage)... And just as we're getting our heads adjusted to the huge investments which would be needed to make DRAM technology better - and coming to believe that no sane investor (not even a VC who loves SSDs) would want to toss their money in that direction - a seemingly different, alternate, get out of jail free option is offered in a new blog by Sang-Yun Lee, CEO - BeSang - in EE Times - Why 3D Super-DRAM?.

Among other things Sang says...

"If you consider planar DRAM shrinking from 18nm to 16nm, then, 20% more dice-per-wafer could be achieved. To do so, multi-billion dollar should be invested for R&D and EUV is required. In case of 3D Super-DRAM, it needs less than $50 million for R&D and no EUV; and even so, it could produce 400% more die-per-wafer."

And at the risk of repeating some of that:- 4x as much DRAM from the same fabs without huge investments... How is that possible? the article
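Sang's figures pass a simple area scaling sanity check - idealized, because it ignores edge losses, periphery overhead and yield, and it reads the "400% more" 3D claim as an assumed 4x vertical multiplier:

```python
# Ideal area scaling from an 18nm to a 16nm planar process.
shrink_gain = (18 / 16) ** 2   # ~1.27x dice per wafer, in theory
print(f"planar shrink: ~{(shrink_gain - 1) * 100:.0f}% more dice")
# ~27% ideal - in the same ballpark as the ~20% quoted once real overheads bite.

STACK = 4   # assumed vertical multiplier behind the "400% more" 3D claim
print(f"3D Super-DRAM: ~{STACK}x dice-per-wafer from the same fab")
```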

Editor's comments:- You can get an idea of the complex decision matrices facing memory makers. In past decades the product types which determined the demand mix for memories (PCs, phones, servers) were few in number and had predictable roadmaps. Now big demands for memory are coming from cloud, IoT and new intelligence based markets which are creating entirely new ratios and rules of what is possible with memory systems.

new edition - the Top SSD Companies

Editor:- March 10, 2017 - today we published the new 39th quarterly edition of the Top SSD Companies.

Hyperstone, NVMdurance and SymbolicIO all made their first appearances in this list.

Although a lot has changed in the past 10 years of tracking future SSD winners in this series the next wave of disruptive change in memory systems architecture has barely begun. the article

a new name in SSD fabric software

Editor:- March 8, 2017 - A new SSD software company - Excelero - has emerged from stealth today.

Excelero which describes itself as - "a disruptor in software-defined block storage" announced version 1.1 of its NVMesh® Server SAN software "for exceptional Flash performance for web and enterprise applications at any scale."

The company was funded in part by Fusion-io's founder David Flynn.

Editor's comments:- An easy way to understand what this kind of software can do for you is to see how Excelero created a petabyte-scale shared NVMe pool for exploratory computing for an early customer - NASA/Ames. The mitigation of latency and bandwidth penalties in the new environment allowed "compute nodes to access data anywhere within a data set without worrying about locality" and helped to change the way that researchers could interact with data sets which previously had been constrained in many small islands of low latency. the white paper (pdf).

SSD fabrics - companies and past mentions
NVMe over Fabric and other SSD ideas which defined 2016
Inanimate Power, Speed and Strength Metaphors in SSD brands

Everspin enters NVMe PCIe SSD market

Editor:- March 8, 2017 - Everspin today announced it is sampling its first SSD product - an HHHL NVMe PCIe SSD with up to 4GB of ST-MRAM based on the company's own 256Mb DDR-3 memory.

The new nvNITRO ES2GB has end to end latency of 6µs and supports 2 access modes:- NVMe SSD and memory mapped IO (MMIO).

Everspin says that products for the M.2 and U.2 markets will become available later this year - as will higher capacity models using the company's next generation Gb DDR-4 ST-MRAM.

Editor's comments:- Yes - you read the capacity right. That's 4GB not 4TB and certainly not 24TB.

So why would you want a PCIe SSD which offers similar capacity to a flash backed RAM SSD from DDRdrive in 2009? And the new ST-MRAM SSD card also offers worse latency, performance and capacity than a typical hybrid NVDIMM using flash backed DRAM today.

What's the application gap?

The answer I came up with is fast boot time.

If you want a small amount of low latency, randomly accessible persistent memory then ST-MRAM has the advantage (over flash backed DRAM such as you can get from Netlist etc) that the data which was saved on power down doesn't have to be restored from flash into the DRAM - because it's always there.

The boot time advantage of ST-MRAM grows with capacity. And depending on the memory architecture can be on the order of tens of seconds.
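The scale of that restore penalty is easy to sketch - the 1GB/s flash-to-DRAM restore bandwidth below is an assumption for illustration, not any vendor's figure:

```python
# How long does a flash backed NVDIMM take to reload its DRAM on power-up?
# An ST-MRAM module skips this step entirely - the data is already there.
RESTORE_BW_GB_PER_S = 1.0  # assumed flash-to-DRAM restore bandwidth

for capacity_gb in (8, 16, 32):
    restore_s = capacity_gb / RESTORE_BW_GB_PER_S
    print(f"{capacity_gb:>2}GB module -> ~{restore_s:.0f}s restore")
# Tens of seconds at today's module sizes - and growing with capacity.
```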

So - if you have a system whose reliability, accessibility and performance depend on healing and recovery processes which take into account the boot times of its persistent memory subsystems - then your choice is either battery backup (which occupies a large space and maintenance footprint) or a native NVRAM.

The new cards will make it easier for software developers to test persistent RAM tradeoffs in new equipment designs. And also will provide an easy way to evaluate the data integrity of the new memories.
What happened before?


Michelangelo found David inside a rock.
Megabyte was looking for a solid state disk.
(see the original 1998 larger image)
12 years ago in SSD news
In April 2005 - Texas Memory Systems (which made the world's fastest SSDs) offered the world's 1st performance guarantees for enterprise solid state storage systems.

The company said "if the RamSan unit does not accelerate the software application performance to a level acceptable to the customer, the RamSan unit may be returned within 30 days of delivery and Texas Memory Systems will refund all related Texas Memory Systems hardware charges, minus a 10% restocking fee."
The industry will learn a lot about the "goodness" of new memory tiering products by stressing them in ways which the original designers never intended.
RAM disk emulations in "flash as RAM" solutions
after AFAs? - the next box
Throughout the history of the data storage market we've always expected the capacity of enterprise user memory systems to be much smaller than the capacity of all the other attached storage in the same data processing environment.

A new blog on - cloud adapted memory systems - asks (among other things) if this will always be true.

Like many of you - I've been thinking a lot about the evolution of memory technologies and data architectures in the past year. I wasn't sure when would be the best time to share my thoughts about this one. But the timing seems right now. the article


"The MLB Network uses Tegile flash storage in their post-production environment. During the regular season, they need to record all of the games and produce content for shows like MLB Tonight, The Rundown, Intentional Talk, MLB Now, and Quick Pitch, which focus on the day's activities and give a snapshot of what's going on around the league. In the off-season, they produce ... other programming that goes behind the daily game and into more of the storytelling about baseball. That's over 500,000 hours of digital content!"
Brandon Farris, Director of Marketing Tegile Systems in his blog - Flash Storage Goes to Hollywood (March 7, 2017 )

"We are at a junction point where we have to evolve the architecture of the last 20-30 years. We can't design for a workload so huge and diverse. It's not clear what part of it runs on any one machine. How do you know what to optimize? Past benchmarks are completely irrelevant."
Kushagra Vaid, Distinguished Engineer, Azure Infrastructure - quoted in a blog by Rambus - Designing new memory tiers for the data center (February 21, 2017)

RAM has changed from being tied to a physical component to being a virtualized systems software idea - and the concept of RAM even stretches to a multi-cabinet memory fabric.
what's RAM really? - RAM in an SSD context

All the marketing noise coming from the DIMM wars market (flash as RAM and Optane etc) obscures some important underlying strategic and philosophical questions about the future of SSD.
where are we heading with memory intensive systems?

I think it's not too strong to say that the enterprise PCIe SSD market (as we once knew it) has exploded and fragmented into many different directions.
what's changed in enterprise PCIe SSD?
The same memory block may have different ECC codes wrapped around it at different times in its operating life - depending on how healthy it looks. And different ECC codes may be used within the same memory chip at the same time.
Adaptive flash care management & DSP IP in SSDs
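That policy can be pictured as a simple health-driven lookup. The thresholds and code names below are invented for illustration - this is a sketch of the idea, not any vendor's actual scheme:

```python
# Hypothetical adaptive ECC selection: stronger (higher redundancy, slower
# decode) codes get wrapped around blocks as their measured health worsens.
def select_ecc(raw_bit_error_rate: float) -> str:
    if raw_bit_error_rate < 1e-6:
        return "BCH-light"   # healthy block: fast, low overhead code
    if raw_bit_error_rate < 1e-4:
        return "BCH-heavy"   # ageing block: more correction capability
    return "LDPC"            # worn block: strongest code available

# Different blocks in the same chip can carry different codes at once:
for name, rber in {"young": 1e-8, "mid-life": 5e-5, "worn": 1e-3}.items():
    print(f"{name}: {select_ecc(rber)}")
```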