leading the way to the new storage frontier
after AFAs - what's next?
a winter's tale of SSD market influences
endurance? - changes in the forever war
Capacitor hold up times in 2.5" military SSDs
where are we heading with memory intensive systems?
why you might like RAMdisk emulation in "flash as RAM"
optimizing CPUs for use in SSDs in the Post Modernist Era

M.2 PCIe SSDs for secure rugged applications?
Editor:- March 20, 2017 - Do you know who makes M.2 PCIe SSDs which can operate at industrial temperatures and have security strong enough for a military application?

That's a question I was asked recently by a reader in the defense sector.

So I looked into it. He was right. They are hard to find. Nearly all the industrial M.2 SSDs are SATA and not PCIe.

The only companies which I have been able to confirm in this category (by direct contact rather than a promissory future product statement on a web page) are:-

I became interested in the technical difficulties which might explain why there are so few suppliers right now.

Here's what I think is part of the explanation.

As you add operational requirements to the datasheet - moving up from consumer to enterprise and then to industrial SSDs - you also add circuits and components which compete for physical space, electrical power and cost in the total SSD design budget. For example:-
  • use of larger flash memory cell geometry (nanometer generation and coding type - for example SLC rather than MLC, or MLC rather than TLC) to ensure data integrity over a wider range of temperature and power supply quality environments
  • use of different flash SSD controllers

    Consumer and enterprise SSDs can use controllers which use more electrical power than industrial or embedded SSDs due to the ease of fitting the design into the heat rise budget.

    Industrial designs can't afford the same wattage in their controllers - because the heat generated would reduce the reliability of the SSD at the high end of its operating temperature range (70 to 85 degrees C and sometimes 95 degrees C) - or force the use of more expensive components elsewhere (to cope with the incremental heat rise).

    The tradeoffs made (typically a lower wattage controller) are why industrial SSDs tend not to use CPU intensive data integrity management schemes like adaptive DSP. And that in turn means they need to use intrinsically higher quality memory.
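As a back-of-envelope illustration of that heat rise budget (all the numbers below are my own assumed figures, not from any vendor datasheet), the steady-state junction temperature is roughly ambient plus controller power times the package's thermal resistance:

```python
# Illustrative thermal budget sketch. THETA and T_MAX are assumed values
# for a small M.2-class package - not vendor figures.

def junction_temp_c(ambient_c, controller_w, theta_c_per_w):
    """Steady-state junction temperature: ambient + power * thermal resistance."""
    return ambient_c + controller_w * theta_c_per_w

THETA = 15.0   # assumed degC per watt with no heatsink
T_MAX = 105.0  # assumed maximum junction temperature

# A 5W enterprise-class controller is fine at a 25 degC ambient...
print(junction_temp_c(25, 5.0, THETA))   # 100.0 - within budget
# ...but at an 85 degC industrial ambient it overshoots badly,
# while a 1W controller still fits.
print(junction_temp_c(85, 5.0, THETA))   # 160.0 - far over T_MAX
print(junction_temp_c(85, 1.0, THETA))   # 100.0 - within T_MAX
```

The same wattage which is harmless at a consumer ambient blows the budget at the top of the industrial range - which is the squeeze described above.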
When you add all those requirements together to make an industrial / military SSD capable of working reliably, shrink the size budget from a bigger to a smaller form factor (2.5" to M.2), and at the same time ask for high performance too - it's a tough design problem to solve for the first time.

But once such products do become available from multiple sources then demand will grow (due to confidence in the equipment design community that they won't get stuck in an EOL rut from a single source dependency).

If you know of other secure erase, industrial operation M.2 PCIe SSD companies which are shipping products let me know and I'll mention them here.

I placed a query via LinkedIn but that didn't generate any other confirmed vendors.
who will make enough flash?
Editor:- March 14, 2017 - A fab capacity view of the flash industry's ability to meet demand for memory this year is presented in - Shootout At Yokkaichi - the NAND Industry at the Crossroads by William Tidwell, Semiconductor Analyst who regularly writes about such things on Seeking Alpha.

Among other things Bill discusses the state of production maturity within the major memory companies in their transitions to 3D and says:-

"Industry productivity is still low due to a condition that could be called planar overhang - that being the amount of planar capacity that must be converted as fast as possible to 3D, so the company can take advantage of the denser 3D process. Unfortunately, this conversion process from planar to 3D is basically like buying a house that has to be completely renovated and then finding out that load-bearing walls are involved - and the foundation has to be reinforced."

The article's central theme is the imminent auction of Toshiba's flash assets, the main competitors and possible bidders, winners and losers.

Along the way you get a good feel for the investment and production dynamics which will shape the next few years of this industry. the article

historical timeline of 3D NAND flash memory

will there be enough flash to replace enterprise HDD?

boom bust cycles in memory markets - lessons for SSD

nand flash memory and other SSDward leaning nvms too
"In the most recent quarter (ending January 31, 2017) we had more than one customer running large scale simulations and analytics replace over 20 racks (think 20 refrigerators of equipment) with a single FlashBlade (at 4U about the size of a microwave oven).

Such dramatic consolidation depends on storage software that has been designed for silicon rather than mechanical disk."
Scott Dietzen, CEO - Pure Storage - in his blog Delivering the data platform for the cloud era and the secular shift to flash memory (March 1, 2017)

Editor's comments:- this is another confirmation of the replacement ratio predictions in my (2013) blog - meet Ken - and the enterprise SSD software event horizon.

PS - Another thing which Scott Dietzen said in his new blog was...

"This year, the 8th since our founding and our 6th of selling, we expect to reach $1 billion in revenues."
Soft-Error Mitigation for PCM and STT-RAM
Editor:- February 21, 2017 - There's a vast body of knowledge about data integrity issues in nand flash memories. The underlying problems and fixes have been one of the underpinnings of SSD controller design. But what about newer emerging nvms such as PCM and STT-RAM?

You know that memories are real when you can read hard data about what goes wrong - because physics detests a perfect storage device.

A new paper - a Survey of Soft-Error Mitigation Techniques for Non-Volatile Memories (pdf) - by Sparsh Mittal, Assistant Professor at Indian Institute of Technology Hyderabad - describes the nature of soft error problems in these new memory types and shows why system level architectures will be needed to make them usable. Among other things:-
  • scrubbing in MLC PCM would be required in almost every cycle to keep the error rate at an acceptable level
  • read disturbance errors are expected to become the most severe bottleneck in STT-RAM scaling and performance
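A toy model (my own sketch, not from the paper) shows why the scrub interval matters so much: soft errors accumulate independently per bit between scrubs, and the chance of exceeding the ECC's correction limit grows much faster than linearly with the error rate:

```python
from math import comb

def p_uncorrectable(p_bit, n_bits, t_correctable):
    """Probability that one codeword accumulates more than t_correctable
    independent bit errors before the next scrub resets it."""
    p_ok = sum(comb(n_bits, k) * p_bit**k * (1 - p_bit)**(n_bits - k)
               for k in range(t_correctable + 1))
    return 1.0 - p_ok

# Doubling the scrub interval doubles the per-bit error probability
# but multiplies the uncorrectable-error probability by far more than 2x.
short_interval = p_uncorrectable(1e-6, 4096, 2)
long_interval = p_uncorrectable(2e-6, 4096, 2)
print(long_interval / short_interval)  # roughly 8x worse, not 2x
```

With a 3-error threshold the failure probability scales roughly with the cube of the per-bit rate - which is why error-prone MLC PCM cells push the required scrub frequency up towards every cycle.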
MRAM and PCM data integrity issues - read the article (pdf)
He concludes:- "Given the energy inefficiency of conventional memories and the reliability issues of NVMs, it is likely that future systems will use a hybrid memory design to bring the best of NVMs and conventional memories together. For example, in an SRAM-STT-RAM hybrid cache, read-intensive blocks can be migrated to SRAM to avoid RDEs in STT-RAM, and DRAM can be used as cache to reduce write operations to PCM memory for avoiding WDEs.

"However, since conventional memories also have reliability issues, practical realization and adoption of these hybrid memory designs are expected to be as challenging as those of NVM-based memory designs. Overcoming these challenges will require concerted efforts from both academia and industry." the article (pdf)

Editor's comments:- Reading this paper left me with the confidence that I was in good hands with Sparsh Mittal's identification of the important things which need to be known.

If you need to know more he's running a one day workshop on Advanced Memory System Architecture on March 4, 2017.

See also:- an earlier paper by Sparsh Mittal - data compression techniques in caches and main memory
Getting acquainted with the needs of new big data apps
Editor:- February 13, 2017 - The nature of demands on storage and big memory systems has been changing.

A new slideshare - the new storage applications - by Nisha Talagala, VP Engineering at Parallel Machines, provides a strategic overview of the raw characteristics of dataflows which occur in new apps involving advanced analytics, machine learning and deep learning.

It describes how these new trends differ from legacy enterprise storage patterns and discusses the convergence of RDBMS and analytics towards continuous streams of enquiries. And it shows why and where such new demands can only be satisfied by large capacity persistent memory systems.
slideshare by Parallel Machines - memory and storage demands from new real time analytics and other new apps
Among the many interesting observations:-
  • Quality of service is different in the new apps.

    Random access is rare. Instead the data access patterns are heavily patterned and initiated by operations in some sort of array or matrix.
  • Correctness is hard to measure.

    And determinism and repeatability are not always present for streaming data - because, for example, micro batch processing can produce different results depending on arrival time versus event time. (Computing the right answer too late is the wrong answer.)
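A minimal sketch of that arrival-time versus event-time point (my own illustration, not from the slides - the record fields and window size are invented): the same late-arriving record lands in different windows depending on which timestamp you key by, so two runs can legitimately produce different aggregates:

```python
from collections import defaultdict

def window_counts(records, key, window_s=10):
    """Count records per fixed time window, keyed by the chosen timestamp field."""
    counts = defaultdict(int)
    for rec in records:
        counts[rec[key] // window_s * window_s] += 1
    return dict(counts)

# One record was generated at t=9 but arrived late, at t=12.
records = [
    {"event_time": 9,  "arrival_time": 12},
    {"event_time": 11, "arrival_time": 11},
]
print(window_counts(records, "event_time"))    # {0: 1, 10: 1}
print(window_counts(records, "arrival_time"))  # {10: 2}
```

The two groupings disagree about the [0, 10) window - a micro batch engine keyed on arrival time simply never sees the late record in its "correct" window.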
Nisha concludes "Opportunities exist to significantly improve storage and memory for these use cases by understanding and exploiting their priorities and non-priorities for data." the article

SSD software news
where are we heading with memory intensive systems?
Xitore envisions NVDIMM tiered memory evolution
Editor:- February 7, 2017 - "Cache based NVDIMM architectures will be the predominant interface overtaking NVMe within the next 5-10 years in the race for performance" - is the concluding message of a recent presentation by Doug Fink, Director of Product Marketing - Xitore - Next Generation Persistent Memory Evolution - beyond the NVDIMM-N (pdf)

NVDIMM adoption and evolution paper Xitore

Among other things Doug's slides echo a theme discussed before - which is that new memory media (PCM, ReRAM, 3DXpoint) will have to compete in price and performance terms with flash based alternatives and this will slow down the adoption of the alt nvms.

Editor's comments:- Xitore (like others in the SCM DIMM wars market) is working on NVDIMM form factor based solutions and in this and an earlier paper they provide a useful summary of the classifications in this module category.

However, the wider market picture is that the retiring and retiering DRAM story cuts across form factors with many other permutations of feasible implementation possible.

So - whereas the NVDIMM is a seductively convenient form factor for systems architects to think around - the competitive market for big memory will use anything from SSDs on a chip up to (and including) populations of entire fast rackmount SSD boxes as part of such tiered solutions - if the economics, scale, interface fabric and software make the cost, performance and time to market sums emerge in a viable zone of business risk and doability.

SSD news
storage market research
RAM ain't what it used to be
The industry will learn a lot about the "goodness" of new memory tiering products by stressing them in ways which the original designers never intended.
RAM disk emulations in "flash as RAM" solutions
SSD news / the Top SSD Companies / SSD history

CPUs for use with SSDs in the Post Modernist Era of SSD

Editor:- March 22, 2017 - A new blog on - optimizing CPUs for use with SSDs in the Post Modernist Era of SSD and Memory Systems - was prompted by a question from a startup which is designing new processors for the SSD market. the article

NVMdurance has US patent for Adaptive Flash Tuning

Editor:- March 21, 2017 - NVMdurance today announced that it has been granted US patent 9,569,120 for Adaptive Flash Tuning.

This patent covers NVMdurance's Pathfinder and Navigator software, which discover optimal flash trim sets for the target application and implement a set of optimization techniques that constantly monitor the NAND flash health and autonomically adjust the operating parameters in real time.

Before the flash memory product goes into production, NVMdurance Pathfinder determines multiple sets of viable flash register values, using a custom-built suite of machine-learning techniques. Then, running on the flash memory controller utilized in SSDs or other storage product, NVMdurance Navigator chooses which of these predetermined sets to use for each stage of life to increase the flash memory endurance.
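The staged approach described above can be sketched roughly like this (a hedged illustration only - the stage boundaries and trim set names below are invented placeholders, since the real Pathfinder/Navigator parameter sets are proprietary):

```python
# Hypothetical precomputed register/trim sets, one per wear stage.
# In the real product these would come from offline machine-learning
# characterization of the specific NAND part.
PRECOMPUTED_TRIM_SETS = [
    {"max_pe": 1000,  "trim": "gentle-fresh"},
    {"max_pe": 5000,  "trim": "balanced-midlife"},
    {"max_pe": 10000, "trim": "aggressive-endgame"},
]

def select_trim_set(pe_cycles):
    """Return the first precomputed set whose wear ceiling covers pe_cycles."""
    for stage in PRECOMPUTED_TRIM_SETS:
        if pe_cycles <= stage["max_pe"]:
            return stage["trim"]
    return PRECOMPUTED_TRIM_SETS[-1]["trim"]  # past rated life: keep last set

print(select_trim_set(100))   # gentle-fresh
print(select_trim_set(7000))  # aggressive-endgame
```

The point of the split is visible even in this toy: the expensive search happens offline, and the on-controller logic is just a cheap table lookup driven by wear state.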

Editor's comments:- The things which make NVMdurance's technology processes a viable business model for SSD partners are that the heavyweight processing is done back at HQ as part of the memory characterization and controller modeling - which means that the delivery overhead in each shipped product is lightweight and protects the stakeholder's IP.

And another thing is that no one has come up with any better ideas for a way to roll out a new SSD with new flash memory encapsulated in such a predictable set of algorithmically bounded phases - which reduces the worst risks (of delay and misfire) that come from picking such magic numbers via the organic (human) talent alternatives.

See also:- Adaptive flash care management & DSP IP in SSDs, the limericks of SSD endurance, the 5 stage life cycle budget of extended flash endurance (pdf)

CNEX Labs has amassed $60 million for new SSD controller

VCs in SSDs
Editor:- March 15, 2017 - CNEX Labs today announced its Series C round of financing which brings total funding to date over $60 million. The company will use the funding for mass production and system integration for lead customers of its NVMe-compliant SSD controllers for hyperscale markets. The new controllers will enable full host control over data placement, I/O scheduling, and other application-specific optimizations, in both kernel and user space.

See also:- adaptive intelligence flow symmetry (1 of 11 Key Symmetries in SSD design).

BeSang says 3D Super-DRAM could fix multi-billion dollar money pit of memory industry's fab capacity roadmap

Editor:- March 15, 2017 - Just as we're starting to get used to a world view that memory fabrication capacity may not be enough to make all the memory parts needed - and that a pragmatic global optimization from the user point of view may be to plan ahead for advanced memory systems which use tiering, flash as RAM, freshly minted shiny nvms and new SSD aware software to get more storage and processing done with fewer chips (a journey which - depending who you are - begins or ends with the idea of reducing the ratio of DRAM to storage) - and just as we're getting our heads adjusted to the huge investments which would be needed to make DRAM technology better (and to believe that no sane investor - not even a VC who loves SSDs - would want to toss their money in that direction) - a seemingly different, get out of jail free, alternative is offered in a new blog by Sang-Yun Lee, CEO - BeSang - in EE Times - Why 3D Super-DRAM?.

Among other things Sang says...

"If you consider planar DRAM shrinking from 18nm to 16nm, then, 20% more dice-per-wafer could be achieved. To do so, multi-billion dollar should be invested for R&D and EUV is required. In case of 3D Super-DRAM, it needs less than $50 million for R&D and no EUV; and even so, it could produce 400% more die-per-wafer."

And at the risk of repeating some of that:- 4x as much DRAM from the same fabs without huge investments... How is that possible? the article

Editor's comments:- You can get an idea of the complex decision matrices facing memory makers. In past decades the product types which determined the demand mix for memories (PCs, phones, servers) were few in number and had predictable roadmaps. Now big demands for memory are coming from cloud, IoT and new intelligence based markets which are creating entirely new ratios and rules of what is possible with memory systems.

new edition - the Top SSD Companies

Editor:- March 10, 2017 - we today published the new 39th quarterly edition of the Top SSD Companies.

Hyperstone, NVMdurance and SymbolicIO all made their first appearances in this list.

Although a lot has changed in the past 10 years of tracking future SSD winners in this series, the next wave of disruptive change in memory systems architecture has barely begun. the article

a new name in SSD fabric software

Editor:- March 8, 2017 - A new SSD software company - Excelero - has emerged from stealth today.

Excelero which describes itself as - "a disruptor in software-defined block storage" announced version 1.1 of its NVMesh® Server SAN software "for exceptional Flash performance for web and enterprise applications at any scale."

The company was funded in part by Fusion-io's founder David Flynn.

Editor's comments:- An easy way to understand what this kind of software can do for you is to see how Excelero created a petabyte-scale shared NVMe pool for exploratory computing for an early customer - NASA/Ames. The mitigation of latency and bandwidth penalties enabled by the new environment enabled "compute nodes to access data anywhere within a data set without worrying about locality" and helped to change the way that researchers could interact with the data sets which previously had been constrained in many small islands of low latency. the white paper (pdf).

SSD fabrics - companies and past mentions
NVMe over Fabric and other SSD ideas which defined 2016
Inanimate Power, Speed and Strength Metaphors in SSD brands

Everspin enters NVMe PCIe SSD market

Editor:- March 8, 2017 - Everspin today announced it is sampling its first SSD product - an HHHL NVMe PCIe SSD with up to 4GB of ST-MRAM based on the company's own 256Mb DDR-3 memory.

The new nvNITRO ES2GB has end to end latency of 6µs and supports 2 access modes:- NVMe SSD and memory mapped IO (MMIO).

Everspin says that products for the M.2 and U.2 markets will become available later this year. And so too will higher capacity models using the company's next generation Gb DDR-4 ST-MRAM.

Editor's comments:- Yes - you read the capacity right. That's 4GB not 4TB and certainly not 24TB.

So why would you want a PCIe SSD which offers similar capacity to a flash backed RAM SSD from DDRdrive in 2009? And the new ST-MRAM SSD card also offers worse latency, performance and capacity than a typical hybrid NVDIMM using flash backed DRAM today.

What's the application gap?

The answer I came up with is fast boot time.

If you want a small amount of low latency, randomly accessible persistent memory then ST-MRAM has the advantage (over flash backed DRAM such as you can get from Netlist etc) that the data which was saved on power down doesn't have to be restored from flash into the DRAM - because it's always there.

The boot time advantage of ST-MRAM grows with capacity. And depending on the memory architecture can be on the order of tens of seconds.
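A rough calculation of where those tens of seconds come from (the bandwidth figure is my own illustrative assumption, not Everspin's): a flash backed NVDIMM must copy its entire DRAM image back from flash on power-up, and that restore time scales with capacity:

```python
def restore_seconds(capacity_gb, flash_read_gb_per_s):
    """Time to reload a flash backed DRAM image on power-up.
    ST-MRAM needs none of this - its contents are simply already there."""
    return capacity_gb / flash_read_gb_per_s

# Assuming an illustrative 1 GB/s restore path from the backup flash:
print(restore_seconds(16, 1.0))  # 16.0 seconds for one 16GB module
print(restore_seconds(64, 1.0))  # 64.0 seconds across a larger image
```

So even modest persistent memory capacities put flash-restore boot times into the tens of seconds range - the window which a native NVRAM eliminates entirely.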

So - if you have a system whose reliability, accessibility and performance depend on healing and recovery processes which take into account the boot times of its persistent memory subsystems - then you have the choice of either battery backup (which occupies a large space and maintenance footprint) or a native NVRAM.

The new cards will make it easier for software developers to test persistent RAM tradeoffs in new equipment designs. And also will provide an easy way to evaluate the data integrity of the new memories.

Toshiba was fastest growing SSD vendor in 2016 says IDC

Editor:- March 8, 2017 - The flash business unit of Toshiba - which may be called something different depending when you read this - has announced that its SSD business was the 4th largest by market share and the fastest growing (year on year) in 2016 according to data in a report - Worldwide Solid State Storage Quarterly Update, CY 4Q16 ($40,000) - published recently by IDC.

HP values Nimble at $109K / customer

Editor:- March 7, 2017 - HP today announced it has agreed to acquire Nimble Storage for just over $1 billion.

Kingston ships HHHL NVMe PCIe SSD using Liqid controller

Editor:- March 7, 2017 - Kingston today announced shipments of another new NVMe PCIe SSD based on its partnership with Liqid. The DCP1000 has a Gen. 3.0 x8 interface and delivers up to 6.8GB/s and 6GB/s sequential read / write throughput respectively. The HHHL form factor SSD has up to 3.2TB raw capacity and is rated at under 0.5 DWPD for 5 years.
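As a quick check of what that endurance rating implies (my arithmetic on the numbers quoted above, not a figure from Kingston's datasheet):

```python
def total_writes_tb(capacity_tb, dwpd, years):
    """Total terabytes written implied by a drive-writes-per-day rating."""
    return capacity_tb * dwpd * 365 * years

# 0.5 drive writes/day on 3.2TB raw over 5 years is roughly 2.9 petabytes.
print(total_writes_tb(3.2, 0.5, 5))  # about 2920 TB
```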

would you buy an NVMe SSD array for $1 million a pop?

Editor:- March 3, 2017 - Dell EMC has end of lifed the DSSD product line (an NVMe array and one of the fastest SSD systems in the market) and the storyline discussed is the mismatch between Dell's high volume commodity business and this niche HPC storage box.

The warm up to such an ending came in a news story in December 2016 by the Register which revealed the $1 billion gap between the cost of acquiring and developing the product and its sales ($6 million).

Editor's comments:- In the short term this is good news for IBM's FlashSystem which is the most mature storage product line in this class.

And it's good news for startups and other specialist SSD companies which engage with the high performance end of the market.

One problem with the DSSD product line, I guess, is that the market which it might have been aimed at 3 years ago doesn't exist any more.

Most computer companies who would be looking for HPC storage of the NVMe array variety are easily able to produce such systems from a competitive market of 2.5" NVMe SSDs. So why pay a premium to EMC or anyone else?

But a more deep rooted problem is that the DSSD is an old fashioned systems designer's prototype implementation of a modern persistent memory box. And the nvm memory changes in recent years (in cell technology and controllernomics tiering) make the design about as useful as a TTL minicomputer competing with an NMOS microprocessor.

No matter how much cooling or SRAM you pack into a card - the cheapest place to solve latency problems is in the semiconductor chip itself before the data hits the external brake pads of the physical interface to copper.

We're going to see a lot of different permutations of big memory coming into the market. Generally the smaller the box and the closer it is to the applications processor the less waste there is in intersystems latency.

The DSSD approach has been blown away by commodity arrays at the low end of its performance range and by genuine memory systems technology advances at the high end.

Storage systems thinking can't compete for performance with semiconductor integrated memory systems architecture.

And here's another angle...

If you were looking for a low cost, ultra fast compute box to work as a companion to DSSD class storage - Symbolic IO has its own way to do it and can do it faster with less hardware.

Parallel Machines gets patent for recovering distributed memory

Editor:- March 2, 2017 - Parallel Machines has been assigned a patent related to healing broken data in shared memory pools according to a recent news report.

See also:- data recovery, SSD data integrity, security risks in persistent memory, fault tolerant SSDs

OpenMP is 20

Editor:- March 2, 2017 - The OpenMP ARB (Architecture Review Board) today announced the 20th anniversary of its incorporation. Since its advent in 1997, the OpenMP programming model has proved to be a key driver behind parallel programming for shared-memory architectures.

Editor's comments:- The way I remember it - the commoditization of enterprise grade multiprocessor architecture came a little time before that and was inspired by a company called Solbourne Computer which operated in the SPARC systems market.

Micron's SSDs tough enough for army use

Editor:- February 25, 2017 - Micron isn't a name that would spring to mind when thinking about military SSDs. Which is why I found a new applications white paper from Micron interesting.

Micron's IT SSDs withstand DRS' Toughest Tests (pdf) describes how DRS (which is a military SSD company) requalified an industrial SSD - the M500IT - which had originally been designed for the automotive market so that it could be used in a large Army program. the article (pdf)

Symbolic IO reveals more

Editor:- February 25, 2017 - Symbolic IO is a company of interest which I listed in my blog - 4 shining companies showing the way ahead - but until this week they haven't revealed much publicly about their technology.

Now you can read details in a new blog - Symbolic IO reveals tech - written by Chris Mellor at the Register who saw a demo system at an event in London.

As previously reported a key feature of the technology is that data is coded into a compact form - effectively a series of instructions for how to create it - with operations using a large persistent memory (supercap protected RAM).

Among other things Chris reports that the demo system had 160GB of raw, effectively persistent memory capacity - which yielded, with coding compression, an effective (usable) memory capacity of 1.79TB.
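Those demo figures imply a compaction ratio of a bit over 11x - a quick sanity check (treating GB and TB as decimal units):

```python
# Effective/raw capacity ratio implied by the demo figures quoted above.
raw_gb = 160
effective_tb = 1.79
ratio = (effective_tb * 1000) / raw_gb  # decimal TB -> GB
print(round(ratio, 1))  # 11.2 - i.e. roughly 11x compaction
```

As with any compression-derived capacity claim, the achievable ratio will vary with the data - which is worth remembering when comparing against the raw capacities of conventional systems.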

Security in the system is rooted in the fact that each system evolves its own set of replacement codes computed on the fly and held in persistent memory - without which the raw data is meaningless. A security sensor module located in a slot in the rack ("the Eye") can erase the data relationship codes based on GPS and other boundary conditions being crossed (as in some fast purge SSDs). the article

Editor's comments:- The data compaction and therefore CPU utilization claims do seem credible - although the gains are likely to be applications dependent.

Throughout the data computing industry smart people are going back to first principles and tackling the embedded problems of inefficiencies and lack of intelligence which are buried in the way that data is stored and moved. The scope for improvement in CPU and storage utilization was discussed in my 2013 article - meet Ken - and the enterprise SSD software event horizon.

The potential for improvement is everywhere - not just in pre SSD era systems. For example Radian is picking away at inefficiencies caused within regular flash SSDs themselves by stripping away the FTL. Tachyum is aiming to push the limits of processing with new silicon aimed at memory centric systems. For a bigger list of companies pushing away at datasystems limitations you'd have to read the SSD news archive for the past year or so.

But all new approaches have risks.

I think the particular risks with Symbolic IO's architecture are these:-
  • Unknown vulnerability to data corruption in the code tables.

    Partly this would be like having an encrypted system in which the keys have been lost - but the impact would be multiplied by the fact that each raw piece of data has higher value (due to compacting).

    Conventional systems leverage decades of experience of data healing knowhow (and data recovery).

    We don't know enough about the internal resiliency architecture in Symbolic IO's design.

    It's reasonable to assume that there is something there. But all companies can make mistakes - as we saw in server architecture with Sun's cache memory problem and in storage architecture when Cisco discovered common mode failure vulnerabilities in WhipTail's "high availability" flash arrays.
  • Difficult to quantify risk of "false positive" shutdowns from the security system.

    This is a risk factor which I have written about in the context of the fast purge SSD market. Again this is a reliability architecture issue.
I expect that Symbolic will be saying much more about its reliability and data corruption sensitivities during the next few years. In any case - Symbolic's investment in its new data architecture will make us all rethink the bounds of what is possible from plain hardware.
What happened before?



Michelangelo found David inside a rock.
Megabyte was looking for a solid state disk.
(see the original 1998 larger image)
"The MLB Network uses Tegile flash storage in their post-production environment. During the regular season, they need to record all of the games and produce content for shows like MLB Tonight, The Rundown, Intentional Talk, MLB Now, and Quick Pitch, which focus on the day's activities and give a snapshot of what's going on around the league. In the off-season, they produce ... other programming that goes behind the daily game and into more of the storytelling about baseball. That's over 500,000 hours of digital content!"
Brandon Farris, Director of Marketing, Tegile Systems in his blog - Flash Storage Goes to Hollywood (March 7, 2017)
if your cloud leveraged service is down - it's your fault
Editor:- March 7, 2017 - "If your business leverages AWS, and you had an outage or degraded operation during the massive AWS outage last week, you can only blame yourself" - says Erez Ofer, Partner at 83North in his new blog - No Excuses.

Among other things Erez Ofer says - "What is happening now after each cloud outage is a lot of learning by businesses on how to create systems that don't go down." the article

See also:- high availability SSD stories
after AFAs? - the next box
Throughout the history of the data storage market we've always expected the capacity of enterprise user memory systems to be much smaller than the capacity of all the other attached storage in the same data processing environment.

A new blog on - cloud adapted memory systems - asks (among other things) if this will always be true.

Like many of you - I've been thinking a lot about the evolution of memory technologies and data architectures in the past year. I wasn't sure when would be the best time to share my thoughts about this one. But the timing seems right now. the article


what does "serverless software" really mean?
Editor:- February 20, 2017 - Did you want a side of SLBS (server less BS) with your software or hardware FUD? - is the title of an amusing new blog by Greg Schulz, founder of StorageIO.

Editor's comments:- I'm not going to quote from Greg's blog. To see what he says you'll just have to read it.

At one and the same time it provides a funny and cuttingly serious analysis of what happens when marketers stray too far off the edge - reminiscent of scenes in Looney Tunes.

See also:- Marketing Views

"We are at a junction point where we have to evolve the architecture of the last 20-30 years. We can't design for a workload so huge and diverse. It's not clear what part of it runs on any one machine. How do you know what to optimize? Past benchmarks are completely irrelevant."
Kushagra Vaid, Distinguished Engineer, Azure Infrastructure - quoted in a blog by Rambus - Designing new memory tiers for the data center (February 21, 2017)

RAM has changed from being tied to a physical component to being a virtualized systems software idea - and the concept of RAM even stretches to a multi-cabinet memory fabric.
what's RAM really? - RAM in an SSD context

All the marketing noise coming from the DIMM wars market (flash as RAM and Optane etc) obscures some important underlying strategic and philosophical questions about the future of SSD.
where are we heading with memory intensive systems?

I think it's not too strong to say that the enterprise PCIe SSD market (as we once knew it) has exploded and fragmented into many different directions.
what's changed in enterprise PCIe SSD?
The same memory block may have different ECC codes wrapped around it at different times in its operating life - depending how healthy it looks. And different ECC codes may be used within the same memory chip at the same time.
Adaptive flash care management & DSP IP in SSDs