|"At the technology
level, the systems we are building through continued evolution are not advancing
fast enough to keep up with new workloads and use cases. The reality is that
the machines we have today were architected 5 years ago, and ML/DL/AI uses in
business are just coming to light,
industry missed a need."|
|From the blog -
Memory Centric Architecture by Robert Hormuth,
VP/Fellow and Server CTO - Dell
EMC (January 26, 2017)|
competing semiconductor approaches compared
Editor:- January 10, 2017 - In a new video - Storage Class Memory - Reality, Opportunity, and Competition - Sang-Yun Lee, CEO - BeSang presents his SWOT analysis of the state of the technology market.
Among other things Sang-Yun Lee (whose company offers 3D super-NOR as an alternative competing SSD and SCM technology platform) notes the weaknesses of some competing approaches:
- when looking at cross-point structure memories (such as Micron's 3DXpoint) - "is the worst nightmare for manufacturing"
- when looking at NVDIMM-P (such as Diablo's Memory 1) - "performance is not predictable at all times"
Editor's comments:- In some significant areas I disagree with the finality of some of Mr. Lee's conclusions.
For example I think that changes in system aware software can improve the usability of nand flash as DRAM. This is because I think the applications experience leans more heavily on the elastic behavior of the entire "virtual" memory system as a working model, rather than transmitting every bump in the road from the native physical memory (even when that memory is DRAM), for reasons discussed previously.
Also when it comes to concerns about the endurance of nand flash when used in DRAM emulation - I am satisfied that, due to the variability of DRAM data churn (which follows a time and fractional data change pattern in real applications - rather than all DRAM contents being equally turbulent), and provided that the emulated data capacity is big enough (and supported by a suitably sized RAM cache ratio), flash endurance is good enough.
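The endurance argument lends itself to a back-of-envelope check. The sketch below uses entirely illustrative numbers (the capacity, churn fraction, P/E rating and write amplification are my own assumptions, not figures from any vendor) to estimate how long flash behind a DRAM emulation layer would last if only a fraction of the contents churns each day and wear leveling spreads the writes evenly:

```python
# Back-of-envelope flash lifetime under partial "DRAM" churn.
# All inputs are illustrative assumptions, not vendor data.

def endurance_years(capacity_gb, churn_fraction_per_day, pe_cycles,
                    write_amplification=2.0):
    """Years until the flash reaches its rated program/erase cycles,
    assuming wear leveling spreads churn evenly across the device."""
    daily_writes_gb = capacity_gb * churn_fraction_per_day * write_amplification
    total_writable_gb = capacity_gb * pe_cycles
    return total_writable_gb / daily_writes_gb / 365

# e.g. 1 TB of flash-backed "memory", 20% of contents rewritten per day,
# 3K P/E cycle TLC nand, 2x write amplification:
print(round(endurance_years(1024, 0.20, 3000), 1))  # ~20.5 years
```

The point of the arithmetic: as long as only a modest fraction of the emulated memory is turbulent, the rated cycles stretch over many years - which is why churn variability, not raw capacity, dominates the endurance question.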
On the other hand - because SCM applications currently require a great deal of characterization and testing, and may require proprietary tuning - and because of the nature of risk / reward attitudes in the enterprise user base (which we already see when it comes to memory systems) - I expect fragmentation will occur in SCM adoption.
On the one hand there will be those who are
satisfied with the risks posed by software enhanced DRAM emulation (because
they have the technical resources to assess the risks and have applications
which match the software supported by early SCM solutions).
On the other hand there will be many who prefer to wait to get solutions which
rely more on native hardware and rely less on the magic promised by new
software data architectures.
When memory technologies change then
systems designers have to invest learning time to understand the implications of
competing offers. And whatever your background Sang-Yun Lee's presentation
will get you thinking about many important comparative technology issues
...see the video
See also:- the SSD heresies
A3CUBE and memory fabrics
Editor:- January 10, 2017 - When A3CUBE started talking about supporting big memory fabrics with PCIe there weren't too many choices out there.
Now in 2017 the SSD and SCM news pages are awash with announcements about big memory systems. And growing industry support for NVMe over Fabric was one of the big market developments in 2016. We're already seeing signs of clear fragmentation in the memory fabric market (mostly via server based interface expansion preferences such as GbE - but some of the memory applications are also being cannibalized by tiered memory, new semiconductor memory solutions and DIMM wars).
In this context it was interesting to
see a recent video
(January 2017) from A3CUBE which shows how their PCIe connected shared memory
fabric can work with NVMe components too. ...see the video
|"IoT storage must be
distributed. You can't think about a single storage device but, on the
contrary, a multitude of devices with a small amount of storage can easily be
part of a large distributed storage system.
It's a compelling idea but this approach has its challenges. Thousands
of nodes for just hundreds of terabytes of storage?
It means massive
scalability, a lot of node rebalancing when a node disappears, complex node
discovery and management that could impact performance."
|Interesting ideas from the blog -
ready for the post cloud era - by OpenIO.
(January 10, 2017)|
re Violin Memory
January 30, 2017 - A report on LAW360 provides an interim update on the bankruptcy auction for Violin Memory and says Violin's assets were valued at "at least $14.5 million".
Editor's comments:- This is a humbling end for a company whose CEO said 6 years ago that he hoped to build a billion dollar company.
You can read more about Violin's past - the highs and lows and what was said at the time - in the history sections of Violin's company profile page here on StorageSearch.com.
Toshiba's semiconductor business rescues company
January 25, 2017 - Toshiba
was featured in the mainstream news media last week due to losses by its US
nuclear power business which had been responsible for halving the value of
shares in the company.
News that Toshiba's financial fix centered around selling part of its
semiconductor business to Western Digital got
a good reaction from markets.
Datrium celebrates one year of NVMe flash difference in its
open converged platform
Editor:- January 24, 2017 - A recent release from Datrium - celebrating one year of supporting NVMe SSDs within its high availability open convergence server storage (software) (pdf) - discusses the bottlenecks which are inherent in legacy rooted storage architectures in AFAs implemented with SAS or SATA SSDs, in comparison to native NVMe SSDs.
"The benefit of NVMe drives - blistering performance - is unavailable on most storage arrays today for two reasons. First, an array or hyperconverged design cycle can only adopt new drive connectivity approaches at a certain rate. As a rigid, composed system, it takes time. Second, successful flash array vendors depend on data reduction to optimize pricing. This means the controller CPU must filter data inline, which adds delay. The benefits of NVMe are subsequently small because the benefits over SAS links are bottlenecked by CPU."
Editor's comments:- The message of
the company seems to be that whereas modern flash storage systems undeniably
have done a great job at reducing infrastructure costs (compared to old style
HDD systems) there is still much more performance and utilization which can be
extracted from COTS servers and SSDs when they're working in a modern
architecture with modern software. See their
2 minute video for the key claimed benefits.
The extent of this next level up in performance,
utilization and efficiency (as an industry aspiration) was part of what I was
hinting at in my 2013 article -
meet Ken - and the
enterprise SSD software event horizon.
Primary Data gets ready to expand sales
January 20, 2017 - Primary Data announced that Robert Wilson has joined the company as its new Head of Sales. He previously held VP level sales roles elsewhere in the industry.
Wilson said - "With flash and cloud storage now common,
and only so much innovation ahead in appliances, many in the storage industry
are wondering what technology breakthrough is coming next."
NVMe now, NVDIMM coming - says Web-Feet Research
January 16, 2017 - Web-Feet Research announced it has released the 12th annual edition of its SSD market reports - SSD Markets and Applications 2017 ($5,500) - which concentrates on the enterprise market with the emergence of PCIe / NVMe (including M.2) and NVDIMM SSDs, as well as quantifying the client and commercial markets. It addresses the difficulty of migrating from HDD to SSD and to Hybrid and All Flash storage systems while advancing the 'intelligent processing' of memory and storage.
The CEO of Web-Feet Research says (among other things)...
"With the adoption of new interfaces like NVMe, and embracing emerging media like 3D NAND and Persistent Memory (ReRAM / XPoint), the industry is undergoing a transformation. The old computing storage model is unable to keep up with the amount of data needing to be stored. It needs to merge storage into memory in order to process in real time the more complex data, diverse data types and much higher volumes of data anticipated through 2021.
"Even though NAND-Flash based SSDs perform at several orders of
magnitude higher than hard-disk-drives
they suffer from the same non-deterministic inadequacies as compared to
solutions based on XPoint memory. Flash memory based SSDs suffer the
indeterminate delay inserted by the
Translation Layer in its
production of the of
physical block addresses. This is one reason that All-Flash-Array architects
like Pure Storage
desire additional physical control at the FTL. Additional storage system
functions such as deduplication, compression, error coding, power-on fill, data
recovery ops, check pointing, and scrubbing are further accelerated by this
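The FTL delay the quote refers to can be illustrated with a toy model (entirely hypothetical, with no relation to any real controller): logical addresses map to physical pages, every write goes out-of-place, and when clean pages run out a garbage-collection pass stalls the request - which is where the indeterminacy comes from.

```python
# Toy Flash Translation Layer: writes go out-of-place, and when free
# pages run low a garbage-collection pass stalls the request.

class TinyFTL:
    def __init__(self, physical_pages):
        self.map = {}                      # logical block -> physical page
        self.free = list(range(physical_pages))
        self.stale = set()                 # pages holding superseded data

    def write(self, lba):
        gc_stall = False
        if not self.free:                  # out of clean pages:
            gc_stall = True                # reclaim stale ones (slow path)
            self.free = sorted(self.stale)
            self.stale = set()
        if lba in self.map:
            self.stale.add(self.map[lba])  # old copy becomes stale
        self.map[lba] = self.free.pop()
        return gc_stall                    # True => this write was slow

ftl = TinyFTL(physical_pages=4)
latencies = [ftl.write(lba) for lba in [0, 1, 0, 1, 0, 1]]
print(latencies)  # [False, False, False, False, True, False]
```

Identical writes from the host's point of view, yet the fifth one pays a garbage-collection penalty the host cannot predict - the "indeterminate delay" in miniature.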
Crossbar samples 8Mb ReRAM
Editor:- January 12, 2017
- A report in EE Times Europe - ReRAM in production at SMIC - says that Crossbar is sampling 8Mb ReRAM (its byte writable alt nvm) with R/W latencies of about 20ns and 12ns respectively and endurance north of 100K cycles.
The 8Mb chips use 40nm CMOS processing and the company plans to offer its nvm IP as cores which can be integrated into SoCs so as to make best use of the low latency.
Crossbar told EE Times Europe that the early customers would be characterizing the new memory and assessing its reliability. This is an important hurdle for any new memory technology to cross before designers can have the confidence to integrate it into commercial products. ...read the article
NVDIMM market report
Editor:- January 11, 2017 - The
NVDIMM market is estimated to grow at 64% CAGR over the course of 2016 to
2020 according to 9Dimen
Research who recently published a report
Global NVDIMM Industry
2016, Trends and Forecast Report ($2,850, 153 pages).
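As a quick sanity check on what that headline number implies - a 64% CAGR compounded over the four year-on-year steps from 2016 to 2020 multiplies the market roughly sevenfold:

```python
# What a 64% CAGR over 2016-2020 implies: the market multiplies
# by 1.64 for each of four year-on-year steps.
cagr = 0.64
years = 4  # 2016 -> 2020
multiple = (1 + cagr) ** years
print(round(multiple, 1))  # ~7.2x growth over the period
```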
See also:- who's who in storage market research?
Another $75 million to support Kaminario's business outlook
January 10, 2017 - Kaminario announced it has secured $75 million in a new round of financing, bringing the company's total funding to $218 million.
The company - which is privately owned and doesn't disclose revenue - says "Hundreds of customers rely on Kaminario K2 to power their mission critical applications and safeguard their digital ecosystem."
Kaminario has changed the internal make up of its flash drives (form
factors, interfaces and components) in its arrays many times and has said in
the past that its systems are based on an SDS model.
Today's news of continuing investment in the company seems to be a bet that - whatever the enterprise memory systems market of the future might look like - any vendor which can grow its sales through multiple transitions of raw technology uncertainty is valuable.
In June 2012 - when writing about Kaminario's long range philosophy about the SSD market - I said they were a rare example of a systems company which had good roadmap symmetry (having an architecture and software which was not closely tied to the advantages of any particular memory type or SSD form factor - but which could plausibly leverage future market improvements in SSDs with smaller bumps than vendors who had over optimized their systems to leverage transient technology advantages).
This is as much about choosing the
applications and segments as designing the business plan. Because some
customer segments are so price and performance sensitive that only well
adapted memory systems can compete and sell in such applications.
It always comes back to adapted memory systems in the end.
Foremay fires patent warning post about flash data destruct
Editor:- January 10, 2017 - If you're seriously interested
in data security in SSDs you'll already know that encryption is simply a
promise to delay access to secured data rather than a guarantee that it will
remain denied to those who shouldn't see it. That's why the
SSD fast purge /
autonomous data destruct / fast secure erase market has developed so many
ingenious ways to offer better security assurance - which you can pick to
match your deployment's time to erase, electrical power to erase and
monetary cost budget.
A recent post on LinkedIn by Dennis Eodice, VP Strategic Sales - Foremay, says the company has a patent for a technique which physically destroys the nand flash in an SSD using addressably directed high voltage.
The implied message is that if any other companies have used similar techniques to secure SSDs sold in other regions - Foremay thinks this patent is enforceable to prevent the technique being used in competing SSDs sold in the US.
How do banks use big memory systems to detect fraud?
January 9, 2017 - In the early 2000s I started hearing stories from vendors
of ultrafast SSDs
about how their fast memory systems were helping banks to not only ease the choke points in their transactions but also provide insights into fraud.
A new white paper from GridGain Systems provides a good introduction and synthesis of the various roles of in-memory computing in accelerating financial fraud detection and prevention (pdf) which includes many named bank examples.
The paper describes how in-memory computing provides the low latency data sharing backbone which is needed to enable pattern detection for fraudulent activity to be assessed in real-time, while at the same time enabling genuine transactions to proceed quickly.
Among other things, the paper says...
"The move from disk to memory is a key factor in improving performance. However,
simply moving to memory is not sufficient to guarantee the extremely high memory
processing speeds needed at the enterprise level... Clients who have implemented
the GridGain In-Memory Data Fabric to detect and prevent fraud in their
transactions have found that they can process those transactions about 1,000
times faster." ...read
the article (pdf)
NVMdurance names new CMO
Editor:- January 9, 2017 - NVMdurance today announced the appointment of a new Chief Marketing Officer.
Kevin, who had previously been Director of Marketing at Micron, said - "We
are at the tip of the iceberg of the flash disruption in storage, and NVMdurance
is providing unique solutions to enable this."
BCC predicts $850 million market for carbon based NRAM in 2023
January 9, 2017 - BCC Research has published a report - NRAM Creating Market Volatility? - which among other things predicts the size of the NRAM market based on technology developed by Nantero.
In the preamble BCC says...
"Can you give us a small peek at why
NRAM will hold the advantage vs. Flash, SRAM and DRAM in the coming years? -
The key word is breakthrough. With NRAM we depart the world of silicon and
embrace cell phones, laptops and even an internet, that is increasingly going to
become carbon based organisms. Smaller components that work faster but require
less energy are absolute winners."
See also:- flash and alt nvms
SSD article in Military and Aerospace Electronics
January 4, 2017 - If you're interested in
military SSDs and
SSD companies then a recent article - solid-state media driving data storage - in the December edition of Military and Aerospace Electronics includes comments from various companies in the market. Among other things the article says...
"...all aerospace and defense data storage for deployed applications have moved to solid state memory." ...read the article
See also:- Value Propositions for buying SSDs (2005)
Persistent Memory Summit - what's coming?
January 3, 2017 - This is usually the time of year we'd be looking out for
announcements from the Storage
Visions conference - which for 15 years had been co-sited with
CES. That link has now been cut with
Storage Visions 2017 now taking place in October.
That makes sense when you consider that most of the innovations in the storage market have been coming from the cloud and enterprise markets rather than the consumer market.
If you're looking for a flagship event in January - consider
instead the 2017 Persistent Memory
Summit - (January 18, 2017, San Jose, CA).
Among the topics as
you might expect from any standards
org event (this event being run by
SNIA) will be a
presentation called - Rethinking Benchmarks for Non-Volatile Memory Storage.
The need for a "goodness" or "aptness"
standard for non DRAM based
memory systems and components is something I discussed in an earlier news blog - is it realistic to talk
about memory IOPS? (August 2016).
At the root of such a new standard will be this question: how do you get agreement on latency zoning for different zones of temporary data?
I think a key factor
on the usability of big memory emulation (even in a flattened latency world
where all storage is solid state) is that different parts of the memory contents
change more often than others. So even after you've figured out the best ways
to cache and tier the memory systems - the experience of memory is still very
application and infrastructure dependent.
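That application dependence shows up directly in the classic average-access-time formula: put a DRAM cache in front of a slower persistent tier and the experienced latency swings on the hit ratio - which in turn depends on how skewed the workload's accesses are. The latencies and hit ratios below are illustrative assumptions only, not measurements of any product:

```python
# Why "memory IOPS" depends on the workload: with a DRAM cache in
# front of slower persistent memory, the average access time hinges
# on the hit ratio, and the hit ratio hinges on workload skew.
# All numbers here are illustrative assumptions.

def avg_access_ns(hit_ratio, dram_ns=100, pmem_ns=1500):
    """Average memory access time: hits served from DRAM,
    misses from the slower persistent tier."""
    return hit_ratio * dram_ns + (1 - hit_ratio) * pmem_ns

for name, hit in [("hot-spot workload", 0.99),
                  ("moderately skewed", 0.90),
                  ("uniform scan", 0.50)]:
    print(f"{name}: {avg_access_ns(hit):.0f} ns")
```

The same hardware delivers anywhere from near-DRAM speed to several times slower depending purely on the application's access pattern - which is why a single benchmark number for such systems is so hard to agree on.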
My guess is that if new
standards for memory systems benchmarks do emerge then they will follow the same
patterns of abuse that we've seen in the past with
MFLOPS and SPECint/fp/marks - and that will give the tech writers something to write about for years.
What happened before? - the next box
Editor:- January 24, 2017 - Throughout the history of the data storage market we've always expected the capacity of enterprise user memory systems to be much smaller than the capacity of all the other attached storage in the same data processing environment.
A new blog on the home page of StorageSearch.com - adapted memory systems - asks (among other things) if this will always be so.
Like many of you - I've been thinking a lot about the
evolution of memory technologies and data architectures in the past year. I
wasn't sure when would be the best time to share my thoughts about this one.
But the timing seems right now. ...read the article
|"...flash has been
doing a great job for almost a decade now. Piece by piece it's been whittling
away high end hard drive sales. This has had the domino effect of reducing both
R&D spend by HDD manufacturers as well as investment in production
facilities. It is generally accepted that 2017 will be the last year 15K RPM
drives are produced, and that 2018 will probably see the last of the 10K RPM
|Says Trevor Pott in his blog
Looming Storage Crisis - on VirtualizationReview.com
(January 31, 2017). |
Trevor has a column - Cranky Admin - where in another recent blog - the RAM Bottleneck - he discusses software methods - supported in hypervisors - which can reduce the amount of physical RAM needed.
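One family of such software methods is page deduplication (in the spirit of Linux's kernel same-page merging): guest pages with identical content are stored once and shared, so identical zero pages or common library pages across VMs cost physical RAM only once. A toy sketch with made-up page contents:

```python
# Toy page deduplication: pages with identical content share one
# physical copy. Page contents below are made up for illustration.

def dedup_pages(pages):
    """Return the unique-page store plus per-page keys into it."""
    store = {}            # content hash -> one shared physical copy
    refs = []             # what each logical page points at
    for page in pages:
        key = hash(page)  # real implementations also compare bytes
        store.setdefault(key, page)
        refs.append(key)
    return store, refs

# 100 identical zero pages plus 10 distinct 4 KB pages:
pages = [b"\x00" * 4096] * 100 + [bytes([i]) * 4096 for i in range(1, 11)]
store, refs = dedup_pages(pages)
print(len(pages), "->", len(store))  # 110 logical -> 11 physical pages
```

The win is entirely workload dependent: many near-identical VMs deduplicate dramatically, while encrypted or already-compressed memory barely deduplicates at all.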
RAM has changed from being tied to a physical component to being a virtualized systems software idea - and the concept of RAM even stretches to a multi-cabinet memory fabric.
RAM really? - RAM in an SSD context
I think it's not too strong to say that the enterprise PCIe SSD market (as we once knew it) has exploded and fragmented into many different directions.
what's changed in the enterprise PCIe SSD market