|Getting acquainted with the
needs of new big data apps |
|Editor:- February 13, 2017 - The nature of
demands on storage and big memory systems has been changing.|
A new article on new storage applications by Nisha Talagala,
VP Engineering at Parallel
Machines provides a strategic overview of the raw characteristics of
dataflows which occur in new apps which involve advanced analytics,
machine learning and deep learning.
It describes how these new
trends differ from legacy enterprise storage patterns and discusses the
convergence of RDBMS and analytics towards continuous streams of enquiries.
And it shows why and where such new demands can only be satisfied by large
capacity persistent memory systems.
|Among the many interesting observations:-
- Quality of service is different in the new apps.
- Purely random access is rare. Instead the data access patterns are heavily patterned and initiated by
operations on some sort of array or matrix.
- Correctness is hard to measure. And determinism and repeatability
are not always present for streaming data - because, for example, micro batch
processing can produce different results depending on arrival time versus event
time. (Computing the right answer too late is the wrong answer.)
The article concludes "Opportunities exist to significantly improve storage and memory
for these use cases by understanding and exploiting their priorities and
non-priorities for data." ...read the article|
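The arrival-time versus event-time point can be made concrete with a toy micro batch sketch. This is illustrative Python only - not the behavior of any particular streaming engine - and the event values and window width are invented for the example:

```python
# Toy illustration: micro batching by arrival time vs event time.
# A late-arriving event lands in a different micro batch than the
# window it logically belongs to - so the counts differ.

from collections import Counter

events = [
    {"event_time": 0, "arrival_time": 1},
    {"event_time": 1, "arrival_time": 2},
    {"event_time": 2, "arrival_time": 9},  # late arrival
    {"event_time": 3, "arrival_time": 4},
]

def window_counts(events, key, width=5):
    """Count events per window of `width` time units, keyed by `key`."""
    return dict(Counter(e[key] // width for e in events))

print(window_counts(events, "event_time"))    # {0: 4} - all in window 0
print(window_counts(events, "arrival_time"))  # {0: 3, 1: 1} - one spills over
```

The same four events produce different per-window answers depending on which timestamp drives the batching - which is exactly why repeatability is not guaranteed for streaming data.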
|Xitore envisions NVDIMM
tiered memory evolution |
|Editor:- February 7, 2017 - "Cache
based NVDIMM architectures will be the predominant interface overtaking NVMe
within the next 5-10 years in the race for performance" - is the concluding
message of a recent presentation by Doug Fink, Director of
Product Marketing - Xitore - Next
Generation Persistent Memory Evolution - beyond the NVDIMM-N (pdf)|
Among other things Doug's slides echo a theme discussed
before - which is that
new memory media (PCM, ReRAM, 3DXpoint) will have to compete in price and
performance terms with flash based alternatives and this will slow down the
adoption of the alt nvms.
Editor's comments:- Xitore (like others in the
SCM DIMM wars
market) is working on NVDIMM form factor based solutions and in this
presentation they provide a useful summary of the classifications in this module category.
However, the wider market picture is that the retiring and
retiering DRAM story cuts
across form factors with many other permutations of feasible implementation.
So - whereas the NVDIMM is a seductively convenient form
factor for systems architects to think around - the competitive market for big
memory will use anything from SSDs on a chip up to (and including) populations of
entire fast rackmount SSD boxes as part of such tiered solutions - if the
economics, scale, interface fabric and
software make the
cost, performance and time to market sums emerge in a viable zone of business
risk and doability.
|"We are morphing from
a storage hierarchy to a memory hierarchy. This is why I choose to work where I
do. Memory rules."|
|Rob Peglar, Senior VP & CTO, Symbolic IO in a post on
LinkedIn (February 2, 2017).|
|Storage Class Memory - competing semiconductor approaches compared |
|Editor:- January 10, 2017 - In a new video
Storage Class Memory -
Reality, Opportunity, and Competition - Sang-Yun Lee,
CEO - BeSang presents his analysis
of the technology SWOT state of the market.|
Among other things Sang-Yun Lee (whose company offers 3D super-NOR as an alternative
competing SSD and SCM technology platform) notes the weaknesses of some
competitors:-
- when looking at cross-point structure memories (such as
Micron's 3DXpoint) - it "is
the worst nightmare for manufacturing"
- when looking at NVDIMM-P (such as
Diablo's Memory 1) - "performance
is not predictable at all times"
Editor's comments:- In
some significant areas I disagree with the finality of some of Mr. Lee's
conclusions.
For example I think that changes in system aware
software can improve the usability of nand flash as DRAM. This is because I
think the applications experience leans more heavily towards the elastic
behavior of the entire "virtual" memory system as a working model
rather than transmitting every bump in the road from the native physical memory
(even when that memory is DRAM) for reasons discussed before.
Also when it comes to concerns about the endurance of
nand flash when used in DRAM emulation - I am satisfied that, due to the
variability of DRAM data churn (which follows a time and fractional data change
pattern in real applications - rather than all DRAM contents being equally
turbulent), and provided that the emulated data capacity is big enough (and
supported by a suitably sized
RAM cache ratio),
flash endurance is good enough for reasons discussed before.
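To see why churn matters, here's a hedged back of envelope sketch. All parameter values below are illustrative assumptions invented for the example - not measurements from any product:

```python
# Back of envelope lifetime estimate for flash emulating DRAM.
# Key idea: only a fraction of the emulated memory is "turbulent"
# (rewritten often) - so total daily writes are far below worst case.

def flash_lifetime_years(capacity_gb, pe_cycles, churned_fraction,
                         rewrites_per_day, write_amplification):
    daily_writes_gb = (capacity_gb * churned_fraction *
                       rewrites_per_day * write_amplification)
    total_write_budget_gb = capacity_gb * pe_cycles
    return total_write_budget_gb / daily_writes_gb / 365

# 2TB emulated capacity, 3,000 P/E cycle flash, 10% of contents
# rewritten 10x a day, 2x write amplification:
print(round(flash_lifetime_years(2000, 3000, 0.10, 10, 2.0), 1))  # 4.1
```

With all-contents-equally-turbulent assumptions the same arithmetic collapses to months - which is why the fractional churn observation carries the argument.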
On the other hand - due to the fact that SCM applications currently require a
great deal of characterization and testing and may require proprietary tuning -
and due to the
nature of risk / reward attitudes in the enterprise user base (which we
already see when it comes to memory systems) - I expect fragmentation will
occur in SCM adoption.
On the one hand there will be those who are
satisfied with the risks posed by software enhanced DRAM emulation (because
they have the technical resources to assess the risks and have applications
which match the software supported by early SCM solutions).
On the other hand there will be many who prefer to wait to get solutions which
rely more on native hardware and rely less on the magic promised by new
software data architectures.
When memory technologies change then
systems designers have to invest learning time to understand the implications of
competing offers. And whatever your background Sang-Yun Lee's presentation
will get you thinking about many important comparative technology issues.
...see the video
See also:- the SSD heresies
|A3CUBE and memory fabrics|
|Editor:- January 10, 2017 - When A3CUBE first started talking
about supporting big memory fabrics with PCIe there weren't too many
choices out there. |
Now in 2017 the
SSD and SCM news pages are
awash with announcements about big memory systems. And growing industry support
for NVMe over Fabric was one of the big market developments of the past year. We're
already seeing signs of clear fragmentation in the memory fabric market
(mostly via server based interface expansion preferences such as
GbE - but some of the memory
applications are also being cannibalized by tiered memory, new semiconductor
memory solutions and DIMM wars).
In this context it was interesting to
see a recent video
(January 2017) from A3CUBE which shows how their PCIe connected shared memory
fabric can work with NVMe components too. ...see the video
|"IoT storage must be
distributed. You can't think about a single storage device but, on the
contrary, a multitude of devices with a small amount of storage can easily be
part of a large distributed storage system.
It's a compelling idea but this approach has its challenges. Thousands
of nodes for just hundreds of terabytes of storage?
It means massive
scalability, a lot of node rebalancing when a node disappears, complex node
discovery and management that could impact performance."
|Interesting ideas from the blog -
ready for the post cloud era - by OpenIO.
(January 10, 2017)|
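OpenIO doesn't describe its implementation, but the rebalancing worry in the quote is exactly what consistent hashing style placement tries to bound. This is a generic sketch of the idea (not OpenIO's design - node names and vnode count are invented): when a node disappears, only the keys that node owned move.

```python
import bisect, hashlib

def _h(s):
    # stable hash -> a position on the ring
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Minimal consistent hash ring: removing a node remaps only the
    keys that node owned, instead of reshuffling everything."""
    def __init__(self, nodes, vnodes=64):
        self.ring = sorted((_h(f"{n}#{i}"), n)
                           for n in nodes for i in range(vnodes))
        self.keys = [h for h, _ in self.ring]

    def owner(self, key):
        i = bisect.bisect(self.keys, _h(key)) % len(self.ring)
        return self.ring[i][1]

    def remove(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]
        self.keys = [h for h, _ in self.ring]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
before = {f"obj{i}": ring.owner(f"obj{i}") for i in range(1000)}
ring.remove("node-c")   # the node "disappears"
moved = [k for k in before if ring.owner(k) != before[k]]
# only keys previously owned by node-c change owner (roughly a third)
print(len(moved), all(before[k] == "node-c" for k in moved))
```

In a thousands-of-nodes IoT deployment the same property keeps the rebalancing traffic proportional to the lost node's share rather than to the whole system.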
|controllernomics - joins the memory latency
to do list|
Editor:- February 20, 2017 - As predicted 8 years ago -
the widespread adoption of SSDs signed the death warrant for hardware RAID. The sleight
of hand tricks which seemed impressive enough to make hard drive arrays (RAID) seem fast in the
1980s - when viewed in slow motion from an impatient SSD perspective - were
just too inelegant and painfully slow to be of much use.
The confidence of "SSDs everywhere"
means that the data processing market is marching swiftly on - without much
pause for reflection - towards memory centric technologies. And many old
ideas which seemed to make sense in 1990s architecture are failing new tests
of questioning sanity.
For example - is
DRAM the fastest main
memory? - not when the capacity needed doesn't fit into a small enough space.
When the first solutions of "flash as RAM" appeared in
PCIe SSDs many years
ago - their scope of interest was software compatibility. Now we have solutions
appearing in DIMMs in the memory channel.
This is a context where
software compatibility and memory latency aren't the only concerns. It's also about
understanding the interference effects of all those other pesky controllers in
the memory space.
That was one of the interesting things which emerged
in a recent conversation I had with Diablo Technologies
about their Memory1. See what I learned in the blog -
controllernomics and user risk reward with big memory "flash as RAM".
is Toshiba salami slicing its memory heirloom?
February 14, 2017 - Toshiba
has today, again, topped mainstream media tech headlines due to the resignation
of the company's chairman. In recent weeks - were I so inclined (to fan the
flames of rumor) - I could have inserted speculation
about the sale of Toshiba's semiconductor business in this news page again and
again. Instead I came to the conclusion that the real story was that
there probably wouldn't be a single big story because the sale of the entire
memory systems business to a single buyer (most likely another memory or SSD
company) would inevitably introduce a delay due to antitrust hurdles. And
Toshiba needs financial bandaids now.
Therefore, what we've been
seeing is a fragmented approach - which on linkedin I described as "salami
slicing" the memory business heirloom. Western Digital got some - but no
one will get it all quickly due to caution about the impact of antitrust reviews.
SanDisk announces the arrival of flight 2.5 NVMe
February 10, 2017 - SanDisk
recently recycled the "Skyhawk" SSD brand - which had previously
been associated with a
product from Skyera -
another SSD company which - like SanDisk - was acquired by
Western Digital - and by coincidence whose founder's new company emerged from
stealth this week. (See the story about Tachyum after this.)
The new Skyhawk is aimed at the 2.5"
NVMe PCIe SSD market.
While SSD brand names can
be important the significant thing about SanDisk's new Skyhawk is that it
fixes a longstanding strategic weakness in its enterprise PCIe SSD product
line which I commented on at the time
when WD announced it would acquire SanDisk.
The irony is that Fusion-io (which
created the enterprise
PCIe SSD market and by whose acquisition SanDisk hoped in
June 2014 to
broaden its flash presence in the enterprise market) had been one of the
earliest companies to demonstrate a prototype 2.5" PCIe SSD (in
May 2012). But
Fusion didn't productize that concept and chose instead to move upscale in form
factor to boxes.
Decoupling from the complex legacy of the past is
why it has taken nearly 5 years for SanDisk to
launch its me too Skyhawk 2.5" NVMe SSD now.
our impact could be 100x SandForce - says cofounder of Tachyum
February 8, 2017 - Tachyum emerged
from stealth mode today
announcing its "mission to
conquer the performance plateau in nanometer-class chips and the systems they power."
Tachyum (named for the Greek
"tachy," meaning speed, combined with "-um," indicating an
element) was cofounded by Dr. Radoslav "Rado"
Danilak, who has invented more than 100 patents and spent more than
25 years designing state-of-the-art processing systems and delivering
significant products to market.
Among other things Rado founded or
cofounded 2 significant companies in the SSD market:- Skyera
- an ultra efficient petabyte
scale rackmount SSD company acquired by Western Digital in
2014 - and SandForce - which
designed the most widely deployed SSD controllers.
SandForce was acquired by LSI
for $322 million in 2011
and in 2014
LSI's SSD business was acquired by Seagate.
His past work in processor applications includes Wave Computing - where he
architected the 10GHz Processing Element of their deep learning DPU.
About the technology void and market gap which Tachyum will focus on Rado said -
"We have entered a post-Moore's Law era where performance hit a plateau,
cost reduction slowed dramatically, and process node shrinks and CPU release
cycles are getting longer. An innovative new approach, from first principles is
the only realistic chance we have of achieving performance improvements to rival
those that powered the tech industry of past decades, and the opportunity is a
hundred times greater than any venture I've been involved in."
Editor's comments:- on linkedin I said "I don't know any details but with so
many physics rooted data agility problems still needing to be solved anything
that Rado Danilak does will be worthy of our future attention."
Rado replied - "Like always you are right on target. In fact Tachyum is 100x of
SandForce opportunity and impact."
Memory1 beats DRAM in big data multi box analytics
February 7, 2017 - The tangible benefits of using flash as RAM in the DIMM form
factor are illustrated in a new benchmark
Spark Graph Performance with Memory1 (pdf) - published today by Inspur Systems (the largest server
manufacturer in China) in collaboration with Diablo Technologies.
The memory intensive tests were run on the same cluster of five
servers (Inspur NF5180M4, two Intel Xeon CPU E5-2683 v3 processors, 28 cores
each, 256GB DRAM, 1TB NVMe drive).
The servers were first configured
to use only the installed DRAM to process multiple datasets. Next, the cluster
was set up to run the tests on the same datasets with 2TB of Memory1 per server.
The k-core algorithm (which is typically used to analyze large
amounts of data to detect cross-connectivity patterns and relationships) was
run in an Apache Spark environment to analyze three graph datasets of
varying sizes up to a 516GB set of 300 million vertices with 30 billion edges.
Run times for the smallest sets were comparable. However, the medium-sized sets
using Memory1 completed twice as fast as the traditional DRAM configuration (156
minutes versus 306 minutes). On the large sets, the Memory1 servers completed
the job in 290 minutes, while the DRAM servers were unable to complete due to
lack of memory space.
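For readers unfamiliar with the algorithm behind the benchmark: k-core decomposition repeatedly strips vertices whose degree falls below k. A toy pure-Python sketch of the idea (nothing like the billions-of-edges Spark/Memory1 scale - the graph here is invented):

```python
# Toy k-core: repeatedly remove vertices whose degree drops below k.
# The benchmark ran this at massive scale under Apache Spark; this
# just shows what the algorithm computes.

def k_core(edges, k):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    changed = True
    while changed:
        changed = False
        for v in [v for v in adj if len(adj[v]) < k]:
            for nbr in adj.pop(v):       # remove v and its edges
                adj[nbr].discard(v)
            changed = True
    return set(adj)

# A triangle a-b-c plus a pendant vertex d hanging off c:
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
print(sorted(k_core(edges, 2)))  # ['a', 'b', 'c'] - d is stripped out
```

The memory-hungry part at scale is holding the adjacency state for the whole graph resident while the peeling iterates - which is why the working set, not the CPU, was the limiting factor for the DRAM-only servers.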
Editor's comments:- As has been noted in
previously published research by others - being able to have more RAM emulation
flash memory in a single server box can (in big data computing) give similar
or better results than implementing the server set with more processors and more
DRAM in more boxes.
This is due to the traffic controller and fabric
latencies between server boxes which can negate most of the intrinsic
benefits of the faster raw memory chips - if they are physically located in
separate boxes.
The key takeaway message from this benchmark is that a
single Memory1 enhanced server can perform the same workload as 2 to 3 non
NVDIMM enhanced servers when the size of the working data set is the
same.
More useful however (as you will always find an
ideal benchmark which is a good fit to the hardware) is that the Memory1 system
places lower (3x lower) caching demands on the next level up in the
storage system (in this case the attached NVMe SSDs). This provides a higher
headroom of scalability before the SSDs themselves become the next critical
bottleneck.
Writing about Memory1 enhanced servers Inspur gives another example of the advantages of
this approach - quoting a 3 to 1 reduction in server footprint and faster
job completion for a 500GB SORT.
See also:- the road to DIMM wars - and - are you ready to rethink RAM?
who's well regarded in networked storage?
February 1, 2017 - IT Brand Pulse announced
the results of its recent survey covering brand perceptions in the networked
storage market.
Among other things:- "By nearly a 2-to-1 margin,
Seagate outperformed second-place challenger (Western Digital) to capture its
5th Market Leader award for Enterprise HDDs." ...read the article
Violin assets valued at $14.5 million in bankruptcy auction
Editor:- January 30, 2017 - A
report on LAW360 provides an interim
update on the bankruptcy auction for Violin Memory and says
Violin's assets were valued "at least $14.5 million".
Editor's comments:- This is a humbling end for a company whose CEO said
6 years ago
that he hoped to build a billion dollar company.
You can read more
about Violin's past - the highs and the lows and what was said at the time - in
the history sections of Violin's
company profile page here in StorageSearch.com.
Toshiba's semiconductor business rescues company
January 25, 2017 - Toshiba
was featured in the mainstream news media last week due to losses by its US
nuclear power business which had been responsible for halving the value of
shares in the company.
News that Toshiba's financial fix centered around selling part of its
semiconductor business to Western Digital got
a good reaction from markets.
Datrium celebrates one year of NVMe flash difference in its
open converged platform
Editor:- January 24, 2017 - A recent press
release from Datrium -
celebrating one year of supporting NVMe SSDs within its high availability open
convergence server storage (software)
(pdf) - discusses the performance limitations
which are inherent in legacy rooted storage architectures in AFAs which
are implemented with SAS or SATA SSDs in comparison to native NVMe SSDs.
Among other things it says - "The benefit of NVMe drives - blistering performance - is unavailable on most
storage arrays today for two reasons. First, an array or hyperconverged design
cycle can only adopt new drive connectivity approaches at a certain rate. As a
rigid, composed system, it takes time. Second, successful flash array vendors
depend on data reduction to optimize pricing. This means the controller CPU
must filter data inline, which adds delay. The benefits of NVMe are
subsequently small because the benefits over SAS links are bottlenecked by CPU processing."
Editor's comments:- The message of
the company seems to be that whereas modern flash storage systems undeniably
have done a great job at reducing infrastructure costs (compared to old style
HDD systems) there is still much more performance and utilization which can be
extracted from COTS servers and SSDs when they're working in a modern
architecture with modern software. See their
2 minute video for the key claimed benefits.
The extent of this next level up in performance,
utilization and efficiency (as an industry aspiration) was part of what I was
hinting at in my 2013 article -
meet Ken - and the
enterprise SSD software event horizon.
Primary Data gets ready to expand sales
January 20, 2017 - Primary
Data announced that Robert Wilson
has joined the company as its new Head of Sales. He previously held
VP level sales roles elsewhere in the industry.
Wilson said - "With flash and cloud storage now common,
and only so much innovation ahead in appliances, many in the storage industry
are wondering what technology breakthrough is coming next."
NVMe now, NVDIMM coming - says Web-Feet Research
January 16, 2017 - Web-Feet
Research announced it has released the 12th annual edition of its SSD market reports - SSD
Markets and Applications 2017 ($5,500)
- which concentrates on the enterprise market with the emergence of PCIe, NVMe
(M.2) and NVDIMM SSDs as well
as quantifying the client and commercial markets. It addresses the difficulty
of migrating from HDD
to SSD and to Hybrid
and All Flash
storage systems while advancing the 'intelligent processing' of memory and
storage.
The CEO of Web-Feet Research says (among other things)
"With the adoption of new interfaces like NVMe
and embracing emerging media like 3D NAND and Persistent Memory (ReRAM / XPoint)
the industry is undergoing a transformation. The old computing storage model is
unable to keep up with the amount of data needed to be stored. It needs to merge
storage into memory in order to process in real time more complex data, diverse
data types and the much higher volume of data anticipated through 2021."
"Even though NAND-Flash based SSDs perform at several orders of
magnitude higher than hard-disk-drives
they suffer from the same non-deterministic inadequacies as compared to
solutions based on XPoint memory. Flash memory based SSDs suffer the
indeterminate delay inserted by the Flash
Translation Layer in its
production of
physical block addresses. This is one reason that All-Flash-Array architects
like Pure Storage
desire additional physical control at the FTL. Additional storage system
functions such as deduplication, compression, error coding, power-on fill, data
recovery ops, check pointing, and scrubbing are further accelerated by this control."
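The delay the quote attributes to the Flash Translation Layer comes from the logical-to-physical remapping that every out-of-place write forces. A deliberately simplified page-level sketch (not any vendor's FTL - page counts and policy are invented for illustration):

```python
# Deliberately simplified page-level FTL sketch (not a real controller).
# Every logical overwrite lands on a fresh physical page, so the
# logical->physical map is consulted and updated on each I/O - and the
# indeterminacy comes from when stale pages get garbage collected.

class ToyFTL:
    def __init__(self, num_pages):
        self.l2p = {}                       # logical page -> physical page
        self.free = list(range(num_pages))  # free physical pages
        self.invalid = set()                # stale pages awaiting GC

    def write(self, lpn):
        if lpn in self.l2p:                 # overwrite: old page is now stale
            self.invalid.add(self.l2p[lpn])
        ppn = self.free.pop(0)              # out-of-place write
        self.l2p[lpn] = ppn
        return ppn

    def read(self, lpn):
        return self.l2p[lpn]                # map lookup on every read

ftl = ToyFTL(num_pages=8)
ftl.write(0)          # logical 0 -> physical 0
ftl.write(0)          # overwrite: logical 0 -> physical 1, page 0 stale
print(ftl.read(0), sorted(ftl.invalid))    # 1 [0]
```

When the free list runs low a real controller must stop and garbage collect those invalid pages - the unpredictable pause the quote calls "indeterminate delay", and which byte-addressable media like XPoint avoid.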
Crossbar samples 8Mb ReRAM
Editor:- January 12, 2017
- A report in EE Times Europe -
ReRAM in production at SMIC - says that Crossbar is sampling
8Mb ReRAM (its byte writable alt nvm) with R/W latencies of about 20ns and 12ns
respectively and endurance north of 100K cycles.
The 8Mb chips use
40nm CMOS processing and the company plans to offer its nvm IP as cores which
can be integrated in SoCs so as to make best use of the low latency.
Crossbar told EE Times Europe that the early customers would be characterizing the new
memory and assessing its reliability. This is an important hurdle for any
new memory technology to cross before designers can have the confidence to
integrate it into commercial products. ...read the article
NVDIMM market report
Editor:- January 11, 2017 - The
NVDIMM market is estimated to grow at 64% CAGR over the course of 2016 to
2020 according to 9Dimen
Research who recently published a report
Global NVDIMM Industry
2016, Trends and Forecast Report ($2,850, 153 pages).
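As a quick sanity check on what a 64% CAGR implies over 2016 to 2020 (four compounding years - simple arithmetic only, not data from the report):

```python
# Compound annual growth: 64% CAGR sustained for 4 years (2016 -> 2020).
cagr = 0.64
years = 4
multiple = (1 + cagr) ** years
print(round(multiple, 2))  # 7.23 - i.e. the market ends ~7x its 2016 size
```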
See also:- who's who
in storage market research?
Another $75 million to support Kaminario's business outlook
January 10, 2017 - Kaminario
announced it has secured $75 million in a new round of financing, bringing the company's
total funding to $218 million.
The company which is privately owned and doesn't disclose revenue says "Hundreds
of customers rely on Kaminario K2 to power their mission critical applications
and safeguard their digital ecosystem."
Kaminario has changed the internal make up of its flash drives (form
factors, interfaces and components) in its arrays many times and has said in
the past that its systems are based on an SDS model.
Today's news of
continuing investment in the company seems to be a bet that whatever the
enterprise memory systems market of the future might look like any
vendor which can grow its sales through multiple transitions of raw technology
uncertainty is valued.
Back in June 2012 -
when writing about Kaminario's long range philosophy about the SSD market I said
they were a rare example of a systems company which had good roadmap symmetry
(having an architecture and software which was not closely tied to the
advantages of any particular memory type or SSD form factor - but which could
plausibly leverage future market improvements in SSDs with smaller bumps than
vendors who had over optimized their systems to leverage transient technology
advantages).
This is as much about choosing the
applications and segments as designing the business plan. Because some
customer segments are so price and performance sensitive that only well
adapted memory systems can compete and sell in such applications.
It always comes back to the economics
in the end.
Foremay fires patent warning post about flash data destruct
Editor:- January 10, 2017 - If you're seriously interested
in data security in SSDs you'll already know that encryption is simply a
promise to delay access to secured data rather than a guarantee that it will
remain denied to those who shouldn't see it. That's why the
SSD fast purge /
autonomous data destruct / fast secure erase market has developed so many
ingenious ways to offer better security assurance - which you can pick to
match your deployment's time to erase, electrical power to erase and
monetary cost budget.
A recent post on linkedin by Dennis
Eodice, VP Strategic Sales - Foremay - says
the company has a patent
for a technique which physically destroys the nand flash in an SSD using
addressably directed high voltage.
The implied message being that if
any other companies have used similar techniques to secure SSDs which are
sold in other regions - Foremay thinks this patent is enforceable to prevent
this technique being used in competing SSDs sold in the US.
How do banks use big memory systems to detect fraud?
January 9, 2017 - In the early 2000s I started hearing stories from vendors
of ultrafast SSDs
about how their fast memory systems were helping banks to not only ease the
choke points in their transactions but also provide insights into fraud
A new white paper GridGain Systems
provides a good introduction and synthesis of
various roles of in-memory computing in accelerating financial fraud detection
and prevention (pdf) which includes many named bank examples.
The paper describes how in memory computing provides the low latency data sharing
backbone which is needed to enable pattern detection for fraudulent activity to
be assessed in real-time while at the same time enabling genuine transactions
to proceed quickly.
Among other things, the paper says...
move from disk to memory is a key factor in improving performance. However,
simply moving to memory is not sufficient to guarantee the extremely high memory
processing speeds needed at the enterprise level... Clients who have implemented
the GridGain In-Memory Data Fabric to detect and prevent fraud in their
transactions have found that they can process those transactions about 1,000
times faster." ...read
the article (pdf)
NVMdurance names new CMO
Editor:- January 9, 2017 - NVMdurance
today announced that Kevin, who had previously been Director of Marketing at
Micron, has been appointed as Chief Marketing Officer.
He said - "We
are at the tip of the iceberg of the flash disruption in storage, and NVMdurance
is providing unique solutions to enable this."
BCC predicts $850 million market for carbon based NRAM in 2023
January 9, 2017 - BCC Research
has published a report -
NRAM Creating Market Volatility?
- which - among other things - predicts the size of the NRAM market
based on technology developed by Nantero.
In the preamble BCC says...
"Can you give us a small peek at why
NRAM will hold the advantage vs. Flash, SRAM and DRAM in the coming years? -
The key word is breakthrough. With NRAM we depart the world of silicon and
embrace cell phones, laptops and even an internet, that is increasingly going to
become carbon based organisms. Smaller components that work faster but require
less energy are absolute winners."
See also:- flash and alt nvms
SSD article in Military and Aerospace Electronics
January 4, 2017 - If you're interested in
military SSDs and
SSD companies then a recent article -
...and solid-state media driving data storage - in the December edition of Military
and Aerospace Electronics includes comments from various companies in
the market. Among other things the article says...
"...all aerospace and defense data storage for deployed applications have moved to
solid state memory." ...read the article
See also:- Value Propositions for buying SSDs (2005)
Persistent Memory Summit - what's coming?
January 3, 2017 - This is usually the time of year we'd be looking out for
announcements from the Storage
Visions conference - which for 15 years had been co-sited with
CES. That link has now been cut with
Storage Visions 2017 now taking place in October.
The move makes sense when you consider that most of the innovations in the storage market
have been coming from the cloud and enterprise markets rather than the consumer
market.
If you're looking for a flagship event in January - consider
instead the 2017 Persistent Memory
Summit - (January 18, 2017, San Jose, CA).
Among the topics as
you might expect from any standards
org event (this event being run by
SNIA) will be a
presentation called - Rethinking Benchmarks for Non-Volatile Memory Storage.
The need for a "goodness" or "aptness"
standard for non DRAM based
memory systems and components is something I discussed in an earlier news blog - is it realistic to talk
about memory IOPS? (August 2016).
At the root of such a new standard will be how do you get agreement on latency
zoning for different zones of temporary data?
I think a key factor
on the usability of big memory emulation (even in a flattened latency world
where all storage is solid state) is that different parts of the memory contents
change more often than others. So even after you've figured out the best ways
to cache and tier the memory systems - the experience of memory is still very
application and infrastructure dependent.
My guess is that if new
standards for memory systems benchmarks do emerge then they will follow the same
patterns of abuse that we've seen in the past with
MFLOPS, SPECint/fp/marks etc - and
that will give the tech writers something to write about for years.
|What happened before?
- the next box|
|Editor:- January 24, 2017 - Throughout the history of
the data storage market we've always expected the capacity of enterprise user
memory systems to be much smaller than the capacity of all the other attached
storage in the same data processing environment. |
A new blog on the home page of StorageSearch.com -
adapted memory systems - asks (among other things) if this will always be so.
Like many of you - I've been thinking a lot about the
evolution of memory technologies and data architectures in the past year. I
wasn't sure when would be the best time to share my thoughts about this one.
But the timing seems right now. ...read the
|"...flash has been
doing a great job for almost a decade now. Piece by piece it's been whittling
away high end hard drive sales. This has had the domino effect of reducing both
R&D spend by HDD manufacturers as well as investment in production
facilities. It is generally accepted that 2017 will be the last year 15K RPM
drives are produced, and that 2018 will probably see the last of the 10K RPM
drives."|
|Says Trevor Pott in his blog - The
Looming Storage Crisis - on VirtualizationReview.com
(January 31, 2017). |
Trevor has a column -
Cranky Admin - where in another recent blog -
the RAM Bottleneck - he discusses software methods - supported in
hypervisors - which can reduce the amount of physical RAM needed.
|RAM has changed from being
tied to a physical component to being a virtualized systems software idea - and
the concept of RAM even stretches to a multi-cabinet memory fabric. |
|what is RAM really? - RAM in an SSD context|
|I think it's not too strong
to say that the enterprise PCIe SSD market (as we once knew it) has exploded and
fragmented into many different directions.|
|what's changed in enterprise