Rambus to coach faster DRAM
Editor:- April 20, 2017 - Back in the early 1990s it was not uncommon to hear about specialist server companies which were using Peltier effect heat sinks to refrigerate the fastest workstation processors so that they could be run at higher clock speeds. But this kind of extreme approach to server acceleration only provided short term competitive gains in a single dimension.
One of the biggest bottlenecks in the past decade has been RAM architecture and DRAM implementation itself. (You can read more articles about the background to this on the DRAM resource page here on StorageSearch.com.)
A new angle on extending the performance of DRAM has been explored recently by Rambus and Microsoft, who are collaborating on the design of prototype super cooled DRAM systems to explore avenues of improvement in latency and density due to physics effects below -180°C.
A new article - Heat Up With Cold DRAM - by Junko Yoshida, Chief International Correspondent - EE Times - discusses these plans in more detail.
In the article Craig Hampel, chief scientist at Rambus, told EE Times that "Microsoft isn't alone...
heavy data center users like Google, Facebook and Amazon are all in search of
new memory architecture. Indeed, these tech giants who have primarily grown
their business via their technological prowess in software development are now
finding the future of their business growth severely constrained by hardware
advancements." ...read the article
Editor's comments:- At room temperature the main problem in fast DRAM systems is that the energy required for refresh cooks the chips, which means cells lose charge faster, which creates data integrity risks, which in turn needs even more frequent refresh. This is a limiting design factor. It means that even if you have a miraculous packaging technique which can sandwich more chips into a box - DRAM loses out to nvm technologies which don't require refresh - when the scale of the installed capacity (and watts) in the box is high.
Because if you can't fit enough RAM into the same single box then the memory system accrues a box-hopping fabric-latency penalty which outweighs the benefits of the faster raw memory chip access times inside the box.
If you freeze DRAM then the refresh cycle can be extended (which means you can pack more capacity in a box) and the native transit time for data in the copper interconnects and inside the silicon gets shorter too.
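As a rough illustration of the refresh side of that argument (my own back-of-envelope model, not a Rambus / Microsoft figure) - DRAM cell retention improves roughly exponentially as temperature falls, so the allowable interval between refreshes stretches dramatically at cryogenic temperatures:

```python
# Rough illustration of why cold DRAM can refresh less often: cell retention
# improves roughly exponentially as temperature falls. The "2x per 10 C" rule
# of thumb used here is a common approximation, not a vendor specification.
def relative_refresh_interval(temp_c, ref_temp_c=85.0, doubling_c=10.0):
    """Refresh interval relative to the interval needed at ref_temp_c."""
    return 2.0 ** ((ref_temp_c - temp_c) / doubling_c)

# Hot data center DRAM vs room temperature vs the -180 C regime in the article.
for t in (85, 25, -180):
    print(t, relative_refresh_interval(t))
```

Even if the exact doubling constant is wrong by a factor of two, the conclusion survives: far below zero the refresh overhead all but disappears from the power and design budget.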
Although Rambus and Microsoft are pitching this as a progressive research exercise I don't think that it will provide a general solution for data intensive factories.
While it's a good thing for
researchers to play around and explore the limits of what can be done with all
kinds of memory devices - I think that the answer to greater performance lies in
new architectures rather than freezing old ones.
Are we there yet?
Editor:- April 7, 2017 - After more than 20 years of writing guides to the SSD and memory systems market I admit in a new blog on StorageSearch.com - Are we there yet? - that when I come to think about it candidly the SSD industry and my publishing output are both still very much "under construction".
NVMe over Fabrics - market experiences
Editor:- March 31, 2017 - The state of the NVMe SSD and fabric market and its growth expectations are conveniently summarized in a new presentation - with NVMe over Fabrics (pdf) - by Mellanox. Among other things:-
- 40% of AFAs will be NVMe based by 2020
- shipments of NVMe SSDs will grow to 25+ million by 2020
The idea of having a shared SSD fabric which can be accessed by many servers - and which combines the latency advantages of local PCIe SSDs with the essential hooks from past low latency server interconnects (specifically RDMA) - has been many years in the telling. There have been 3 main ingredients to this market brew:-
- something worthwhile sharing as a resource (low latency SSD pools)
- a convenient way of connecting to them (a large installed base of server PCIe interface chips were the essential starting point - but it took many years for industry standards to emerge)
- software support - which ranges from the storage stack to multi-vendor fabric support.
The paper captures current expectations for how the market is expected to grow. ...read the article (pdf)
Web-Feet Reports on NVM Market Shares in 2016
by Alan Niebel, CEO - Web-Feet Research - March 29, 2017
Not since 2000 have the memory suppliers been in an undersupply situation. That is the force which, for a number of reasons, resulted in increasing memory prices in 2H 2016.
NAND vendors are producing 2D (planar) NAND at full capacity, while concurrently making the costly shift in production to 3D NAND.
DRAM has been so hot
that the Big Three (Samsung, SK Hynix, and Micron) are shrinking their
lithography below 1x and 1y, while maintaining their production at capacity.
Other foundries, like the memory manufacturers, are also running at capacity, trying to maintain balance with IoT, M-2-M, mobile and computing demand. Even the NOR flash market, which saw a reversal of 15+ years of market declines, has been caught in the allocation/shortage scenario. Even with all this production at capacity, building additional capacity takes time and has been fraught with technology hurdles that slow down bit increases. Although the NOR market is around 5% of NAND, NOR's challenges represent a microcosm of the larger flash and memory markets.
With the stronger demand for SoC (System on Chip) devices to satisfy the IoT and edge terminal requirements, foundries and ODMs are shifting their wafer mix away from memory wafers. These SoCs are also being built at lithography nodes below 40nm, where most embedded NOR flash cannot be built. Consequently, these SoCs will need KGD and standalone serial NOR components at 512Kbit-256Mbit densities and larger to fulfill the IoT system memory requirements. In China, GigaDevice, who supplies serial NOR, is caught in a wafer allocation squeeze, not getting enough serial flash wafers from their foundry, which is making more SoCs (that need serial NOR).
Winbond who makes both DRAM
and NOR/NAND has been riding the higher price DRAM wave and has limited any
additional wafer allocations to NOR.
Micron has been rumored to have
shut down NOR production at their Singapore fab in favor of NAND, which removes
some NOR wafer capacity.
Cypress, the NOR market leader, is gradually moving their emphasis from commodity standalone NOR to an IoT systems memory module, especially for the automotive market.
Finally, Macronix has regained more NOR market presence by allocating more NOR wafer production but is still facing long lead times, since demand is ever increasing on the constrained supply. The net effect is NOR and other memory prices are increasing with supply constraints, and vendors are on allocation.
In consolidating the annual results of each flash memory vendor's shipments, WebFeet Research found the 2016 flash memory market to be $36.8 billion, an increase of 10% from 2015. A substantial increase in 2016 revenues came from the NAND flash market, with a 10.7% growth rate, while the NOR market contracted only slightly from 2015, by -1.8%.
Samsung was the
perennial 2016 revenue market leader for all NV Memories and NAND, Cypress
(Spansion) established itself as the NOR Flash and the NVRAM market leader,
while Macronix regained the serial NOR leadership position.
The 2016 Non Volatile Memory Market Shares by Vendor report (by Web-Feet Research) discusses the impact of the mergers and acquisitions on the memory market, qualifies the migration of planar to 3D NAND, quantifies how fast the emerging NVMs are growing - including STT-MRAM and XPoint as well as the reemergence of RRAM and NRAM - and presents two forecasts for serial EEPROM showing the impact (slow initial adoption) of the Internet of Things (IoT) and an aggressive scenario. This report, CS700MS-2017, is available for $2.5K and providers of the market share data can obtain the report at a discount. For more info about these reports contact WebFeet Research at +1 831-869-8274.
Samsung "will invest about 10 trillion won (US$8.7 billion) in Hwaseong Campus in Gyeonggi Province, South Korea to build a new line to produce DRAM." - report in Business Korea (March 15, 2017)
M.2 PCIe SSDs for secure military applications
Editor:- March 20, 2017 - Do you know who makes M.2 PCIe SSDs which can operate at industrial temperatures and have security strong enough for a military application?
That's a question I was asked recently by a reader in the military systems market. I looked into it. He was right. They are hard to find. Nearly all the industrial M.2 SSDs are SATA and not PCIe. The companies which I have been able to confirm in this category (by direct contact rather than a promissory future product statement on a web page) are few. I became interested in the technical difficulties which might explain why there are so few suppliers right now.
Here's what I think is part of the answer. As you add operational requirements to the datasheet - moving up from consumer to enterprise and then to industrial SSDs - you also add circuits and components which compete for physical space, electrical power and cost in the total SSD design:-
- use of larger memory cell geometry (nanometer generation and coding type - for example SLC rather than MLC, or MLC rather than TLC) to ensure data integrity over a wider range of temperature and power supply quality environments
- use of different flash controllers. Consumer and enterprise SSDs can use controllers which use more electrical power than industrial or embedded SSDs due to the ease of fitting the design into the heat rise budget. Industrial SSDs can't afford the same wattage in their controllers - because the heat generated would reduce the reliability of the SSD at the high end of its operating temperature range (70 to 85 degrees C and sometimes 95 degrees) - or force the use of more expensive components elsewhere (to cope with the incremental heat rise). The tradeoffs made (typically a lower wattage controller) are why industrial SSDs tend not to use CPU intensive data integrity management schemes. And that in turn means they need to use intrinsically higher quality memory.
When you add all the requirements together to make an industrial / military SSD capable of working reliably and shrink the size budget from a bigger to smaller form factor (2.5" to M.2) while at the same time asking for high performance too - it's a tough design problem to solve for the first time.
But once such products do become available from multiple sources then demand will grow (due to confidence in the equipment design community that they won't get stuck in an EOL rut from a single source). If you know of other secure erase, industrial operation M.2 PCIe SSD companies which are shipping products let me know and I'll mention them here. I placed a query via linkedin but that didn't generate any other confirmed vendors.
"In the most recent quarter (ending January 31, 2017) we had more than one customer running large scale simulations and analytics replace over 20 racks (think 20 refrigerators of equipment) with a single FlashBlade (at 4U about the size of a microwave oven). Such dramatic consolidation depends on storage software that has been designed for silicon rather than mechanical disk."
Scott Dietzen, CEO - Pure Storage - in his blog about the data platform for the cloud era and the secular shift to flash memory (March 1, 2017)
Editor's comments:- this is another
confirmation of the replacement ratio predictions in my (2013) blog -
meet Ken - and the
enterprise SSD software event horizon.
PS - Another thing which
Scott Dietzen said in his new blog was...
"This year, the 8th since our founding and our 6th of selling, we expect to reach $1 billion in revenue."
Soft error mitigation for PCM and STT-RAM
Editor:- February 21, 2017 - There's a vast body of knowledge about data integrity issues in nand flash memories. The problems and fixes have been one of the underpinnings of SSD controller design. But what about newer emerging nvms such as PCM and STT-RAM? You know that memories are real when you can read hard data about what goes wrong - because physics detests a perfect storage device.
A new paper -
a Survey of Soft-Error
Mitigation Techniques for Non-Volatile Memories (pdf) - by Sparsh Mittal,
Assistant Professor at Indian Institute of
Technology Hyderabad - describes the nature of soft error problems in
these new memory types and shows why system level architectures will be needed
to make them usable. Among other things:-
- scrubbing in MLC PCM would be required in almost every cycle to keep the
error rate at an acceptable level
- read disturbance errors are expected to become the most severe bottleneck
in STT-RAM scaling and performance
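To see why scrubbing frequency matters, here's an illustrative and deliberately simplified reliability model of my own - not taken from the paper. It estimates the longest scrub interval which keeps the probability of an uncorrectable ECC word below a target, given an assumed raw error rate and ECC strength. All the numbers are invented for illustration:

```python
# Illustrative scrubbing model (my assumptions, not figures from the paper):
# cells flip independently at a fixed rate; an ECC word of n bits survives
# as long as no more than t errors accumulate between scrubs.
from math import comb, exp

def p_flip(rate_per_hour, hours):
    # Probability that a given cell flips at least once in the interval,
    # assuming independent exponential error arrivals.
    return 1.0 - exp(-rate_per_hour * hours)

def p_uncorrectable(n_bits, t, p_bit):
    # Probability that more than t errors accumulate in one ECC word.
    return sum(comb(n_bits, k) * p_bit**k * (1 - p_bit)**(n_bits - k)
               for k in range(t + 1, n_bits + 1))

def max_scrub_interval(n_bits, t, rate, target, step_h=0.1, limit_h=1000.0):
    # Longest interval (hours) keeping uncorrectable probability under target.
    hours = 0.0
    while hours + step_h <= limit_h:
        if p_uncorrectable(n_bits, t, p_flip(rate, hours + step_h)) > target:
            break
        hours += step_h
    return hours

# Weak ECC forces near-continuous scrubbing; stronger ECC buys breathing room.
weak = max_scrub_interval(512, t=1, rate=1e-6, target=1e-12)
strong = max_scrub_interval(512, t=4, rate=1e-6, target=1e-12)
```

With these made-up numbers the single-error-correcting case can't even tolerate a 6 minute scrub interval, which echoes the paper's point that weakly protected MLC PCM would need scrubbing on almost every cycle.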
|He concludes:- "Given the energy
inefficiency of conventional memories and the reliability issues of NVMs, it is
likely that future systems will use a hybrid memory design to bring the best of
NVMs and conventional memories together. For example, in an SRAM-STT-RAM hybrid
cache, read-intensive blocks can be migrated to SRAM to avoid RDEs in STT-RAM,
and DRAM can be used as cache to reduce write operations to PCM memory for
avoiding WDEs. |
"However, since conventional memories also have
reliability issues, practical realization and adoption of these hybrid memory
designs are expected to be as challenging as those of NVM-based memory designs.
Overcoming these challenges will require concerted efforts from both academia and industry." ...read the article (pdf)
Editor's comments:- Reading this paper left me with the confidence that I was in
good hands with Sparsh Mittal's identification of the important things which
need to be known.
See also:- an earlier paper by Sparsh Mittal -
compression techniques in caches and main memory
|How does the memory famine impact
industrial SSD service values? |
Editor:- April 25, 2017 - 17 years ago I wrote a guide - Why I won't publish your press release? - the first reason in which still applies today - it's not news.
The context which brought it to mind was a
release earlier this month from Virtium
saying it was now offering 2TB (MLC) and 1TB (iMLC) in its 2.5" industrial
SATA SSD range.
That kind of capacity isn't news - as we reported the
first rugged TB 2.5" SSD sampling on this page more than 8 years ago -
in January 2009.
(And the first industrial SATA 2.5" flash SSDs were shipping 13 years ago in July 2004.)
Is there anything else you can tell me? I asked. (Believe me - readers need to have a genuine reason to be interested.) I asked about pricing - but the company said there are too many options and didn't want to be drawn on that. I learned the SLC version is rated at 30 DWPD. But that's not a news story either.
Around about this time I was thinking about the impacts of the flash memory famine on different types of companies in the SSD supply chain. (Whether it's a good thing or bad thing depends which company you are.) In the 1970s and 80s chip supply volatility - due to semiconductor company book to bill ratios thrashing about all over the place - was a constant reality of being in the electronics business. In recent years we had the illusion that the boom and bust days of memory were over - due to better business management and better prediction systems. So when memory prices increased and supply dates went into "unknown" that was a shock to many people. Predictability of supply and pricing expectations is just as important for industrial equipment makers as avoiding EOL shock and having dependable long term availability.
Tactical stories about memory supply and logistics had come up in some recent conversations I had with industrial SSD companies - Elektronik in Poland and Longsys in China.
I asked this...
Does Virtium (which is headquartered in the USA) get involved in giving price projection promises to its contract system customers? Phillips, VP of marketing - Virtium told me "While we can't 'promise' specific pricing into future quarters, we can provide supply-chain and associated pricing visibility to strategic customers up to one to two quarters out, based on NAND flash allocations, associated pricing and cost averaging of inventory. This at least helps customers plan. In other words, even though we are direct with suppliers, it does not completely insulate us from cost increases should suppliers decide to raise prices, but we can somewhat mitigate pricing volatility like what is found in the distribution channel."
There you go. Is that more interesting than just another 2.5" TB SSD story? I hope so.
Now Cinderella industrial systems with "no-CPU" budgets and light wattage footprints can go to the NVMe speed-dating ball
April 19, 2017 - A dilemma for designers of embedded systems which require high SSD performance is how can you get the benefits of enterprise class NVMe SSDs for simple applications - which integrate video for example - without at the same time escalating the wattage footprint of the entire attached micro server?
A new paper published today by IP-Maker - server-class storage in embedded applications (pdf) - discusses the problem and how their new FPGA based IP enables any NVMe PCIe SSD to be used in embedded systems to provide sub-microsecond latency using "20x better power efficiency, and 20x lower cost compared to a CPU-based system."
The company says the NVMe host IP - which is now available - can be used in an FPGA connected between the PCIe root port and the cache memory, internal SRAM or external DRAM. It fully controls the NVMe protocol by setting and managing the NVMe commands. No CPU is required. It supports a PCIe gen 3 x 8 interface.
Michael Guyard, Marketing
Director said that - among other things - applications include:-
- military recorders
- portable medical imaging
- mobile vision products - in robots and drones
Editor's comments:- Now Cinderella
embedded systems with low cost budgets and low wattage footprints can go to the
enterprise NVMe performance ball. The new magic - in the form of the FPGA IP
released today by IP Maker - has the potential to transform the demographics
and class of SSDs seen in future industrial systems.
See also:- CPUs for use with SSDs
low yield at sub 20nm is root of DDR4 shortage says DRAMeXchange
April 14, 2017 - Quality problems in DRAMs which have been sampling this year at the new sub 20nm generation from major suppliers are at the heart of the issues discussed in a new blog by DRAMeXchange - which concludes that the contract price of 4GB DDR4 DRAM modules will rise 12.5% entering 2Q17.
The research director of DRAMeXchange said - "PC-OEMs that have been negotiating their second-quarter memory contracts initially expected the market supply to expand because Samsung and Micron have begun to produce on the 18nm and the 17nm processes, respectively. However both Samsung and Micron have encountered setbacks related to sampling and yield, so the supply situation remains tight..." ...read the article
See also:- inside SSD pricing, storage market research
Tegile gets another $33 million funding
April 11, 2017 - Tegile announced $33 million in additional funding which was led by Western Digital and current investors such as Meritech Capital, Capricorn Investment Group, and Cross Creek Capital. With this financing, Tegile has raised a total of $178 million to date.
See also:- rackmount SSDs,
hybrid storage arrays,
VCs in SSDs
Tachyum promises 10x faster CPUs soon
April 7, 2017 - I was fortunate enough to have had close relationships with
technologists and marketers of high end server CPUs in the 1990s who explained
to me in detail the performance limitations of CPU clock speeds and memories
which would prevent CPUs getting much faster beyond the year 2000 due to
physics and the lost latency due to the coherency of signals when they left
silicon and hit copper pads.
That was one of the triggers which made
me reconsider the significance of the earlier CPU-SSD equivalence and
acceleration work I had stumbled across in my work in the late 1980s and write
about it in these pages when I explained (in 2003) why I thought the enterprise
SSD market (which at that time was worth only tens of millions of dollars) had
the potential to become a much bigger $10 billion market by looking at server
replacement costs and acceleration as the value proposition for market adoption and disregarding irrelevant concerns about cost per gigabyte.
I was surprised these equivalencies weren't
more widely known. And that's why I recognized the significance of what the
pioneers of SSD accelerators on the SAN were doing in the early 2000s. It has taken 17 years - but the clearest ever expression of the CPU GHz problem and why server architecture got stuck in that particular clock rut (for those of you who don't have the semiconductor background) appears in a recent press release from Tachyum which says (among other things)...
"The 10nm transistors in use
today are much faster than the wires that connect them. But virtually all major
processing chips were designed when just the opposite was true: transistors were
very slow compared to the wires that connected them. That design philosophy is
now baked into the industry and it is why PCs have been stuck at 3-4GHz for a
decade with "incremental model year improvements" becoming the norm.
Expecting processing chips designed for slow transistors and fast wires to still
be a competitive design when the wires are slow and the transistors are fast,
doesn't make sense."
The warm-up press release also says - "Tachyum
is set to deliver increases of more than 10x in processing performance
at fraction of the cost of any competing product. The company intends to release
a major announcement within the next month or two." ...read the article
Editor's comments:- Do I believe it's possible? Yes - by discarding 2D designs of CPUs and maybe adding SSD era memory architecture in the CPU SoC. (I'm just guessing about these solutions BTW.) But if anyone knows how - then I'm prepared to give the cofounders the benefit of the doubt for such ambitious claims.
Toshiba's storage sale - update from Tom Coughlin
April 4, 2017 - As previously reported Toshiba's memory and SSD
business will be spun off to generate cash to plug losses in its nuclear
generating business. A new article by Tom Coughlin, President - Coughlin Associates - Implications of Toshiba Memory Unit Sale - looks at the ramifications (no pun intended) for the hard drive market depending on which of the 10 or so potential bidders succeeds.
Among other things Tom says - "Selling the HDD unit along with
the flash unit could be one outcome... The end result could be very interesting
and create very strange bed-fellows." ...read the article
Walmart generates 2.5PB of analyzable data every hour
April 3, 2017 - Walmart's Data Café is a private cloud which supports business decision makers in its 20,000 stores who can access over 200 streams of internal and external data, including 40 petabytes of recent transactional data, which can be modelled and manipulated.
I learned the above stats in a new case study - Data At Walmart: How 40+ Petabytes Improves Retail Decision-Making - by
business author Bernard
Marr who tells us how teams from any part of the business are invited
to bring their problems to the analytics experts and then see a solution appear
before their eyes on the nerve centre's touch screen "smart boards".
Transcend's new M.2 SSD
Editor:- March 27, 2017 - Transcend today launched the MTE850 - a family of PCIe Gen 3 x 4 M.2 SSDs with 512GB of 3D MLC NAND capacity and R/W speeds up to 2.5GB/s and 1.1GB/s respectively, aimed at the high performance consumer market.
CPUs for use with SSDs in the Post Modernist Era of SSD
March 22, 2017 - A new blog on StorageSearch.com
CPUs for use with SSDs in the Post Modernist Era of SSD and Memory Systems
- was prompted by a question from a startup which is designing new processors
for the SSD market. ...read the article
Google joins investors in Avere Systems
March 21, 2017 - Avere Systems announced the closing of a $14 million Series E funding with participation from existing investors Menlo Ventures, Norwest Venture Partners, Lightspeed Venture Partners, Tenaya Capital and Western Digital Capital and new investor Google Inc. The new investment brings Avere's total funds raised to $97 million, and will be used to expand the company's hybrid cloud product offerings so that more organizations can easily take advantage of the public cloud.
Editor's comments:- A year ago Avere announced it had been named "Google Cloud Platform Technology Partner of the Year" for 2015.
See also:- VCs in SSDs
NVMdurance has US patent for Adaptive Flash Tuning
March 21, 2017 - NVMdurance announced that it has been granted US patent 9,569,120 for Adaptive Flash Tuning. The patent covers NVMdurance's Navigator software, which discovers optimal flash trim sets for the target application and implements a set of optimization techniques that constantly monitor the NAND flash health and autonomically adjust the operating parameters in real time.
Before the flash memory product goes into production, NVMdurance
Pathfinder determines multiple sets of viable flash register values, using a
custom-built suite of machine-learning techniques. Then, running on the flash
memory controller utilized in SSDs or other storage product, NVMdurance
Navigator chooses which of these predetermined sets to use for each stage of
life to increase the flash memory endurance.
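As an illustration of the staged approach (a hypothetical sketch of my own - the register names, values and stage boundaries below are invented, not NVMdurance's), the controller-side logic can be as light as a table lookup keyed by wear:

```python
# Hypothetical sketch: pick one of several predetermined flash register
# ("trim") sets according to accumulated wear. Names and numbers are invented
# for illustration; real sets would come from offline characterization.
from bisect import bisect_right

# Stage boundaries in program/erase cycles (made-up characterization data).
STAGE_LIMITS = [3_000, 10_000, 30_000]            # stages 0, 1, 2, then 3
TRIM_SETS = [
    {"program_voltage": 14, "erase_pulse_us": 600},    # fresh flash
    {"program_voltage": 15, "erase_pulse_us": 800},
    {"program_voltage": 16, "erase_pulse_us": 1100},
    {"program_voltage": 17, "erase_pulse_us": 1500},   # near end of life
]

def trim_set_for(pe_cycles: int) -> dict:
    """Return the predetermined register set for the current wear stage."""
    return TRIM_SETS[bisect_right(STAGE_LIMITS, pe_cycles)]
```

The point of the design is exactly what the article describes: the expensive search happens once, offline, and the shipped product only ever steps through a small, pre-validated table.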
Editor's comments:- The things which make NVMdurance's
technology processes a viable business model for SSD partners are that the
heavyweight processing is done back at HQ as part of the memory characterization
and controller modeling which means that the delivery overhead in each shipped
product is lightweight and protects the stakeholder's IP.
The other thing is that no one has come up with any better ideas for a way to roll out a new SSD with new flash memory encapsulated in such a predictable set of algorithmically bounded phases which reduces the worst risks (of delay and misfire) which come from picking such magic numbers via organic talent alone.
See also:- flash care management & DSP IP in SSDs, the limericks of the 5 stage life cycle budget of extended flash endurance (pdf)
Intel is sampling 3DXpoint PCIe SSDs
March 19, 2017 - Intel announced that it is sampling its long awaited first enterprise SSD which uses 3DXpoint (Optane) memory and which is aimed at the HHHL PCIe SSD market. The Optane SSD DC P4800X Series (pdf) has a PCIe 3.0 x 4 NVMe interface and provides up to 375GB capacity, 500K mixed IOPS, block level R/W latency of 150/200µS (queue depth 16), and endurance of 30 DWPD for 3 years (equivalent to 18 DWPD on a 5 year adjusted basis).
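The 5 year adjustment is simple arithmetic - the lifetime write budget stays the same, so spreading it over a longer warranty period lowers the drive-writes-per-day figure:

```python
# Restating endurance over a different warranty period: the total lifetime
# write budget is fixed, so the same budget spread over more years gives a
# proportionally lower drive-writes-per-day (DWPD) rating.
def adjusted_dwpd(dwpd: float, rated_years: float, target_years: float) -> float:
    return dwpd * rated_years / target_years

print(adjusted_dwpd(30, 3, 5))   # prints 18.0 - matching the figure above
```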
The new drives are
supported by caching / tiering software (Intel Memory Drive Technology) which
collaborates with motherboard DRAM resources to transparently provide an
emulated 3DX as RAM memory pool.
This is similar in concept to earlier
software products in the market from various vendors which have supported
flash as RAM.
As widely expected the new SSDs have worse
performance and higher pricing than Intel had indicated at the first public
unveiling in the summer of 2015.
A rounded perspective can be seen in a new blog - Intel Announces Optane SSDs for the Enterprise - by Jim Handy, founder - Objective Analysis. Among other things Jim says "Intel has announced an SSD whose performance is close to that of NAND flash at a price that is close to that of DRAM. How did that happen?" ...read the article
Editor's comments:- As the new P4800X is not hot pluggable and as its main difference to previous flash SSDs from Intel is its support as a tiered memory - the most obvious role for a competitive comparison is memory channel based NVDIMM solutions - in particular the Memory1 product from Diablo - which provides 128GB of flash as RAM per DIMM socket - and up to 2TB in a 2 socket server.
Density comparison - Optane PCIe and Flash DDR-4
At a superficial level the current density comparison appears to be: 1x HHHL Optane PCIe SSD gives the equivalent emulated memory as 3x DDR-4 Memory1 DIMMs. Although this doesn't take into account how much DRAM is needed to support each type of configuration it's a good enough comparison point to start with.
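That 1x to 3x equivalence is easy to sanity check from the capacities quoted above:

```python
# Back-of-envelope check of the 1x card vs 3x DIMM equivalence, using only
# the capacities quoted in the text (the "per DIMM" framing is the editor's
# comparison, not a vendor specification).
optane_hhhl_gb = 375        # P4800X HHHL card capacity
memory1_dimm_gb = 128       # flash-as-RAM capacity per Memory1 DIMM

dimms_equivalent = optane_hhhl_gb / memory1_dimm_gb
print(round(dimms_equivalent, 2))   # prints 2.93 - i.e. roughly 3 DIMMs
```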
But there are some areas of doubt in making such a comparison:-
1 - Due to the scarcity of the new Optane products we haven't seen published benchmarks yet which could show how effectively the Optane system works (as an integrated memory and software solution). The raw latency figures (at the SSD datasheet level) don't conclusively point towards it being either a better or worse performer (than the flash based Memory1).
2 - From a customer point of view a key factor between the 2 products isn't just the level of density today. (Let's assume they're both similar right now.)
If we compare the potential capacity roadmaps - 3DXpoint (as a new technology) has been difficult for Intel / Micron to get started and we can't have any confidence yet that density will improve (from the technology difficulty angle) or indeed that new generations will even get the investment needed to improve at all.
In contrast - all flash based solutions get a helping hand from the entire flash industry's striving to keep improving density. So it's less risky to assume that a flash based system will probably increase in storage density during the next 3 years than any new alternative technology.
It's the density in a single motherboard which makes or breaks the big memory market (as we have already seen with flash as RAM) and why power consumption (wattage) is more important than small differences in latency. Because if you can get more memory into one box then you save the interbox fabric latency which already makes nand flash (as RAM) faster and cheaper than DRAM at scales of tens to hundreds of terabytes.
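A toy model makes the box-hopping argument concrete (all the numbers here are my illustrative assumptions, not measurements of any product):

```python
# Illustrative model: average access latency when a working set partly spills
# out of the box, and every remote access pays an extra fabric hop. Numbers
# are assumptions for illustration only.
def avg_latency_ns(local_ns, fabric_extra_ns, in_box_tb, working_set_tb):
    hit = min(1.0, in_box_tb / working_set_tb)   # fraction served in-box
    return hit * local_ns + (1 - hit) * (local_ns + fabric_extra_ns)

# Faster DRAM but only 2TB per box, vs slower flash-as-RAM with 8TB per box,
# against an 8TB working set and a 2µs fabric hop:
dram  = avg_latency_ns(local_ns=100,  fabric_extra_ns=2000, in_box_tb=2, working_set_tb=8)
flash = avg_latency_ns(local_ns=1000, fabric_extra_ns=2000, in_box_tb=8, working_set_tb=8)
print(dram, flash)   # prints 1600.0 1000.0 - the bigger box wins
```

The crossover depends on the fabric penalty and the spill fraction, but the shape of the result is why capacity per box can matter more than raw media latency.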
It's refreshing to see that there are so many genuinely different competing solutions being offered for the future memory fabric market. Good software (and useful agility with the new big memory apps) along with rationally priced propositions for integration with the cloud will be just as important as any of the raw memory technologies we see assembled into cards and boxes. As I said at the end of 2016 - everything in the SSD market now affects everything else.
CNEX Labs has amassed $60 million for new SSD controller
March 15, 2017 - CNEX Labs today announced its Series C round of financing which brings total funding to date to over $60 million. The company will use the funding for mass production and system integration for lead customers of its NVMe-compliant SSD controllers for hyperscale markets. The new controllers will enable full host control over data placement, I/O scheduling, and other application-specific optimizations, in both kernel and user space.
See also:- intelligence flow symmetry (1 of 11 Key Symmetries in SSD design).
BeSang says 3D Super-DRAM could fix multi-billion dollar money pit of memory industry's fab capacity roadmap
March 15, 2017 - Just as we're starting to get used to a world view that memory fabs may not be enough to make all the memory parts needed - and that a pragmatic global optimization from the user point of view may be to plan ahead for memory systems which use tiering, flash as RAM, freshly minted shiny nvms and new SSD aware software to get more storage and processing done with fewer chips - a journey which - depending who you are - begins or ends with the idea of reducing the ratio of DRAM to storage - and just as we're getting our heads adjusted to the huge investments which would be needed to make DRAM technology better and to believe that no sane investor (not even a VC who loves SSDs) would want to toss their money in that direction - a seemingly different alternate get out of jail free option is offered in a new blog by Sang-Yun Lee, CEO - BeSang - in EE Times.
Among other things Sang says... "If you consider planar DRAM shrinking from 18nm to 16nm, then, 20% more dice-per-wafer could be achieved. To do so, multi-billion dollars should be invested for R&D and EUV is required. In case of 3D Super-DRAM, it needs less than $50 million for R&D and no EUV; and even so, it could produce 400% more die-per-wafer."
So at the risk of repeating some of that:- 4x as much DRAM from the same fabs without huge investments... How is that possible? ...read the article
Editor's comments:- You can get an idea of
the complex decision matrices facing memory makers. In past decades the product
types which determined the demand mix for memories (PCs, phones, servers) were
few in number and had predictable roadmaps. Now big demands for memory are
coming from cloud, IoT and new intelligence based markets which are creating
entirely new ratios and rules of what is possible with memory systems.
new edition - the Top SSD Companies
10, 2017 - StorageSearch.com
today published the new
39th quarterly edition of the Top SSD Companies.
Several companies made their first appearances in this list.
Although a lot has
changed in the past 10 years of tracking future SSD winners in this series, the
next wave of disruptive change in memory systems architecture has barely begun.
a new name in SSD fabric software
Editor:- March 8,
2017 - A new
SSD software company
- Excelero -
has emerged from stealth today.
Excelero - which describes itself as
"a disruptor in software-defined block storage" - has announced
version 1.1 of its NVMesh® Server SAN software "for exceptional Flash
performance for web and enterprise applications at any scale." The
company was funded in part by Fusion-io's founder.
Editor's comments:- An easy way to understand what this kind
of software can do for you is to see how Excelero created a petabyte-scale
shared NVMe pool for exploratory computing for an early customer - NASA/Ames.
The mitigation of latency and bandwidth penalties enabled by the new
environment enabled "compute nodes to access data anywhere within a data
set without worrying about locality" and helped to change the way that
researchers could interact with the data sets which previously had been
constrained in many small islands of low latency. ...read
the white paper (pdf).
fabrics - companies and past mentions
Fabric and other SSD ideas which defined 2016
Speed and Strength Metaphors in SSD brands
Everspin enters NVMe PCIe SSD market
8, 2017 - Everspin announced
it is sampling its first SSD product - an HHHL NVMe
PCIe SSD with up to 4GB of
ST-MRAM based on the company's own 256Mb DDR-3 memory.
The new nvNITRO
ES2GB has end-to-end latency of 6µs and supports 2 access modes:- NVMe
SSD and memory mapped IO (MMIO).
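For readers unfamiliar with the distinction, the two access modes can be sketched roughly as follows (the device paths and this byte-addressable access pattern are illustrative assumptions of mine, not Everspin's documented driver interface):

```python
# Sketch of the two access styles an NVMe + MMIO device can present
# (device paths are hypothetical examples).
import mmap
import os

def mmio_write(path: str, offset: int, payload: bytes) -> None:
    """Memory-mapped IO: stores land directly in the persistent memory
    with no block-layer round trip, so latency stays near the raw device."""
    fd = os.open(path, os.O_RDWR)
    try:
        size = offset + len(payload)
        with mmap.mmap(fd, size) as mem:
            mem[offset:offset + len(payload)] = payload  # byte-addressable store
    finally:
        os.close(fd)

def block_write(path: str, offset: int, payload: bytes) -> None:
    """NVMe block mode: the same part presented as an ordinary SSD,
    accessed through normal read()/write() block IO."""
    fd = os.open(path, os.O_RDWR)
    try:
        os.pwrite(fd, payload, offset)
    finally:
        os.close(fd)
```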
Everspin says that products for the
U.2 markets will
become available later this year. And so too will be higher capacity models
using the company's next generation Gb DDR-4 ST-MRAM.
comments:- Yes - you read the capacity right. That's 4GB not
4TB and certainly not 24TB.
Why would you want a PCIe SSD which offers similar capacity to a flash backed
RAM SSD from
DDRdrive in 2009?
And the new ST-MRAM SSD card also offers worse latency, performance and
capacity than a typical NVDIMM
using flash backed DRAM today.
What's the application gap?
The answer I came up with is fast boot time.
If you want a small amount of
low latency, randomly accessible persistent memory then ST-MRAM has the
advantage (over flash backed DRAM such as you can get from
Netlist etc) that the
data which was saved on power down doesn't have to be restored from flash
into the DRAM - because it's always there.
The boot time advantage of
ST-MRAM grows with capacity. And depending on the memory architecture can be on
the order of tens of seconds.
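A rough model of where those tens of seconds come from (the restore bandwidth figure is an assumption for illustration, not a vendor spec):

```python
# Rough model of the boot-time gap between flash backed DRAM and ST-MRAM
# (assumed bandwidth figures, for illustration only).

def restore_seconds(capacity_gb: float, flash_read_mb_s: float) -> float:
    """Time for a flash backed DRAM module to copy its saved image
    from NAND back into DRAM after power-on."""
    return (capacity_gb * 1024) / flash_read_mb_s

# A 16GB module restoring at an assumed 800MB/s takes about 20 seconds;
# an ST-MRAM device skips this step entirely because the data never
# left the memory array.
for cap in (4, 16, 64):
    print(f"{cap:>3}GB flash backed DRAM restore: "
          f"{restore_seconds(cap, 800):.0f}s   ST-MRAM: 0s")
```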
So - if you have a system whose
reliability, accessibility and performance depend on healing and recovery
processes which take into account the boot times of its persistent memory
subsystems - then you have the choice of either battery backup (which occupies a
large space and maintenance footprint) or a native NVRAM.
The new cards will make it easier for software developers to test persistent RAM
tradeoffs in new equipment designs. And they will also provide an easy way to
evaluate the data integrity of the new memories.
|What happened before?
|12 years ago in SSD news|
April 2005 -
Texas Memory Systems
(which made the world's fastest SSDs) offered the world's 1st performance
guarantees for enterprise solid state storage systems. |
The company said "if the RamSan unit does not accelerate the software application
performance to a level acceptable to the customer, the RamSan unit may be
returned within 30 days of delivery and Texas Memory Systems will refund all
related Texas Memory Systems hardware charges, minus a 10% restocking fee."
- the next box|
| Throughout the history of
the data storage market we've always expected the capacity of enterprise user
memory systems to be much smaller than the capacity of all the other attached
storage in the same data processing environment. |
A new blog on StorageSearch.com - about
adapted memory systems - asks (among other things) if this will always be so.
Like many of you - I've been thinking a lot about the
evolution of memory technologies and data architectures in the past year. I
wasn't sure when would be the best time to share my thoughts about this one.
But the timing seems right now. ...read the blog
MLB Network uses Tegile flash storage in
their post-production environment. During the regular season, they need to
record all of the games and produce content for shows like MLB Tonight, The
Rundown, Intentional Talk, MLB Now, and Quick Pitch, which focus on the day's
activities and give a snapshot of what's going on around the league. In the
off-season, they produce ... other programming that goes behind the daily game
and into more of the storytelling about baseball. That's over 500,000 hours of
content." - Director of Marketing, Tegile
Systems, in his blog -
Goes to Hollywood (March 7, 2017)|
|"We are at a junction
point where we have to evolve the architecture of the last 20-30 years. We can't
design for a workload so huge and diverse. It's not clear what part of it runs
on any one machine. How do you know what to optimize? Past benchmarks are
Vaid, Distinguished Engineer, Azure Infrastructure - quoted in a blog
by Rambus -
new memory tiers for the data center (February 21, 2017)|
|RAM has changed from being
tied to a physical component to being a virtualized systems software idea - and
the concept of RAM even stretches to a multi-cabinet memory fabric. |
RAM really? - RAM in an SSD context|
|I think it's not too strong
to say that the enterprise PCIe SSD market (as we once knew it) has exploded and
fragmented into many different directions.|
|what's changed in enterprise
|The same memory block may
have different ECC codes wrapped around it at different times in its operating
life - depending how healthy it looks. And different ECC codes may be used
within the same memory chip at the same time.|
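That idea of rewrapping blocks with different strength codes as they age can be sketched like this (the thresholds and code names are invented for illustration, not any specific controller's policy):

```python
# Minimal sketch of adaptive per-block ECC selection (illustrative
# thresholds and code names only).

# Candidate codes, from cheap/weak to costly/strong: each trades usable
# capacity and decode latency for correction strength.
ECC_LEVELS = [
    ("BCH-light",  8),   # corrects up to 8 bit errors per codeword
    ("BCH-heavy", 24),
    ("LDPC",      60),
]

def pick_ecc(erase_cycles: int, recent_bit_errors: int) -> str:
    """Choose an ECC scheme for a block based on its wear and observed
    raw bit errors; the same block may be rewrapped with a stronger
    code later in its operating life as it degrades."""
    if erase_cycles < 1_000 and recent_bit_errors <= 4:
        return ECC_LEVELS[0][0]
    if erase_cycles < 3_000 and recent_bit_errors <= 16:
        return ECC_LEVELS[1][0]
    return ECC_LEVELS[2][0]

print(pick_ecc(200, 1))     # fresh, healthy block -> BCH-light
print(pick_ecc(2_500, 10))  # mid-life block       -> BCH-heavy
print(pick_ecc(5_000, 30))  # worn block           -> LDPC
```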
care management & DSP IP in SSDs|