leading the way to the new storage frontier
SSDs - boring right?
after AFAs - what's the next box?
3D nand fab yield - the nth layer tax?
how fast can your SSD run backwards?
who's who in the SSD market in China?
hold up times in 2.5" military SSDs
after 2017 - questions re SSD's onward direction
consequences of the 2017 memory shortages
|ReRAM based architectures for Processing-In-Memory (guide to papers and deep thinking)|
| Editor:- May 1, 2018 - Processing in memory and ReRAM are both making their mark independently as noteworthy technologies which each promise new fashions in the shape of future memory systems design. But how about combining both? |
A new paper - Survey of ReRAM-Based Architectures for Processing-In-Memory and Neural Networks (pdf) - by Sparsh Mittal, Assistant Professor at the Indian Institute of Technology Hyderabad - summarizes the state of the art.
In his abstract Sparsh says "As
data movement operations and power-budget become key bottlenecks in the design
of computing systems, the interest in unconventional approaches such as
processing-in-memory (PIM) and machine learning (ML), especially neural network
(NN) based accelerators has grown significantly. ReRAM is a promising
technology for efficiently architecting PIM and NN based accelerators due to its
capabilities to work as both: high-density/low-energy storage and in-memory
computation/search engine. In this paper, we present a survey of techniques for
designing ReRAM-based PIM and NN architectures. By classifying the techniques
based on key parameters, we underscore their similarities and differences."
...read the article (pdf)
Editor's comments:- It's fascinating to see how researchers in computational memory architecture have blended techniques borrowed from classical analog computers with pragmatic local digital cleanup and pure digital logic to create hybrid analog-digital computing elements which make the best use of latency and resolution to create multiplier-accumulator and search-by-value blocks using ReRAM.
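As a rough illustration of the analog multiply-accumulate idea, here's a toy simulation of a single crossbar column: weights are stored as quantized cell conductances, inputs arrive as word-line voltages, and the bit line sums the per-cell currents. This is a sketch under stated assumptions (the 16-level quantization, the normalized units and the random test vectors are all illustrative), not any vendor's design.

```python
import numpy as np

# Toy model of a ReRAM crossbar multiply-accumulate (MAC) column.
# Weights live as cell conductances G (quantized to a few analog levels),
# inputs are applied as word-line voltages V, and Kirchhoff's current
# law sums the per-cell currents I = G * V on the bit line.

def quantize(values, levels, lo, hi):
    """Map real-valued weights onto a small grid of conductance levels."""
    grid = np.linspace(lo, hi, levels)
    idx = np.abs(values[:, None] - grid[None, :]).argmin(axis=1)
    return grid[idx]

rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=8)   # ideal digital weights
inputs = rng.uniform(0.0, 1.0, size=8)     # word-line voltages (normalized)

exact = float(weights @ inputs)            # what a digital MAC would give

# 4-bit conductance quantization (16 levels) - the analog resolution limit
g = quantize(weights, levels=16, lo=-1.0, hi=1.0)
analog = float(g @ inputs)                 # summed bit-line current (normalized)

print(f"digital MAC: {exact:.4f}")
print(f"analog  MAC: {analog:.4f}  (error {abs(analog - exact):.4f})")
```

The error is bounded by the conductance step size times the number of rows - which is exactly the latency/resolution tradeoff the hybrid designs juggle with local digital cleanup.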
My reaction was like the one I had when I saw the specifications of the first DSP chips - not very good analog combined with not very good digital - but from those earliest days whole new classes of products emerged. ReRAM ML engines may have very niche uses and be incredibly difficult to design but it only takes one or two killer applications to make new technologies unignorable.
|can memory chips be made in the wrong place?|
editor - StorageSearch.com - April 30, 2018
| Is a memory chip in Country A worth the same in Country B?|
If supplies were plentiful, and if there was efficient and effective competition, and low barriers to free trade and market entry - then the answer would be:- Yes. Enough of those conditions seemed to prevail worldwide up to about 3 years ago that if anyone had tried to create a new mainstream civilian memory company (in the DRAM or nand flash markets) then the effort would have been viewed as maybe being nuts. Why bother? There would have been little appetite to invest in such a new semiconductor venture. The IP barriers alone were strong enough to deter such efforts and the risks and rewards from the competitive side (plentiful cheap memory and "forever downwards - like gravity" price projections) would have been sufficient deterrents to such investments.
But fast forward to today. The market position power revealed by the memory shortages - coupled with geopolitical concerns which received serious airing and analysis in the beauty pageant of who might be a fit buyer of Toshiba's memory business - exposed the strategic sensitivities of the memory business. Recent actions by US regulators to block technology sales to significant China based technology companies which fell afoul of well known US sanctions on 3rd party countries (for example - "survival at risk amid US ban" - said SeekingAlpha.com - while "US penalties are 'unfair'" - said ChinaDaily.com.cn) - coupled with the recalculation of value effect inevitably inspired by asking what would happen if something like the recently imposed tariffs on photovoltaics were to be applied to memory chips by any country for any reason - is creating a climate in which the Country A versus Country B question may change the assessment of investing in new memory companies to include a stronger weighting for the geographic question of where the customers are compared to the factories which make the chips.
I'm assuming here that another factor in reopening this type of question is that the imbalance between memory supply and demand may have changed from its old pattern of quick rebalancing - to becoming (for maybe several more years yet) - a new normal of unplentiful and higher priced memory. Here are some articles which discuss the temperature of thinking.
Older readers will
remember that the question of whether memory chips might need passports and
visas to travel from one part of the world to the other (and the related
question of what kind of buyer reception these coach class chip tourists would
get when they arrived) was for many decades the norm. It was only in the dotcom
era that we got used to just in time inventories for manufacturing and the
free movement of consumer grade technology parts zigzagging their way around
the factories of the planet like the shadows of drunken satellites.
- A blog on AnandTech -
DRAM Industry Spreading its Wings: Two More DRAM Fabs Ready (April 25,
2018) - says "Innotron Memory and Fujian Jin Hua Integrated Circuit, are
gearing up for volume production of computer memory in the coming month. Both
manufacturers were founded with the help of the Chinese government, their output
will initially be consumed locally."
The reality is there isn't enough effective competition in the memory market and it took the lack of headroom in supply to show our vulnerability to these strategic links and dependencies.
|electrons more or less? (sensitive nm meets 2/3D)|
|Editor:- April 26, 2018 - The sensitivity of progressively smaller nand flash cells to actual trapped charge (as measured by the number of electrons) has an immediate and direct bearing on the repeatability of data reads compared to writes. Hence the original need for noise tolerant ECC technologies. Also playing their part - with the passage of time - are the corrosive attacks on stored charge which accumulate in their effects due to leakage, disturbance etc - not to mention the damoclean biggie of write cycle damage aka flash wear (endurance). These have been summarized by many useful shortcuts in the past as part of the ongoing narrative associated with a familiar pairing:- controller complexity growing as the necessary mitigating accompaniment (like a data integrity cop) to memory cell sizes shrinking - in the quest for ever cheaper memory.
You can see some good recent examples of how these relationships pan out in a recent article by Andrew Walker - The Future of Non-volatile Memory (April 11, 2018) - which is part of a series he's written on 3DInCites.
Among other things Andrew notes that...
- In 2D nand flash at the 16nm level, fewer than 10 electrons will cause a 100mV threshold voltage shift.
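A back-of-envelope check of what that figure implies, treating the storage node as a simple capacitor (ΔVt = n·q/C). The capacitance below is derived from the article's 10-electron/100mV numbers, not a measured device parameter:

```python
# Threshold voltage shift per trapped electron, modeling the storage
# node as a capacitor: dVt = n * q / C.
# The 16nm figure (10 electrons ~ 100mV) is from the cited article;
# the capacitance is what that figure implies, not a datasheet value.

Q_E = 1.602e-19          # electron charge, coulombs

n_electrons = 10
dvt = 0.100              # 100 mV shift (from the article)

implied_C = n_electrons * Q_E / dvt       # farads
shift_per_electron_mv = dvt / n_electrons * 1000

print(f"implied storage-node capacitance: {implied_C * 1e18:.1f} aF")
print(f"shift per single electron: {shift_per_electron_mv:.0f} mV")
```

At roughly 16 attofarads and 10mV per electron, a handful of leaked or disturbed electrons is enough to blur adjacent multi-level-cell voltage states - which is why the ECC burden grows as cells shrink.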
The main thrust of Andrew's article is to indicate that even 3D nand flash has shrinkability limitations because of the damage caused by writes. And this is one of the reasons that some memory companies have long been looking at other technologies which don't rely on trapped charges. Regarding application roles he says - "STT MRAM is emerging as the embedded nonvolatile memory of choice for advanced silicon processes. It is also being touted as a replacement for SRAM and, with a small enough memory cell, DRAM. It is unlikely to replace nand flash." ...read the article
- Whereas in 3D (skyscraper) nand flash the total number of electrons stored in the silicon nitride reservoir (occupying a similar 2D planar footprint) is much greater - resulting in more stability in the threshold voltage and better storage reliability.
news & white papers
|Toshiba memory sale reenters What If? zone|
|Editor:- April 24, 2018 - The sale of Toshiba's memory business still has the potential to unravel according to various reports which note that regulatory delays have pushed completion of the deal with Bain (announced in 2017) into a different market territory - in which Toshiba's parent no longer needs the proceeds to remain solvent and the value of the flash memory business is not the same as it was.
- "Toshiba and Bain want to finalize the current agreement, but they can't..."
- "Activist investor Argyle Management of Hong Kong says the memory unit could fetch $40 billion in an IPO, whereas the Bain/Hynix sale will only bring in $18.6 billion."
- Re Toshiba's forced memory sale - "...the tech giant has missed a deadline of March 31, due to Chinese antitrust regulators, which are yet to permit the acquisition to take place."
|Google paper shows processing in memory can save power|
|Editor:- April 2, 2018 - Here's a new acronym for you and also a new way to think about the value of offload logic in memory arrays too. They both appear in a recent paper - Google Workloads for Consumer Devices: Mitigating Data Movement Bottlenecks (pdf) - which started as a research project in Google.
- First - the new (to me) acronym:- PIM - processing in memory. PIM is a synonym for "in-situ SSD / memory processing" - a concept which has been associated with creeping refinements and various different implementations since it first came into common usage in 2014.
The authors say... "Our analysis shows that offloading the primitives (for widely-used Google consumer workloads) to PIM logic... eliminates a large amount of data movement, and significantly reduces total system energy (by an average of 55.4% across the workloads) and execution time (by an average of 54.2%)." ...read the article (pdf)
- Second - the new idea:- saving power. We're used to the idea that PIM (or in-situ memory processing) can provide substantial acceleration for applications when the core logic has been custom tuned for a particular set of workloads. The new thing is that PIM can provide a worthwhile reduction in electrical power too - by reducing movements of data to locations outside the associated memory array. And a power optimized design can deliver useful acceleration at the same time.
should we set higher expectations for memory systems? (2016)
|Editor:- March 21, 2018 - Re newsletters and blogs:- Carey Hedges, founder - HN Marketing - told me 22 years ago (when I was starting an ezine called MarketingViews - with tips for marketers to interface better with readers of my enterprise server publication which preceded StorageSearch) that most newsletters (and by inference - in today's world - blogs) rarely make it past 3 editions.|
So this month when I saw Rohit Gupta - Segment Manager, Enterprise Storage Solutions at SanDisk - saying on linkedin - "My 4th blog - NVMe Part II" - about his Deep Dive - 6 NVMe Features for Enterprise & Edge Storage - I was reminded of that earlier 1996 PR related conversation and congratulated Rohit on having got to #4. I said it looks like a valuable repository of ideas for people who are coming into the enterprise NVMe thought space.
In fact I had been alerted to one of Rohit's earlier blogs in November 2016 by Rebecca Parr - who had been a customer of mine 6 years ago when she was Marcoms Manager at Virident Systems - which pioneered spikeless performance in fast enterprise PCIe SSDs using its controller architecture design - rare at the time but now the kind of thinking which has become mainstream in large scale flash arrays. It's strange how sometimes it's a series of accidental connections - spaced out over time - which trigger me to write something here - which in this case is to say:- you might already have read plenty of blogs about NVMe SSDs - but here's another one you might want to read too. And getting to #4 is noteworthy.
Among other things Rohit says..."Beyond performance, the NVMe
protocol also supports IO multipath, which is particularly useful for redundancy
and load balancing purposes... NVMe namespace sharing combined with multipathing
builds the foundation for enterprise-class storage systems." ...read
the article, more
blogs by Rohit Gupta
|improving the latency and energy of commodity DRAM using adaptive architecture - new research|
|Editor:- March 13, 2018 - The enterprise flash SSD market has a long history of design advances which came from the cumulative understanding gained by the independent characterization of memories - this mostly having been done by independent SSD and controller companies rather than the original manufacturers of the memory chips themselves. But I haven't heard much in the past 10 years about similar activities related to DRAM - and part of the reason may well be that the companies which used to do such in depth RAM characterizations in earlier phases of SSD history - the RAM SSD companies like Texas Memory Systems and Solid Data Systems - had mostly stopped design work on new high capacity RAM SSDs by about 2008 due to the competitive advantages (in a storage array context) of flash.
So I was surprised and delighted to come across a new thesis - Understanding and Improving the Latency of DRAM-Based Memory Systems (pdf) - by Kevin K. Chang - Carnegie Mellon University (submitted December 2017 as part of his PhD) - which (in approximately 200 pages) describes his ongoing work and insights into DRAM characterization and system optimization opportunities. His research measured and analyzed the relationships between supply voltage and latency in commodity DRAM and explored ways to optimize latency while still maintaining data integrity and reducing power consumption. Among several schemes described in this paper:-
- an adaptive latency scheme he calls "Flexible-LatencY DRAM (FLY-DRAM)" which leverages the variation of latency that occurs within different localities of DRAM chips.
- Voltron - a new mechanism that improves system energy by dynamically adjusting the DRAM supply voltage using a new performance model which is based on a better understanding of the relationships between cell retention, refresh rate, temperature and other system factors.
...read the article (pdf)
See also:- what's RAM really? - RAM in an SSD context
|no more anti-trust wait states|
Toshiba Memory sale clear to close June 1
Editor:- May 17, 2018 - Toshiba announced it has received all required regulatory approvals for the sale of Toshiba Memory Corp. The sale to the Bain led consortium is expected to close on June 1.
See also:- SSD beauty pageant - timeline of stories
Crossbar will demonstrate ReRAM AI accelerator chip
Editor:- May 14, 2018 - Crossbar announced that it will demonstrate a test chip showing the capabilities of its ReRAM technology for AI in the form of a facial recognition accelerator at the Embedded Vision Summit next week in Santa Clara, California.
The VP Marketing at Crossbar said - "The biggest challenge facing engineers for AI today is overcoming the memory speed and power bottleneck in the current architecture to get faster data access while lowering the energy cost. By enabling a new, memory-centric non-volatile architecture like ReRAM, the entire trained model or knowledge base can be on-chip, connected directly to the neural network with the potential to achieve massive energy savings and performance improvements, resulting in a greatly improved battery life and a better user experience."
Editor's comments:- It's a great idea for Crossbar to integrate the capabilities of their SoC compatible ReRAM technologies into a demonstration accelerator like this as it cuts out a lot of guesses and the requirement to imagine what can be done with the new architectures so enabled.
Here's an example of this powerful business development idea from SSD market history. We all know (or have heard of) Fusion-io. They're the company (founded in December 2005) which transformed the enterprise server market from SSD deniers into born again PCIe SSD acceleration evangelists. Fusion-io was acquired for $1.1 billion in 2014. But you might be surprised to know that despite its huge market impact Fusion-io's original business plan wasn't the one which they later followed. After they became successful the founders told me their original idea had been to operate as a software and IP licensing company.
And they said that
their prototype PCIe SSD cards - the ioDrives - had been intended simply to
demonstrate the concept of what Fusion's software and architecture could do.
The founders had expected that server makers would license the technology but
build their own cards. However, when server customers saw what this acceleration
technology could do for their own server sales (or those of competitors if they
adopted it) they chose to buy cards instead. And that's how the PCIe SSD market took off.
It's possible that with the AI memory accelerator market
we're going to see application specific products born out of demonstrators which
are too good to stay in the labs. And that's a proposition which I also
mentioned in my recently completed blog -
are we ready for infinitely faster RAM?
Mercury says TLC can be used in avionics (if you know how)
Editor:- May 1, 2018 - Mercury announced it is offering TLC flash in a new SSD on a chip (22mm x 32mm BGA) for secure storage roles in SWaP constrained environments such as aircraft, unmanned systems and mobile ground applications including secure laptops and tablets.
Mercury says - "While TLC flash technology is ideal for high-capacity data storage in a smaller footprint than MLC and SLC technologies, its reliability and performance in military operating environments has been disputed until today. Mercury has eliminated these threats by custom-engineering a new variant of its ARMOR processor specifically for this new commercial memory technology enabling it to operate in SLC mode for high reliability and long-term endurance while sustaining high-speed read/write operations."
It is a notable milestone that a military SSD company like Mercury is using TLC in SLC mode for secure applications. The technique of virtual SLC and its reliability aspects is one of several described in the academic paper - A Survey of Techniques for Architecting SLC/MLC/TLC Hybrid Flash Memory based SSDs (27 pages pdf) - which I mentioned in an earlier news story.
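For readers new to the idea, here's a rough sketch of the capacity/endurance tradeoff when TLC is run in (pseudo) SLC mode. The cycle counts and the 10x endurance multiplier are illustrative assumptions, not Mercury's figures - real gains depend on the specific nand part and the controller's trim settings:

```python
# Rough capacity/endurance tradeoff for TLC flash run in pseudo-SLC
# mode (storing 1 bit per cell instead of 3). All numbers here are
# assumptions for illustration, not vendor specifications.

TLC_BITS_PER_CELL = 3
TLC_PE_CYCLES = 3_000            # typical order for 3D TLC (assumed)
PSLC_ENDURANCE_GAIN = 10         # assumed multiplier in SLC mode

def pslc_view(raw_tb):
    usable_tb = raw_tb / TLC_BITS_PER_CELL          # 1 of 3 bits used
    pe_cycles = TLC_PE_CYCLES * PSLC_ENDURANCE_GAIN
    return usable_tb, pe_cycles

cap, cycles = pslc_view(raw_tb=3.0)
print(f"3TB of raw TLC run as pSLC -> {cap:.1f}TB usable, ~{cycles:,} P/E cycles")
```

The appeal for military buyers is that the wider voltage margins of 1-bit-per-cell operation buy back both endurance and read repeatability - at the cost of two thirds of the raw capacity.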
Re the adoption of TLC nand (or any new mainstream memory) into successive markets - SSD history demonstrates a timetable of adoption determined by how long it takes for the new devices to shake out processing fluctuations and how long it takes for application markets to determine they're good enough.
Consumer SSDs used to be the first target for new memories - because consumer products have lower standards. Then some time later enterprise, followed by military (subject to temperature compatibility) and maybe later still - medical markets. At the latter end of this list the later adoptions are due to longer design times (to evaluate and integrate with other reliability features) and longer customer qualification times. However in recent years the order of memory adoption has changed - with big cloud users jumping right in at the start contemporaneously with consumer. Clever cloud architects can live with and work around infant media defects - and are willing to put design effort into using new technologies - provided that the system benefits provide a statistically significant improvement in their systems costs.
As a yardstick for how long these successive adoptions can take... It's 2018 now and this is the first news story about a significant military SSD using TLC. In my timeline for the enterprise - it was 2015 when TLC was considered good enough to ship in high quality enterprise all flash arrays.
did leading DRAM makers collude to protect high prices?
Editor:- May 1, 2018 - One of the almost predictable consequences of the memory shortages and price hikes centered around 2017 has been greater scrutiny of the memory market by regulators and now - a class action lawsuit (pdf) filed against the 3 largest DRAM makers (Samsung, Micron, and Hynix) which dominate the market.
Among other things the plaintiff document alleges - "Defendants combined and
contracted to fix, raise, maintain, or stabilize the prices at which DRAM was
sold in the United States from at least June 1, 2016 to February 1, 2018 (the "Class
Period"). Defendants' conspiracy artificially inflated prices for DRAM
throughout the supply chain that were ultimately passed through to Plaintiffs
and the Class, causing them to pay more for DRAM Products than they otherwise
would have absent Defendants' conspiracy."
As with many legal
documents this one is a long read. In it the plaintiffs suggest that these
memory companies communicated their strategies by means of public investor
statements - "During the Class Period, Defendants continued their efforts
to coordinate their DRAM supply decisions, as reflected in public comments by
Defendants that urged each other to keep industry supply in check. Defendants
each made public statements affirming their commitment to the common plan to
curtail supply, and to not compete for each other's market share by supply
expansion. For example, Defendants informed the other Defendants through public
statements, that they would keep total wafer capacity flat in order to constrain
DRAM supply growth, they would only grow DRAM supply between 15-20% in 2017,
even as DRAM demand grew 20-25%, and that they would refrain from taking each
other's market share." ...read
the lawsuit (pdf)
Editor's comments:- The tactics each sales
force used to decide allocation between different customers and bundling deals
(if any) may come under scrutiny. Dealing fairly in a shortage requires very
strong controls to avoid tipping into anti competitive behaviors.
The history of the memory market does include proven examples of past price fixing. You can read more about them by visiting https://www.justice.gov and searching for "DRAM".
See also:- RAM news - ain't what it used to be, the history of understanding and misunderstanding SSD pricing
Spin Transfer Technologies says its breakthrough tweak to MRAM structure will enable new uses in datacenter ASICs
Editor:- April 30, 2018 - Although it can be an enigmatic challenge figuring out what the market positioning and application roles of some alternative nvms really are - Spin Transfer Technologies left no room for doubt in press releases today about recent enhancements in their (ASIC compatible) MRAM technology.
SRAM replacement is one of the target markets. STT says its improved MRAM - with its Precessional Spin Current (PSC) structure - lengthens retention time by a factor of over 10,000 (1 hour retention becomes more than 1 year retention) while reducing write current.
STT says the new PSC structure is compatible
with most MRAM processes, materials and tool sets and adds only about 4nm to
the height of the pMTJ deposition stack. PSC decouples the static energy
barrier that determines retention from the dynamic switching processes that
govern the switching current. Among the improvements:- PSC reduces the read disturb error rate by up to 5 orders of magnitude.
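A quick sanity check on the arithmetic behind the retention claim - a 10,000x multiplier applied to a 1 hour baseline does indeed clear the 1 year mark:

```python
# Sanity check: does 10,000x improvement on a 1 hour retention
# baseline exceed 1 year, as the press release claims?

HOURS_PER_YEAR = 365 * 24        # 8,760

baseline_hours = 1
improved_hours = baseline_hours * 10_000

years = improved_hours / HOURS_PER_YEAR
print(f"{improved_hours:,} hours ~ {years:.2f} years")
```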
no magic bullet to shorten how long it takes to test and verify Bullet Train SSDs
Editor:- April 26, 2018 - Aspects of the journey to get terabyte class industrial SSDs approved for use in bullet trains were described recently by CoreRise - which beat 7 other competitors and has been supplying batches of its SSDs for onboard use in these, the world's fastest running (200 mph) passenger trains.
CoreRise's Product Manager said - "Before mass
production, there are more than 500 items of the tests in 57 categories to be
passed. Moreover, the test standard is very strict. It need not only to conform
to the customer requests or nominal standards, but also enough safety
redundancy, and guarantee the reliability and consistency of technical..."
Editor's comments:- The interesting thing in this
story is how the customer qualification processes and verification tests for
reliable operation in harsh environments for electronics take longer
than the original design of the SSD. That's one of the distinguishing
characteristics of the industrial SSD business and sets it apart from consumer
and enterprise markets.
See also:- the business of custom SSDs
Hynix says DRAM prices will stay high due to continuing growth
Editor:- April 24, 2018 - SK Hynix today
announced it will
enter the enterprise PCIe
SSD market as one of several plans to diversify its product portfolio.
Re its DRAM business in the quarter ended March 31, 2018 - Hynix reported - "Quarter-over-quarter, DRAM bit shipments decreased by 5% due to weak mobile demand and lessened production days nevertheless of sustained robust server demand. However, the average selling price rose by 9% through evenly increased price for all DRAM products."
Hynix said in a related earnings call (audio / transcript on SeekingAlpha.com):-
"(Global) demand for DRAM is expected to grow by low 20% level this year. Supply growth will not be enough to ease the price supply situation, even if suppliers accelerate their migration to 1Xnm and continue to add wafer capacity by increasing investment. In the NAND market the demand growth continues around SSD. Enterprise SSD in particular is expected to drive growth." ...read the transcript
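The two quoted DRAM percentages can be cross-checked with one line of arithmetic - bit shipments down 5% but prices up 9% still nets out to revenue growth:

```python
# Cross-check of the quarter's DRAM numbers: shipments fell 5%,
# average selling price rose 9% - what does that imply for revenue?

bit_change = -0.05
asp_change = +0.09

revenue_change = (1 + bit_change) * (1 + asp_change) - 1
print(f"implied DRAM revenue change: {revenue_change:+.1%}")
```

A roughly 3.5% quarter-over-quarter revenue rise on falling shipments is exactly the shape of a shortage-era market.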
Editor's comments:- As the continuing ripple effects of the memory shortages are now in their 3rd calendar year of impact you have to ask yourself - is this the new "business as usual"?
As I said here before:- For those suckled on the "memory as commodity" business model of semiconductor product marketing the current surreal competitive landscape must make them feel they were suckered.
See also:- a view of memory boom bust business cycles
a NAS / AoE view of no SPOF
Editor:- April 19, 2018 - No Single Point of Failure + Golden Images is one of a series of recent blogs by Brantley Coile, Founder/CEO at Coraid (see also older Coraid 2009 to 2015) about topics mostly related to good software design in the context of network storage.
Editor's comments:- Brantley's musings about the storage software industry from a historic perspective have become a regular and enjoyable read for me in recent months. He's written about topics as diverse as the history of hard drive interfaces and the ideal size of software teams.
See also:- high availability enterprise SSDs
what's the value of infinitely faster RAM?
Editor:- April 17, 2018 - A recent blog on StorageSearch.com - are we ready for infinitely faster RAM? - asks - among other things - what's the value of having very much faster memory?
Looking at past decades for clues - there was limited scope for being able to change the world of computing by simply having faster memory. Even if you could go back in time and take compatible chips or SSDs from today's market and retrofit them - you wouldn't change very much - because the nature of applications and bottlenecks was a quagmire of limited thinking and finite lookalike expectations.
The enterprise computing market of today is different as it's not just the actions of people which create workloads but the economic value of machines which create data - from inventing and discovering new relationships in data anywhere which can be leveraged into monetizable opportunities - provided that the results can be computed quickly enough.
But would you recognize a new memory accelerator if you saw it? Faster memory systems may not even look like traditional memories and their "fastness" will be application and context dependent. ...read the article
unveiling a 200TB hard drive for cloud apps - the Titanosauros 1
Editor:- April 1, 2018 - Triassic Peripherals today exited stealth mode and announced its first product - a 200TB hard drive aimed at cloud applications. The Titanosauros 1 has a dual port interface, spins at 5,000 RPM and comes in an 8" form factor. Triassic says that a 1U rack can provide 1 petabyte of raw storage. Despite being optimised for electrical power the outermost cylinders of the drive can provide data throughput faster than a 15k 2.5" drive. Pricing data is available on request.
One of the co-founders - Fred Spinstone - said that in a previous company his team had been supporting legacy EOL 8 inch IPI-2 hard drives for military customers but using flash inside. (Similar in business concept to the EOL mitigation solutions offered by Reactive Group and others.) The idea for Triassic was - hey let's put a hard drive in a hard drive enclosure. Random access time isn't great at 50ms but in a cloud system the metadata knows where the chunky data lives and systems performance is tiered through servers and flash anyway.
The patents for the 8" platters have expired so Triassic isn't expecting patent suits from the usual suspects. ...comments and more info
See also:- HDD articles & news
Micron hints at AI assisted porting of compute intensive models to FPGA-inside memory array accelerators
Editor:- March 30, 2018 - A new blog - Memory Matters in Machine Learning for IoT - by Brad Spiers, Principal Solutions Architect, Advanced Storage at Micron - reveals significant progress in software tools development which is intended to reduce the time and complexity of porting machine learning models onto in-situ memory accelerators implemented by FPGAs embedded into DRAM arrays. The blog makes specific reference to applications with Micron's PCIe connected Computing Solutions (pdf) - which provide FPGAs integrated with either DDR-3 or HMC - and design, simulation and runtime support tools.
Among other things - Brad Spiers says... "Micron is engaged with machine learning experts, like FWDNXT, to enable seamless transfer of machine learning models onto FPGAs. Models are first created in the normal way, using the same software that data scientists use every day - Caffe, PyTorch or Tensorflow. The models output by these frameworks are then compiled onto FPGAs by FWDNXT's Snowflake compiler." ...read the article
Editor's comments:- creating AI based software productivity tools which could cut many months off the design time to create FPGA based in-situ memory application accelerators is an extreme case of Memory Defined Software. Such developments could become as significant for startups creating blue sky HPC based knowledge enabling tools as was the availability of microprocessor development systems for the democratization of digital electronics in the 1970s.
16Gb NRAM chips could sample in 2019 - says Nantero
Editor:- March 29, 2018 - NRAM (a non volatile memory technology which has been in commercial development since 2001) by Nantero may be sampling next year with chip densities of 16Gbit - according to an interview - Nantero CEO says NRAM production is close - on eeNewsAnalog.com - which says the memory technology supports 5ns write speeds and retention of more than 10 years at 300°C.
Nallatech enters the in-situ SSD market
Editor:- March 19, 2018 - A new entrant to the in-situ SSD processing market is Nallatech which has announced a series of NVMe storage accelerator modules which include application programmable FPGAs closely coupled with memory. Among the models announced:-
- a HHHL PCIe SSD accelerator featuring up to 4x M.2 NVMe SSDs and 4GB SDRAM coupled on-card to a fully programmable Xilinx FPGA.
- a 2.5" U.2 hot swappable fully-programmable accelerator featuring a Xilinx Kintex UltraScale+ FPGA and 8GB DDR4 SDRAM memory.
Nallatech provides consultancy services assisting customers in the porting, optimization and benchmarking of applications executed on Nallatech FPGA accelerators.
Nimbus samples 100TB SAS SSDs
Editor:- March 19, 2018 - Nimbus Data Systems has made another significant advance in the development of multipetabyte energy-efficient solid state storage racks with the announcement today that it's sampling 100TB 3.5" SAS SSDs with its new ExaDrive technology. The DC100 has balanced performance - 100K IOPS R/W and up to 500 MBps throughput - and consumes 0.1 watts/TB - which Nimbus says is 85% lower than competing drives used in similar array applications - such as Micron's 7.68TB 5100.
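Those power claims can be unpacked with simple arithmetic. The competitor figure below is inferred from the "85% lower" statement, not a published spec:

```python
# Implied power figures from Nimbus's 0.1 W/TB claim for the DC100.
# The competing-drive number is derived from "85% lower" - it is an
# inference, not a datasheet value.

WATTS_PER_TB = 0.1
CAPACITY_TB = 100

drive_watts = WATTS_PER_TB * CAPACITY_TB          # per 100TB drive
competitor_w_per_tb = WATTS_PER_TB / (1 - 0.85)   # what "85% lower" implies

print(f"DC100: ~{drive_watts:.0f} W per 100TB drive")
print(f"implied competing drives: ~{competitor_w_per_tb:.2f} W/TB")
```

At around 10W per 100TB drive, the power budget of a petabyte shelf starts to look more like a lightbulb than a storage array - which is the point of the energy-efficiency positioning.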
Nimbus says the use cases are:-
- Data centers and cloud infrastructure (scale, efficiency)
- Scale-out systems (object and file storage)
- Edge computing (IoT, embedded applications)
Availability is expected to be summer 2018.
See also:- ExaDrive technology
I asked Thomas Isakovich, CEO and founder of Nimbus some questions about the new ExaDrive technology.
Editor - The drives announced by your flash partners last year used planar 2D flash. Does the 100TB family use 3D flash? Knowing the answer one way or another will enable some people to make their own judgements about incremental upsides in the next year or so's roadmap. And also form a view about specification stability.
Tom Isakovich - Yes 3D flash for the ExaDrive DC.
Editor - The issue of cost per drive is an interesting one too. But the companies you
were working with last year have experience in processes which can produce a
high confidence reliable SSD for high value, mission critical markets (like
military) in which the reliability of every single SSD is critical. So my guess
would be that for integrators who have a serious interest in the ExaDrive DC100
they will be looking at the cost of drive failures on a system population
basis and that the value of fewer drives and less heat per TB is more important
than the headline cost of a single failed drive.
Tom Isakovich - I have
an interesting subject for you to consider on the topic of "reliability".
Namely, is an SSD any less reliable than an all-flash array? I contend that it
is not. In fact, an SSD is more reliable.
- Our ExaDrive DC has flash redundancy internally, with the ability to
lose about 8% of flash dies without any downtime, data loss or capacity
reduction. This is analogous to
RAID in a traditional
all-flash array that protects against media failure. So on the notion of media
redundancy, they are equally redundant.
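Nimbus's ~8% die-loss figure can be modeled with a trivial sketch - the die count below is an assumption for illustration only, since the drive's internal flash organization isn't disclosed:

```python
# Toy model of intra-SSD flash-die redundancy. The ~8% tolerance figure
# is from Nimbus's comments; the die count is an illustrative assumption.

def tolerable_die_failures(total_dies: int, spare_fraction: float) -> int:
    """Number of dies that can fail before user capacity must shrink,
    assuming spare dies transparently replace failed ones."""
    return int(total_dies * spare_fraction)

total_dies = 512  # assumed die count for a 100TB-class drive
survivable = tolerable_die_failures(total_dies, spare_fraction=0.08)
print(f"{survivable} of {total_dies} dies can fail with no capacity loss")
```

The point of the analogy is that the sparing happens inside the drive, rather than at the array's RAID layer.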
I'm thinking more on this. But empirically, an SSD is more
reliable than a system. The user can achieve desired redundancy in their overall
architecture, taking this into consideration.
- The ExaDrive DC has a 2.5 million hour MTBF with no moving parts.
That is about 6 times longer than the typical all-flash array (which includes
many active and moving parts). All-flash arrays have integrated power supplies,
active controllers, fans, and other components prone to failure.
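To put the MTBF figures in perspective, a constant-failure-rate model converts MTBF into an annualized failure rate (AFR). A sketch using the 2.5 million hour figure from the interview and the "about 6 times" ratio (the exponential model is a standard reliability assumption, not something Nimbus specifies):

```python
import math

HOURS_PER_YEAR = 8760

def annualized_failure_rate(mtbf_hours: float) -> float:
    """AFR from MTBF assuming an exponential (constant-rate) failure model:
    AFR = 1 - exp(-8760 / MTBF)."""
    return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

ssd_afr = annualized_failure_rate(2_500_000)      # ExaDrive DC figure from the interview
afa_afr = annualized_failure_rate(2_500_000 / 6)  # "about 6 times" shorter typical AFA

print(f"SSD AFR ~{ssd_afr:.2%}, AFA AFR ~{afa_afr:.2%}")
```

At these magnitudes the 6x MTBF gap translates to roughly a 6x gap in expected annual failures per unit deployed.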
what's the cost of deciding what is to be done?
March 10, 2018 - "In any computer architecture, it takes a lot more energy
to fetch and schedule an instruction than it does to execute that instruction"
- says Rado Danilak, founder and CEO - Tachyum
- in his new blog - Moore's
Law Is Dying - So Where Are Its Heirs? - which among other things - shows
how the transactional costs of fetching instructions and data dominate in classical
architectures.
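Danilak's point can be illustrated with widely cited ballpark energy-per-operation estimates from the academic literature (roughly 45nm-era figures; they are assumptions for illustration and do not come from his blog):

```python
# Rough energy budget per 32-bit operation, in picojoules.
# Ballpark academic estimates (~45nm era), used only to illustrate
# the fetch-vs-execute imbalance Danilak describes.
ENERGY_PJ = {
    "32-bit integer add":        0.1,
    "instruction fetch/decode": 70.0,   # assumed per-instruction control overhead
    "DRAM read (32 bits)":     640.0,
}

add = ENERGY_PJ["32-bit integer add"]
for op, pj in ENERGY_PJ.items():
    print(f"{op:>24}: {pj:8.1f} pJ  ({pj / add:,.0f}x a 32-bit add)")
```

Whatever the exact figures for a given process node, the arithmetic itself is a rounding error next to the cost of moving instructions and data to it.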
Editor's comments:- the needs of the cloud, coupled
with a growing understanding of the tradeoffs between latency
and energy consumption since the widespread deployment of solid state storage
have been the inspiration for rethinking all the classical elements of computer
architecture. Some of that thinking has been rooted in the memory space but
just as significant has been a rethinking of what processors should aim to do.
Tachyum announced external funding for its Cloud Chip
last month. And
as with previous disruptive technologies - part of the warm up process for the
market - is to educate more people about how things work now so they can better
appreciate what the new technologies offer.
WDC's enterprise flash hopes which were pinned on SanDisk are
evaporating - says The Register
Editor:- March 8, 2018 - If you're
interested in seeing market share charts for the biggest enterprise SSD
drive companies then take a look at a new entertaining article -
2 years and $19B later: What happened to WD's SanDisk enterprise flash advantage?
on The Register by Chris Mellor who
says among other things:- "WDC bought SanDisk in October 2015 for $19B. The
deal closed in May 2016. Since then SanDisk CEO Sanjay Mehrotra and a string of
other execs have joined Micron, now run by Mehrotra. It's tempting to see much
of Micron's gain as WDC's loss." ...read the article
Editor's comments:- If the enterprise SSD market
was expected to stay the same in terms of architecture, software and purpose
then today's market shares would mean more.
But as you know I think
the next advance towards supporting big memory apps may make AFAs and enterprise
SSDs seem quaint. My article -
Enterprise SSDs won't EOL overnight -
suggests that AFAs and other storage arrays will continue to exist for another
decade or more but as a progressively declining percentage of the
memoryfication systems market. And eventually enterprise storage systems may
just head towards being a legacy emulation in software defined memory
systems and the cloud rather than real actual storage PHY and on-premises boxes.
Going back to the WDC story - with acquisitions come patents too. Maybe they
will prove to be more valuable than the acquisitions themselves. (See:-
acquisitions - 2000 to 2017.)
mouse site readers not scared by memoryfication of content
March 1, 2018 - Strange as it may seem - article views on StorageSearch.com in February 2018
were 23% higher than the year ago period.
Having written for
so long about SSDs and the impact of flash on the enterprise it would be ever so
easy to just rest awhile longer in those comfortably worn grooves. But to
be truthful it's been a struggle for me to visualize and try to anticipate the
important next step trends of the memoryfication of everything. Unlike storage -
which was relatively simple and well bounded by latency tiers and interfaces
and form factors - the new threads of architectural data system change are
appearing in disparate places - such as memory systems, inside the dark spaces
of processors and slashing across the legacy imaginings of system software.
Unscrambling the next generation possibilities isn't straightforward -
because ever since
2016 new developments in rackmount boxes and NVDIMMs and SSD controllers
have not been so isolated in their immediate market impact as they used to
be. So I'd like to say thanks to my readers for keeping up your
interest and thanks too to the many industry muses who by what they do and say
and talk about - keep me thinking about the next thing. And thanks too to my
advertisers (past and present) without whom my web publishing career would have been 22
years shorter.
SSD news in Mays of yore
Technology launched WhatsHot SSD - a hotspot analysis and tuning
tool for fast rackmount SSD accelerators. |
It would be another 6 years
before the first storage arrays became available which integrated automatic
caching of data between solid state storage and hard drives. That was the
XcelaSAN launched in
September 2009. But it wasn't till years later (when
new SSDcentric software companies were entering the market at the rate of one
each week) that the SSD software market became valued enough by
investors and would-be acquirers.
its HLNAND flash technology which could sustain 800MB/s.
announced the first branding program for SSD controllers.
This marked a turning point in how flash controller technology was viewed by the
mainstream storage market. In less than 3 years (2007 to 2010) the perception
changed from "who cares?" to "You care!" - which I wrote
about in Imprinting
the brain of the SSD.
May 2013 - Micron
began sampling a new hot swappable 2.5" PCIe SSD with 1.4TB MLC
capacity and 750K R IOPS.
May 2016 - Symbolic IO emerged
from stealth mode unveiling an enterprise server/storage architecture which
leveraged embedded persistent memory coding to provide data materialization,
dematerialization and acceleration.
- what's next?|
| Throughout the history of
the data storage market we've always expected the capacity of enterprise user
memory systems to be much smaller than the capacity of all the other attached
storage in the same data processing environment. |
A classic blog on StorageSearch.com - about
adapted memory systems - asks (among other things) if this will always be
the case.
Like many of you - I've been thinking a lot about the
evolution of memory technologies and data architectures in the past year. I
wasn't sure when would be the best time to share my thoughts about this one.
But the timing seems right now. ...read the
|If you're one of those who
has suffered from the memory shortages it may seem unfair that despite their
miscalculations and over optimism the very companies which caused the
shortages of memory and higher prices - the major manufacturers of nand flash
and DRAM - have been among the greatest beneficiaries. |
|consequences of the 2017 memory shortages|
|Memory Defined Software - a new market in the making|
|There's a new software idea that's been
experimented on in the AI skunkworks in the cloud and as patentable secret
enhancements in next generation embedded processor designs. This new concept and
exciting new market (for the VCs reading this) will be more significant than a
new OS and will mark a break in the way that the enterprise thinks about
software. You've had plenty of warning about the new chips but
memoryfication doesn't stop with faster storage. The idea didn't have a name
when I started writing about it. But what it should be called is obvious.
Memory Defined Software doesn't have to work at being backwards compatible
because the legacy storage industry will import and export to it if they want
to play in data's future.
See more about this in my blog -
Memory Defined Software. (Sometimes you can change the world with software
which breaks all the rules - if you can find the right platform to run it on.) ...read the
SSD news in Aprils of yore|
||Texas Memory Systems
offered the world's first performance related guarantees for SSD products. |
TMS promised they would outperform any competing storage system, or meet the
customer's agreed application speedup expectation - or the customer would get
their money back.
This approach was partly inspired by market
research data from the
SSD User Survey - which said that users would be more likely to try SSD
systems if vendors offered such guarantees.
The perceived risks for
users associated with buying (what seemed to be) relatively expensive
enterprise SSD systems from (mostly little known) vendors to obtain business
benefits from poorly understood and likely-to-change installed assets
-based on pre SSD thinking - continued to dampen adoption of SSDs by
mainstream users for the next decade - because it required considerable
technical expertise to understand what was being offered.
Later the enterprise flash market chose the route of creating plausible sounding
pricing models as the way to bypass technical performance unknowns - a
marketing trend which I wrote about in my article -
the Astrological Age of Enterprise SSD Pricing.
And although the utility pricing model - based on memory pricing roadmaps - didn't prove to be
sustainable when memory costs rose (in
2017) this didn't halt the onward progress of memoryfication because the
widescale adoption of flash meant that flashless users could see evidence of
the benefits in industries which they understood.
Technologies became the first enterprise SSD manufacturer to display end
user pricing online for the full range of its SSD arrays. |
Until that date the volatile nature of memory pricing and fear of price-led competition
had meant that most SSD oems declined to publish any pricing data.
||Seagate filed suit against
STEC alleging patent
infringements related to hard disk interfaces. |
The case was seen by
many SSD proponents as a potentially deadly but seriously misguided missile
launched at the entire SSD market. It was later dismissed as being without merit. And
later - helped by the acquisition of LSI's SSD business - Seagate itself became
a significant supplier of enterprise SSDs and SSD controllers.