|Micron turns up the heat
for adoption of 2.5" PCIe SSDs|
Editor:- May 3, 2013 - Micron recently announced it's sampling a new model in the
hot swappable 2.5"
PCIe SSDs market. The new drive has up to 1.4TB MLC capacity and can deliver 750K R IOPS. Micron specifies
endurance as "50PB of drive life".
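To put that endurance spec in more familiar terms - here's a rough conversion into full drive writes and drive-writes-per-day (my arithmetic, not a Micron figure - the 5 year service life and use of the full 1.4TB capacity are assumptions):

```python
# Editor's arithmetic (a sketch, not a Micron spec): convert the quoted
# "50PB of drive life" into full drive writes and drive-writes-per-day,
# assuming the maximum 1.4TB capacity and a hypothetical 5 year life.
ENDURANCE_TB = 50_000      # 50PB expressed in TB
CAPACITY_TB = 1.4          # max capacity of the 2.5" drive
SERVICE_YEARS = 5          # assumed service life - not from the datasheet

full_drive_writes = ENDURANCE_TB / CAPACITY_TB
dwpd = full_drive_writes / (SERVICE_YEARS * 365)

print(f"full drive writes: {full_drive_writes:,.0f}")  # ~35,714
print(f"drive writes per day: {dwpd:.1f}")             # ~19.6
```

Roughly 20 drive writes per day over 5 years - comfortably in high endurance enterprise territory.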
Micron is also offering half height, half length PCIe SSDs in the new range -
but to my mind it's the 2.5" drives which are the significant part of this announcement.
I wrote about the impact these new drives could have
on traditional PCIe SSDs and
SAS SSDs in an
article 12 months ago.
To summarize the main points in that article... the new form factor for PCIe will displace
high end SAS SSDs and likely make the 12Gbps SAS drives the last generation of
SAS as "performance drives".
SAS SSDs will in turn replace
SATA SSDs as the
removable drive of choice in traditional legacy fast-enough enterprise systems.
The new 2.5" PCIe SSDs will open up new markets in cost
sensitive incrementally upgradeable fast SSD racks.
At the high end of
the server side accelerated market, however, and particularly in
dark matter data
centers where the rack is seen as the replacement unit - I'm sure that good old
PCIe SSD cards and modules will continue to hold their ground - because they
have lower packaging costs and can be designed to be more efficient than smaller form factor drives. As I noted earlier this week - traditional PCIe SSDs will also be facing pressure from memory channel storage SSDs. But MCS won't impact 2.5" PCIe SSDs.
Before you start selling shares in any particular company - I'm talking here about
market juggling and realignments which will take 2-3 years to have a material
effect on existing market sizes and revenue. These changes won't happen
overnight. And these game changers in the
enterprise SSDs market
aren't taking place in the context of a zero sum game. The enterprise SSD
universe is expanding.
And here's another thing.
Last year I
told Micron's top SSD marketers that they weren't in tune with the needs of
enterprise SSD specifiers - because they had hopelessly slow and antiquated
processes for extracting technical information of the type that serious buyers need.
They seem to have taken those criticisms on board - because
now you can swim around in the info they've got about their new enterprise SSDs
on their web site - without having to sign NDAs and without waiting weeks to
talk to the person who knows what's missing on the datasheet. Still some details
missing - but it's a vast improvement on what they were doing before.
Some of you may think it's ironic that it's not Micron who's doing the flash thing
for memory channel SSDs. But bear in mind that semiconductor companies have to
feed the fab. And their priorities are to engage in established markets where
there is already known demand for millions of chips. Big memory companies don't
usually get involved in blue sky system innovation - except in
ORG type wolf packs.
Micron's got its own thing going with hypercube memory. And - as I've
said before - if that flies - it's another gating point for flash (if flash is
still around when that happens).
Fusion-io positions ioScale for huge SSD installations
January 16, 2013 - Fusion-io
has released a new PCIe
SSD called the ioScale
(3.2TB on a single half length PCIe slot) which is aimed at technically savvy
customers who have the potential to use thousands of cards in their
installations in new
dynasty enterprise SSD apps.
Pricing is under
$3,900 / TB and the minimum order quantity is 100 units.
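For a sense of what that minimum order means in dollar terms - a quick back-of-envelope (my arithmetic, assuming the largest 3.2TB card at the quoted price ceiling):

```python
# Editor's back-of-envelope on the published pricing (a sketch): assumes
# the largest 3.2TB card at the quoted ceiling of $3,900/TB and the
# 100 unit minimum order quantity.
PRICE_PER_TB = 3_900
CARD_TB = 3.2
MIN_ORDER_UNITS = 100

price_per_card = PRICE_PER_TB * CARD_TB            # ~$12,480
min_order_cost = price_per_card * MIN_ORDER_UNITS  # ~$1.25 million
min_order_tb = CARD_TB * MIN_ORDER_UNITS           # 320TB

print(f"entry ticket: ${min_order_cost:,.0f} for {min_order_tb:.0f}TB")
```

A 7 figure entry ticket - which is consistent with aiming the product at users who deploy thousands of cards, not dozens.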
Editor's comments:- When you first look at this product - you might be tempted to
think - So what? - isn't it very similar in capability to other products which
FIO (and others) have shipped already?
In one way you'd be right. The
ioScale's hardware design is based on FIO's experience in making low cost PCIe
SSDs for the workstation market - which is as close to
price pressure as FIO gets at the present time.
But the ioScale is
aimed at a special class of enterprise super users - whose apps and companies I
call:- new dynasty
and dark matter.
Fusion-io told me
that when they did market research into the kinds of customers who were already
using their SSDs they discovered the big enterprise SSD customers could be
segmented into 2 groups which superficially had similar performance needs - but
were very different in the ways in which they valued issues such as:-
- compatibility with traditional software apps,
- how they handle reliability,
- how often they refresh and replace their infrastructure,
- how they assess the cost / benefit of features within SSDs.
The traditional enterprise customers have the profiles which everyone in the
industry knows about and aims their products at - but the new type of enterprise
customers have needs which are only starting to clarify - and for this latter
type of customer - SSDs are a strategic business enabler - because they can
convert efficiencies in raw computing technology into real competitive advantage.
Fusion-io is one of the few companies in the world which
already has a set of these latter cloud / data factory economy customers
who each have already got thousands of high performance PCIe SSDs - and who
have the ability to scale up substantially if their requirements are met and the
SSD enabled economy grows in the directions expected.
Rick told me that
these customers do want
performance, and low cost - but they don't need many of the bundled frills
which are deemed to be necessary for traditional enterprise SSD customers. When
legacy apps report faulty drives - they change the drive or the rack. When
uber new dynasty SSD
users report faults - they route around them. Then when the time comes to
upgrade the CPU and storage capacity per square foot of that region in the
datacenter - the whole lot is forklifted out and replaced - faulty and unfaulty
racks - makes no difference.
Also - in these apps - hot pluggable
drives are a frill which is simply not worth paying for.
Dark matter SSD customers - at which the ioScale is aimed - also know much more about
the technical limitations of their infrastructure - and have the technical
expertise to change things to suit them better - if they think it's worthwhile.
So - for example - the ability to dive into SSD APIs and change their apps
code to get speedups or other new functionality - is something they will do -
whereas traditional enterprise customers prefer all new hardware to work with
pre-existing software in a tweak-free environment.
In my conversation with Rick White - I referred back to the software (which FIO launched in
August 2012 -
and which enables users to convert a standard server and a bunch of
PCIe SSDs into a
traditional SAN compatible storage system).
My assessment of that product - shared with readers at the time -
was that if it satisfied the needs of a small number of super users - who could
each buy maybe hundreds or thousands of such systems - that made it worthwhile
for FIO to bundle the concept and launch it. I thought the analysis I had seen
in other places - which compared it to traditional rack SSDs was completely
missing the point.
Rick confirmed my analysis was closer to the mark -
and many times in our discussion we returned to the problems in the SSD market
caused by faulty and incomplete
market research and
mistaken understandings of what the real issues in the market were.
One way of summarizing this is - that if you ask a bunch of people who go to a
trade show - what do you think about SSDs? - you're going to get a different
result to when you talk to people who are already deeply engaged in the SSD
market, have already done a lot of SSD projects and who spend nearly all their
waking hours thinking about what more can they do if they had even better SSDs?
It's not that the traditional market research gives you the wrong answers - it's more
that - if you're not in the right place in the SSD market then you don't
understand enough to pose the right questions - and you probably don't have
access to the people who will ultimately decide the answers.
Fusion-io isn't the only SSD company which is getting valuable business insights by
researching its strategic customers.
I reported last year that
SanDisk had adapted its
approach to enterprise customers by deciding to support competing hardware.
And there are many more examples I could mention if I had the time.
Strategic Transitions in SSD
Editor:- December 28,
2012 - the new home page blog on StorageSearch.com is called Strategic Transitions in SSD.
AI in the cloud needs SSDs
September 28, 2012 - "Consumer
products are moving more and more towards that touch of artificial intelligence
and in particular speaking to your devices and having your voice sent off to the
cloud, recognised and analysed on good computers there and transmitted back"
- said Steve Wozniak
Chief Scientist at
Fusion-io in the
interview / article -
deluge - the need for speed
"By 2014 - 50% of all workloads will be processed in the cloud."
Editor:- August 24, 2012 - that quote is part of a
profile of Tom - a mythical character in SMART's white paper - is MLC ready for the enterprise? (pdf) - presented today at the Flash Memory Summit.
Tom likes what flash can do to keep things
flowing. His boss likes it too. Tom may get promoted. But too late he learns
that the MLC SSDs
he's been using have different lives in different workloads (4 months in an
exchange server, maybe 1.6 years serving web pages). He starts to learn about
flash endurance etc - and instead of getting a kick up the management ladder - he risks getting
a kick out the door.
You can guess how this story ends... with a happy ending. A new generation of
SSDs - based on DSP flash technology - saves Tom's job - and he no longer has to
worry about reliability. The tale is engagingly presented in pictures. ...see
Tom's SSD adventure in the cloud datacenter - (pdf)
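The workload dependent lifetimes in Tom's story are just endurance arithmetic - a fixed total-write budget divided by very different daily write volumes. A sketch (the TBW budget and daily write rates below are illustrative numbers chosen to match the quoted lifetimes - they are not from the white paper):

```python
# Illustrative endurance arithmetic (these figures are assumptions, not
# data from SMART's white paper): a fixed total-bytes-written budget
# yields very different lifetimes under different daily write volumes.
TBW_BUDGET_TB = 700        # hypothetical MLC endurance budget

workloads_tb_per_day = {
    "exchange server": 5.8,   # write heavy
    "web page serving": 1.2,  # read mostly
}

for name, tb_per_day in workloads_tb_per_day.items():
    years = TBW_BUDGET_TB / tb_per_day / 365
    print(f"{name}: {years:.1f} years")  # ~0.3 (4 months) and ~1.6
```

The same drive, the same endurance budget - but nearly a 5x difference in useful life depending on where it's deployed.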
Amazon offers explicit SSD performance in the cloud
July 19, 2012 - There are many ways SSDs can be used inside
classic cloud storage
services infrastructure:- to keep things running smoothly (even out
IOPS), reduce running costs etc. Amazon Web Services recently launched a new
high(er) IOPS instance type for developers who explicitly want to access
SSD-like performance.
In 3 to 5 years time all enterprise storage infrastructure will be solid state -
but due to economic necessities it will still be segmented into different types
by speed and function - as I described in my
SSD silos article -
even when it's all solid state.
I predict that when that happens -
AWS's marketers may choose to describe its lowest speed storage as "HDD
like" - even when it's SSD - in order to convey to customers what it's
about. It takes a long time for people to let go of old ideas. Remember
Virtual Tape Libraries?
$25 million C round keeps Nirvanix cloud flying
May 3, 2012 - Nirvanix
announced it has raised over $25 million in a Series C funding round -
bringing its total capital raised to $70 million.
One interesting thing is Nirvanix says it keeps self-healing replicas of
customer data in multiple geo-diverse locations - not just static DR copies -
which they say improves data availability.
Nimbus publishes tick test results
25, 2012 - Nimbus Data announced that several key performance and operational
characteristics of its S-Class systems have been validated by Demartek.
- Throughput:- a single Nimbus S-Class 2.5 TB system with a dual-port QDR
connection delivered (near line-rate) 7.6 GBps performance on reads and over
2GBps on parity-protected (RAID) writes.
Editor's comments:- it's a complicated
business doing meaningful SSD
tests which can be used as an input into performance modelling. And I've
seen many vendor funded SSD test reports which failed in that respect. But
recently - as the market has got more experienced - some SSD vendors are
changing the emphasis of their sponsored reports to show that their products
can walk and chew gum at the same time. That's the message I pick up from this
Nimbus press release. How much gum? And how brisk the walking pace? It will
suit some users more than others.
- Support of Automatic SSD Enablement, a
feature in vSphere 5 (pdf) that leverages the low latency of flash
technology to improve VMware operations with simplified out-of-the-box configuration.
OCZ ships 16TB CloudServ auto caching PCIe SSD
February 14, 2012 - OCZ announced imminent shipments of new high capacity
PCIe SSDs optimized
for cloud apps.
The R4 CloudServ (which uses 16x
SandForce 2581 SSD
processors) has up to 16TB of storage capacity on a single full height
card and is supported by auto-caching
functionality (based on the acquisition of VXL) and OCZ's software - which
together enable host migrations without loss of performance or
interruption of service.
with economic certainty lost in the mist - university data heads
for the clouds
Editor:- November 15, 2011 - the University of Southern California (USC) will
store its unstructured data on a private cloud. USC
spokesperson (CTO and Associate Dean of the USC Libraries) Sam Gustman said
"We shifted to the cloud
because it provides USC with a geographically diverse and cost-effective way of
storing, preserving and distributing our content on a truly global scale."
Hybrid Memory Cube will enable Petabyte SSDs
October 7, 2011 - Samsung
and Micron this
week launched a new industry
initiative - the Hybrid
Memory Cube Consortium - which will standardize a new module
architecture for memory chips - enabling greater density, faster bandwidth and reduced power consumption.
"HMC is unlike anything currently on the radar," said
Feurle, Micron's VP for DRAM Marketing. "HMC brings a new level of
capability to memory that provides exponential performance and efficiency gains
that will redefine the future of memory."
HMC may enable SSD designers to pack 10x more
RAM capacity into the same
space with up to 15x the bandwidth, while using 1/3 the power due
to its integrated power management plane.
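Taking those vendor multipliers at face value - the compound bandwidth-per-watt gain is easy to compute:

```python
# Editor's arithmetic on the claimed HMC gains (taking the vendor's
# multipliers at face value): 15x the bandwidth at 1/3 the power
# compounds to a 45x bandwidth-per-watt improvement.
bandwidth_gain = 15
power_fraction = 1 / 3   # "using 1/3 the power"

bandwidth_per_watt_gain = bandwidth_gain / power_fraction
print(f"{bandwidth_per_watt_gain:.0f}x bandwidth per watt")  # 45x
```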
The same technology will
enable denser flash SSDs too - if flash is still around in 3 years' time and
hasn't been sucked into the obsolete market slime pit by the
lurking nv demons
which have been shadowing flash for the past 10 years and been waiting for each
"next generation" to stumble and be the last.
The power management architecture integrated in HMC and the density scaling it allows
for packing memory chips (without heat build-up) are key technology enablers
which were listed as some of the problems the SSD industry needed to solve
in my 2010 article -
this way to the Petabyte SSD.
Pure Storage has amassed $55 million for bulk FC SAN SSD storage
Editor:- August 24, 2011 - Pure Storage
yesterday unveiled its first SSD product line and announced it had received
$30 million in series C funding bringing its total capital funding up to $55 million.
Pure Storage's system provides bulk / utility SSD storage for
FC SAN environments - which
by using inline dedupe and compression - can in some applications (25TB and 50K
IOPS per U) offer lower cost and yet still deliver higher performance than
classic hard disk drive arrays.
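The economics behind that claim work like this (a sketch - the data reduction ratio and raw flash cost below are my illustrative assumptions, chosen so the effective capacity matches the quoted 25TB per U):

```python
# A sketch of the spreadsheet value proposition (the data reduction
# ratio and raw flash $/GB are illustrative assumptions - not Pure
# Storage's numbers): inline dedupe + compression multiply raw flash
# into effective capacity, which divides down the effective $/GB.
raw_tb_per_u = 5.0
data_reduction = 5.0          # assumed combined dedupe + compression ratio
raw_flash_cost_per_gb = 10.0  # assumed raw flash $/GB

effective_tb_per_u = raw_tb_per_u * data_reduction              # 25TB per U
effective_cost_per_gb = raw_flash_cost_per_gb / data_reduction  # $2/GB

print(effective_tb_per_u, effective_cost_per_gb)  # 25.0 2.0
```

Which is why the proposition only works "in some applications" - a workload which doesn't dedupe or compress well gets none of that multiplier.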
Editor's comments:- This
looks like a spreadsheet based value proposition rather than a disruptive new
product - and follows a market groove already established by Nimbus Data Systems and others.
The market for this type of SSD will be huge - but along the way to
proving itself will have to fight off competition from
auto-tiering SSDs and
white box SSD RAID which
will nibble away at the same customer SSD budgets.
SolidFire launches SSD cloud appliances
June 21, 2011 - SolidFire
has announced details of its first product - an
iSCSI SSD appliance
designed for cloud storage
applications which the company says can scale to 1 petabyte capacity (which
takes 100 nodes with current models).
Performance within a SolidFire
system is virtualized separately from capacity, allowing cloud service providers
to prescribe and guarantee performance to every volume within the system.
Editor's comments:- the company's elements include features such as:- self
healing data protection, always on reservation-less thin provisioning, inline
real time compression,
cloning and snapshots,
and dedupe, as
well as adjustable managed
throughput performance windows.
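The per-volume performance guarantee described above can be sketched as a token bucket - each volume accrues IOPS credits at its provisioned rate, independent of how much capacity it owns. This is a generic QoS illustration, not SolidFire's actual implementation:

```python
# Generic per-volume IOPS throttle (an illustration of the QoS idea,
# not SolidFire's implementation): each volume earns tokens at its
# guaranteed rate and an I/O is admitted only if tokens are available.
import time

class VolumeQoS:
    def __init__(self, guaranteed_iops, burst=None):
        self.rate = guaranteed_iops               # tokens earned per second
        self.capacity = burst or guaranteed_iops  # max stored burst credit
        self.tokens = float(self.capacity)
        self.last = time.monotonic()

    def admit(self, n_ios=1):
        """Return True if n_ios may proceed now, else False (throttle)."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n_ios:
            self.tokens -= n_ios
            return True
        return False

vol = VolumeQoS(guaranteed_iops=500)  # volume provisioned for 500 IOPS
print(vol.admit(100))  # True - within the volume's credit
```

Because the bucket's refill rate is set per volume rather than per disk, the guarantee is decoupled from capacity - which is the point SolidFire is making.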
These are the essential
characteristics of what I called "bulk storage SSDs" in my article
roadmap to the
Petabyte SSD - although in that article what I had in mind is that by 2016
a PB archive SSD library should fit into a single 2U rackmount.
If that seems far-fetched - remember that a lot of things can change in the SSD
market in 5 years.
5 years ago -
in 2006 - the enterprise server flash SSD market didn't exist. 2006 was
the 1st year of the
market and there were only 36 makers of SSDs - compared to 300 today.
Compliance issues in Cloud Storage
10, 2011 - A recent article discusses the use of cloud storage. Among
other things - the author George Crump warns that - "The deletion of data from
the cloud may be the most overlooked consideration." ...read
Editor's comments:- judicious deletion is also a strategic issue for long
established web sites. Gerry McGovern discussed that in
his classic article -
Business case for deleting content.
the future of data storage in broadcast and IPtv
January 23, 2011 - The future of data storage is the lofty sounding but aptly chosen title of a
new article published online today in Broadcast Engineering -
written by Zsolt
Kerekes editor of StorageSearch.com
It's a completely new article which synthesizes and
integrates concepts from several futuristic articles which have already
appeared here on the mouse site and wraps them into a cohesive whole. Anyone
who reads it will get a clear idea of where the incremental changes they read
about in storage news
pages are likely to end up. ...read
All storage fails - design is choosing management preferences
January 4, 2011 - The Future of Storage in the Cloud is the title of a blog on
DataCenterPOST written by Patrick Baillie,
CEO of CloudSigma.
In it he discusses what he calls the "Myth
of the Failure Proof SAN" and his preferences for managing inevitable failures.
Patrick Baillie says "When building out our cloud we
made the decision early that we preferred more frequent low impact problems than
infrequent high impact problems. Essentially we'd rather solve a simple small
problem which occurs more frequently (but still rarely) than a complicated large
problem that occurs less frequently. For this reason we chose not to use
SANs for our storage but
local RAID6 arrays on each
computing node." ...read
Overland says cloud tech can scale NAS VTLs
October 14, 2010 - Overland Storage announced that it has acquired MaxiScale - a
cloud storage technology company.
Dr. Geoff Barrall, CTO and VP of engineering at Overland
Storage said "The logical next step for us is to create a clustered
scalable NAS forming a
local cloud of storage. When the opportunity arose to acquire MaxiScale's
well-regarded technology, we took notice. MaxiScale's architecture will provide
our customers with the ability to scale hundreds of (our)
into one unified pool of storage."
TwinStrata gets traction with CloudArray software
May 10, 2010 - TwinStrata
announced new customer deployments of its
CloudArray software - which
delivers cloud storage
functions (such as data replication,
archiving and DR) piped through an iSCSI interface.
TwinStrata says its software supports all market-leading
hypervisors: VMware ESX/ESXi, Citrix XenServer, and Microsoft Hyper-V.
StorSimple fills "missing link" in cloud storage DNA
Editor:- May 4, 2010 - StorSimple has
exited stealth mode - announcing a bunch of collaborative customer supply
agreements - and disclosing info about its Armada storage appliance - which is
designed to reduce the cost and simplify the integration of
cloud storage within
datacenter applications and infrastructure.
Just as application specific SSDs
are the future for the SSD
market - StorSimple's Armada system can be regarded as an application
specific SSD ASAP
which includes features such as real-time
dedupe and cloud
The simplest way to think about it is as "the
missing link" between the promise of cloud storage and its practicality.
The companies which have agreed to be named in StorSimple's company launch press
release (Amazon, AT&T, EMC, Iron Mountain, and Microsoft) seem to think it's
a noteworthy part of cloud storage DNA too.
Digitiliti Launches Virtual Corporate Library
March 22, 2010 - Digitiliti announced availability of its virtual corporate
library - a multi-functional continuous VTL,
compression, ediscovery appliance which automatically captures and archives
new data from the time it is created and
sanitizes it at
the end of its policy mandated life.
Pricing starts at about $20,000
for a 3TB information director and $3 per GB archived after dedupe and
compression, plus $100 per client.
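A worked example of that pricing model (my arithmetic - the deployment size is hypothetical):

```python
# Worked example of Digitiliti's published pricing (the archive size and
# client count are hypothetical - only the rates come from the release).
BASE_PRICE = 20_000    # 3TB information director
PER_GB_ARCHIVED = 3    # per GB archived, after dedupe and compression
PER_CLIENT = 100

archived_gb = 2_000    # hypothetical post-reduction archive size
clients = 50           # hypothetical number of clients

total = BASE_PRICE + archived_gb * PER_GB_ARCHIVED + clients * PER_CLIENT
print(f"${total:,}")  # $31,000
```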
New Image for Cloud Storage
Editor:- January 13, 2010
- a new article discusses low-cost and no-cost cloud storage offerings from Google.
The author David Coursey (and his commenting readers) make some interesting
comparisons with Microsoft's SkyDrive.
I loathe the term "Cloud Storage". But I have to admit we're stuck
with it. So today I changed the graphic on the
online backup and storage page.
The old one - with the tag about "Spellerbyte was
cooking up a new business plan which involved online web backup" - was
appropriate when it was first published 10 years ago - but no longer fits this
market's image today. I resisted the temptation to use an image compatible with
the business metaphor of "sad losers" or "big black hole for your data".
Systemic Risk with "Cloud Think"
June 4, 2009 - Burton
Group today published an article called -
and Systemic Risk.
The author Jack Santos
says he thinks "clouds" are at a peak hype stage and ready for a big fall.
|Spellerbyte was surfing the more
nebulous regions of the storage market.
Looking back at the online backup and storage market|
by Zsolt Kerekes,
|This market has seen
many ups and downs in the past decade. The online backup market flared most
brightly at the height of the dotcom boom crazy days in the late 1990s. That
convinced me to create a dedicated page for this subject. You can see an
archived copy of the online backup page circa 2000 -
Back then - I called it "Edrives & web based storage" - because "online
backup" hadn't yet become a standard term.|
At the time I was unconvinced about the business models for many of these companies - which mostly
relied on unsustainable web advertising. I'd been making my living from the
sustainable kind (of advertising) - and knew the difference.
Sure enough - this segment of the storage market got itself a bad reputation for
vendor churn and undependability in the long term.
You can get a
flavor of how the online backup industry changed (and our web site too) in the
years which followed, by clicking these archived links:-
Now we've recently experienced another recession (caused by the credit crunch of
2008) and you've got to ask yourself this question...
If banks can
fail - then why should you trust ANY online backup provider with your data?
The answer is - you shouldn't. Because history has
shown these services can disappear overnight.
But on the other hand -
there are many examples of where online backup has helped customers
survive in the event of floods, fire etc.
A pragmatic approach - would
be to use 2 different types of offsite backup - which do not have common modes
of failure due to sharing software or geography. That's the way ahead for this market.
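The arithmetic behind that advice: if the 2 backup routes really do fail independently, the chance of losing both in the same period is the product of the individual probabilities (illustrative figures, not measured vendor data):

```python
# Why independent failure modes matter (illustrative probabilities, not
# measured vendor data): with no shared software or geography, the
# probability of both backups failing together is the product of the two.
p_provider_a = 0.05   # assumed annual failure probability, provider A
p_provider_b = 0.05   # assumed annual failure probability, provider B

p_both_fail = p_provider_a * p_provider_b
print(f"{p_both_fail:.4f}")  # 0.0025 - i.e. a 1-in-400 year event vs 1-in-20
```

Shared software or shared geography breaks the independence assumption - which is why the product rule only applies when the 2 backup types genuinely have no common failure modes.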