VMware enters the SSD software market
March 6, 2014 - With the launch of its
Virtual SAN - VMware has at last
joined the crowding SSD
software ecosystem as a lead SSD player rather than in a
subordinate role (as the
dancing partner - a bit like dancing with your uncle or aunt at the wedding
disco) - which was the case before in
the acceleration compatibility stories narrated by other SSD companies.
Version 1.0 is an SSD
ASAP (hybrid virtualizing appliance) - which supports 3-8 server nodes. The
company says that "support for more than 8 will come later." ...read the article
Editor's comments:- first impressions? It's
late and doesn't look great (in features). But it will probably be deemed
adequate for many users starting down this road.
Before dismissing it
entirely (as some commentators and competitors have already done) let's
remember that when LSI
entered the SSD market in
January 2010 -
it was the "163rd company to enter the SSD market". And look
where they are now.
Late to market doesn't count as a mortal sin in the SSD marketing lexicon
right now because first
mover advantage (pdf) assumptions aren't valid in this phase of the market.
more comments re VSAN
"Customers who had the opportunity to participate in the VSAN beta told us that
in most cases, (our) Maxta MxSP performs better" - said competitor Yoram Novick,
founder of Maxta, in his blog -
Storage the Devil is in the Details
"...especially proud of how the team has outperformed expectations. Today we're
announcing GA support for 32 nodes. That means that Virtual SAN can now
scale from a modest 3 node remote office, to a multi-petabyte, mega-IOPS monster
just by adding more server resources... and ...VSAN isn't bolted on,
it's built in." - says Ben Fathi, CTO of
VMware - in his blog -
Virtual SAN: Powerfully Simple and Simply Powerful
another $13 million for Primary Data
February 17, 2014 - A report
in SiliconAngle.com - shared from the linkedin page of Primary Data's CMO
- Rick White -
says that Primary Data (which is still in stealth mode) has secured another
$13 million funding - bringing its total funding up to $63 million.
Editor's comments:- there's a lot of speculation about what this new company is planning.
Primary Data's founders changed the enterprise server market
with their previous startup
Fusion-io.
Before Fusion-io - server makers didn't want to talk about SSDs.
After Fusion-io - no server maker could launch a new
enterprise server product line without including SSD acceleration as a standard
option. (Because Fusion-io signed up most of the key server oems as "PCIe SSD inside"
which made all server makers
hostage to the
fortunes of SSD).
When Primary Data launches its products later
this year - what will its products look like?
From the hints dropped
so far - it seems that
Primary Data will
aim to shake up the enterprise architecture world with a new platform which
leverages SSD enhanced servers as the worker ants in a new software scheme
which spans everything from the local cluster to the cloud.
See also:- VCs and SSDs,
enterprise SSD silos
Atlantis provides more evidence of the trend towards massively
improved enterprise utilization enabled by SSD-aware software
February 11, 2014 - Atlantis announced
that the new "In-Memory Storage Technology" release of its storage
virtualization software - called
Atlantis ILIO USX -
can significantly increase enterprise utilization by enabling users to deploy
up to 5x more VMs on their existing storage.
See also:- USX faqs (pdf),
utilization and the SSD event horizon
old software will slow new silicon in memory done by SSDs
February 5, 2014 - In a new blog -
Vistas For Persistent Memory - Tom Coughlin of Coughlin
Associates reminds us that in extremely fast SSDs - lowering the
hardware latency is just one part of the design solution.
Tom says -
"An important element in using persistent memory in the PCIe and memory
bus of computers is the creation of software programs that take advantage of the
speed and low latency of nonvolatile memory. With the increase in performance
that new interfaces allow, software built around slower storage technologies
becomes a significant issue preventing getting the full performance from a
persistent memory system."
Tom's article includes a graph which
shows the increasing proportion of the read access time taken up by system
software in successively faster hardware interface generations. ...read the article
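The arithmetic behind that kind of graph is easy to sketch for yourself. In the toy calculation below the latency figures are purely illustrative assumptions (not Tom's data) - but they show why a fixed software overhead which was invisible behind a hard drive comes to dominate a persistent memory read:

```python
# Illustrative: as device latency falls, the fixed software-stack
# latency consumes a growing share of each read. All numbers are
# hypothetical assumptions, not measured data.

SOFTWARE_LATENCY_US = 20  # OS + driver + file system path (assumed constant)

devices = {
    "15K HDD": 5000,          # device read latency in microseconds
    "SATA SSD": 100,
    "PCIe SSD": 50,
    "persistent memory": 5,
}

for name, hw_us in devices.items():
    total = hw_us + SOFTWARE_LATENCY_US
    share = 100 * SOFTWARE_LATENCY_US / total
    print(f"{name}: software is {share:.0f}% of a {total}us read")
```

As the device gets 1,000x faster the unchanged software path grows from a rounding error to most of the total - which is exactly Tom's point.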
Editor's comments:- living with the old
while planning for a new type of SSD-aware computer architecture is a
complicated business.
Just how complicated that picture can be... you may
glimpse in a classic far reaching paper (about abstracting application
transactional semantics in usefully different ways when viewed from their
interactions with the flash translation layer) - called
Operations via the Flash Translation Layer (pdf) by Gary Orenstein.
SNIA proposes new standard for virtualizing SSD implemented memory
Editor:- January 27, 2014 - It's years since the first
SSD software horses
were seen to be leaving the stables - but last week - a
standards ORG - SNIA - announced
an effort to bolt these doors with the release of version 1 of what it hopes
will be a new standard called the NVM
Programming Model (pdf)
Editor's comments:- Currently if
you use SSDs as memory using
PCIe SSDs from
Virident, or if you
plan to use memory channel SSDs from
SanDisk - then you're
potentially looking at working in 3 different software environments.
The viable permutations of hardware and software compatibility levels shrink for
users when they converge at a popular market application level such as
virtual desktops - but explode into crazy unsupportability for 3rd party
software developers as they try to step back from proprietary APIs and hang
onto more general hooks in operating systems which were never designed around
the core class of capabilities offered by low latency SSDs.
Whether the long term solution to the current
state of ad hoc SSD software lies in adapting current OSs - or maybe in
bypassing old OSs entirely and starting again with cloud level service-like
abstractions in virtualized servers - is interesting to speculate about.
In the meantime software developers have to work with existing de-facto software
environments (to generate revenue) and also keep an eye on future standards in
the hope that standardization will reduce their costs (one day in the remote
future).
The SSD software platform and the optimum level of
engagement for vendors is a lottery which will suck billions more dollars from
VCs before it is resolved. And I think that market dominance will be a bigger
part of the solution than a set of committee based standards.
get ready for a world in which all enterprise data touches SSDs
January 8, 2014 - StorageSearch.com
today published a new article -
get ready for a
world in which all enterprise data touches SSDs.
"...in SSD software could be as important for infrastructure as Microsoft was for
PCs, or Oracle was for databases, or Google was for online search." ...read the article
10,000 sites use DataCore
Editor:- December 16, 2013 - DataCore announced
that over 10,000 customer sites have used its software.
The company's stance re enterprise
SSD architecture is that most users can (for the time being) resist the
siren calls of SSD makers towards
all flash enterprise
storage - because "only 5% of workloads require top tier performance.
And businesses have turned to
software to make sure applications are sharing flash and spinning disk,
based on the need to optimize performance and investment."
the big SSD idea changes in 2013?
December 10, 2013 - I've tried to leave my year end review article of the SSD market as
late as possible - because a lot of things happen in December too. But now on
the home page of StorageSearch.com
- you can see my new blog - the
big SSD idea changes in 2013? As we approach another year of SSD disruption
in 2014 - what were the big SSD idea changes in 2013? And where does software
fit into the picture? ...click here
to read the article
Maxta joins the elite set of enterprise contenders who are
vying to own the next generation SSD-centric platform
November 13, 2013 - This week Maxta completed its
staged emergence from stealth mode and announced
its first product - the Maxta Storage
Platform - a hypervisor-agnostic software platform for repurposing arrays
of standard servers (populated with cheap standard
SATA SSDs and
hard drives) into
scalable enterprise class app servers in which the global CPU and storage
assets become available as an easily managed meta resource with optimized
performance, cost and resilience.
Editor's comments:- I spoke
last week to Yoram
Novick about this new product, his company and what customers have been
doing with it.
Before you dip into my bullet points below - here's a
header note of orientation.
We've all seen new companies launching
SSD software and pitching for the enterprise with products which are little
more than spruced up versions of "hello SSD world!" A
year later - some essential compatibility features get added, and later still
some degree of better or worse enterprise readiness.
It didn't used to matter much if everything wasn't in place at the start - or
if these new companies didn't have sustainable business plans - because there
was an appetite for acquiring such companies.
From my perspective I'd say that many companies have
regarded the launch of their SSD software as simply an invitation to attract
users who could provide the market knowledge they needed to flesh out the product.
In these important respects Maxta is different because:-
prior to this week's product launch they've already had a group of 10 or so
advanced customers in different industries who have been using the product and
also the enterprise features - like manage-ability, scalability, resilience and
data integrity are already in the product today.
Maxta's technology and
business architects have done enterprise storage software before - as you can
see from their linkedin
bios. Yoram told me that he and
Amar Rao (Maxta's VP of
Business Development ) used to compete with each other in earlier storage
startups and the companies which had acquired them.
So it soon became
clear to me in the details I saw and asked about (not all of which are listed
here) that a lot of careful planning and up front thinking and problem solving
has already guided the "launch".
Here's some of what I learned.
- market scope
MxSP is the software glue for enabling easily managed
SSD enhanced storage
pools in VM environments which scale from the ROBO up to the datacenter. The
base level configuration which provides HA features starts as low as 3 nodes.
This is attractive for enterprises with remote offices because it's a small
footprint. But it's also attractive from a running cost point of view too -
Yoram said because of the special low price point for associated software. Maxta
has a customer who started with these 3 node configurations for remote offices
but liked them so much that their bigger arrays are now built mostly from
arrays of 3 too.
- the problem it solves
The evolution of enterprise CPU and storage
resources has followed different tracks in the past decade - leaving users in
the position today where it's easy and economic to deploy more CPUs but
relatively awkward, expensive or error prone to map these CPU resources into
virtual storage which scales with the same ease and which takes advantage of
the low cost and high performance of commodity enterprise SSDs.
- the storage pool
Maxta's architecture aggregates the SSDs and HDDs
in the server pool into a single globally accessible, fault tolerant SSD
accelerated virtual storage pool.
Within Maxta's software - all the
SSDs are collected together as 1 super SSD resource and another big resource
is created from the HDDs.
Internally Maxta's software knows that SATA
SSDs and SATA HDDs have different personalities for example:-
- HDDs have low cost per unit of capacity but slow random read latency
- SSDs have fast random read, and fast sequential write
Not every node in the array has to have an SSD or HDD inside - but it's not sensible
to have a system which doesn't have any SSDs at all.
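Maxta hasn't published its placement algorithm, but the general idea of exploiting those different media personalities can be sketched in a few lines. This toy two-tier pool is entirely my own illustration (the class and method names are invented, not Maxta's): it absorbs writes on flash and demotes the least-read blocks to disk when flash fills up.

```python
# Toy two-tier placement sketch - NOT Maxta's algorithm - showing how
# software can exploit the different personalities: SSDs for random
# reads and fast writes, HDDs for cheap cold capacity.

class TieredPool:
    def __init__(self, ssd_capacity):
        self.ssd = {}              # hot tier: block id -> data
        self.hdd = {}              # cheap capacity tier
        self.ssd_capacity = ssd_capacity
        self.reads = {}            # per-block read counts

    def write(self, block, data):
        self.ssd[block] = data     # absorb writes on flash first
        self.hdd.pop(block, None)
        self._demote_if_full()

    def read(self, block):
        self.reads[block] = self.reads.get(block, 0) + 1
        if block in self.ssd:
            return self.ssd[block]
        return self.hdd.get(block)

    def _demote_if_full(self):
        while len(self.ssd) > self.ssd_capacity:
            # demote the least-read block to the HDD tier
            coldest = min(self.ssd, key=lambda b: self.reads.get(b, 0))
            self.hdd[coldest] = self.ssd.pop(coldest)
```

A real product layers replication, checksums and metadata on top of this - the sketch only shows the hot/cold split.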
- fault tolerance, data integrity, VM snapshots, cloning etc
they're all in the product now.
- software? - it's a virtual world view
Everything about MxSP is
virtual. And it doesn't require new management tools. The operational aspects
will clarify in customer case studies and white papers.
- Maxta's business plan
I told Yoram how disillusioned I had become
about the sustainability and viability of new storage software companies -
given my experience of having tracked over 1,000 storage companies and
terminating the list of
gone-away and acquired companies in a single decade at the 500 company
level. (That's before I started the gone-away SSD companies list BTW - which is
well on its way to 100.)
Jaundiced by that experience it seems to me
that over 95% of storage software startups don't have much of a clue about how
to translate their IP assets into any sustainable business value and are mostly
founded at the outset with the fervent desire that before the VC and IPO
money run out - they will get acquired. So I asked him if Maxta would be any
different to that?
Yoram told me some of what Maxta has been doing in
laying the foundations for growing the business to become a significant storage
platform - (in his words) a significant software company like Microsoft or Oracle.
I won't say more here because this is too long already -
despite having not even mentioned most of the notes I made during our
conversation.
Looking back on this nearly a week later (and having seen
some of their documents before) I'm left with the impression that maybe indeed
Yoram is right and his company could become not only one of the rare storage
software companies which are sustainable as a business. But going further than
that - maybe too it has the makings of a company which could be one of the
five to ten companies which will dominate the SSD software platform market of
the future.
Who are the other contenders?
I've given you
lists before - but this list is evolving because 4 of the 10 companies were
still in stealth mode last time I did that.
If you're interested in
the SSD enhanced storage platform idea (and who wouldn't be) then another good
place to look is the list of competitors which I've compiled in
Maxta's profile page.
new blog by PernixData describes the intermediate states of play
for its HA clustered write acceleration SSD cache
November 5, 2013 - In a clustered,
SSD ASAP VM
environment which supports both read and write acceleration it's essential to
know the detailed policies of any products you're considering - to see if the
consequences - on data vulnerability and performance comply with strategies
which are acceptable for your own intended uses.
In a new blog -
Fault Tolerant Write Acceleration - Frank Denneman,
Technology Evangelist at PernixData,
describes in a rarely seen level of detail the various states which his
company's FVP goes through when it recognizes that a fault has occurred in
either server or flash. And the blog describes the temporary consequences - such
as loss of acceleration - which occur until replacement hardware is pulled in
and configured automatically by the system software.
Stating the design
principles of this product - Frank Denneman says - "Data loss needs to be
avoided at all times, therefore the FVP platform is designed from the ground up
to provide data consistency and availability. By replicating write data to
neighboring flash devices data loss caused by host or component failure is
prevented. Due to the clustered nature of the platform FVP is capable to keep
the state between the write data on the source and replica hosts consistent and
reduce the required space to a minimum without taxing the network connection too
much."
See also:- SSD ASAPs - auto tiering / caching appliances
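The write-back-with-replication principle Frank describes can be illustrated with a toy sketch. This is not PernixData's code - the names and the one-replica policy are my own assumptions - but it shows why an acknowledged write survives a single flash or host failure:

```python
# Sketch of the fault tolerance idea described above - NOT PernixData's
# implementation. A write is acknowledged only after it reaches the
# local flash cache AND a replica on a healthy peer host, so a single
# device failure cannot lose acknowledged data.

class Host:
    def __init__(self, name):
        self.name = name
        self.flash = {}        # local flash cache contents
        self.failed = False

    def store(self, key, value):
        if self.failed:
            raise IOError(f"{self.name}: flash device failed")
        self.flash[key] = value

def accelerated_write(local, peers, key, value, replicas=1):
    """Ack the write only once it is on local flash plus `replicas` peers."""
    local.store(key, value)
    placed = 0
    for peer in peers:
        if placed == replicas:
            break
        try:
            peer.store(key, value)
            placed += 1
        except IOError:
            continue  # skip a failed peer, try the next neighbor
    if placed < replicas:
        raise IOError("not enough healthy peers - cannot ack write safely")
    return "ack"
```

When a peer's flash fails, the write transparently lands on the next neighbor instead - the temporary loss is acceleration headroom, not data.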
OCZ has a web browser based tool for managing the health of SSDs
installed in your networks
Editor:- October 29, 2013 - OCZ today announced
the availability for immediate download of a new enterprise
management tool which leverages the internal SMART log files and controllers in
OCZ's enterprise SSDs installed on various hosts and operating systems in
the customer's connected networks.
"SSDs have become a critical
component of the modern data center and IT managers expect enterprise-tools that
optimally manage and maintain them. Our StoragePro XL management system is
designed to centrally manage our complete portfolio of enterprise drives
covering SATA, SAS and PCIe and does so in a very easy and non-obtrusive manner"
said Dr. Allon Cohen
VP of Software and Solutions for OCZ. "This level of remote host and SSD
management provides the system information and SSD health that IT professionals
need."
- provides a structured group-based view of host and SSD activity throughout
the data center
- enables customisable alerts triggered by parameters in SMART log data
- simplifies SSD installation - such as firmware updates
- can generate SSD maintenance reports - such as raw read error rate,
wear-out stats, and other usage data
Editor's comments:- the
simplest way to see what all this is about is to click on the StoragePro
XL product page which shows various screenshots.
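To see the flavor of what such a tool does with SMART data, here's a generic toy sketch - my own illustration, not OCZ's code. The attribute names and thresholds below are assumptions, and a real tool reads vendor log pages from the drives rather than a Python dict:

```python
# Generic sketch of SMART-based fleet alerting - attribute names and
# thresholds are illustrative assumptions, not OCZ's.

WEAR_ALERT_PCT = 90        # alert when 90% of rated endurance is used
RAW_READ_ERR_ALERT = 100   # alert above this raw read error count

def check_drive(smart):
    """Return a list of alert strings for one drive's SMART snapshot."""
    alerts = []
    if smart.get("wear_used_pct", 0) >= WEAR_ALERT_PCT:
        alerts.append("endurance nearly exhausted - plan replacement")
    if smart.get("raw_read_error_count", 0) > RAW_READ_ERR_ALERT:
        alerts.append("raw read error rate elevated")
    return alerts

fleet = {
    "host1/ssd0": {"wear_used_pct": 35, "raw_read_error_count": 2},
    "host2/ssd1": {"wear_used_pct": 93, "raw_read_error_count": 310},
}
for drive, smart in fleet.items():
    for alert in check_drive(smart):
        print(f"ALERT {drive}: {alert}")
```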
See also:- SSD reliability, SSD testing, seniors live longer in my SSD care home
McObject shows in-memory database resilience in NVDIMM
October 9, 2013 - what happens if you pull out the power plug during
intensive in-memory database transactions? For those who don't want to rely on
batteries - but who also need ultimate speed - this is more than just an
academic question.
Recently on these pages I've been talking a lot
about a new type of memory channel
SSDs which are hoping to break into the application space owned by
PCIe SSDs. But another
solution in this area has always been DRAM with power fail features which save
data to flash in the event of power
loss. (The only disadvantages being that the memory density and cost are
constrained by the nature of DRAM.)
McObject (whose products include in-memory database software) yesterday
published the results of
benchmarks using AGIGA
Tech's NVDIMM in which
they did some unthinkable things which you would never wish to try out for
yourself - like rebooting the server while it was running... The result?
Everything was OK.
"The idea that there must be a tradeoff
between performance and persistence/durability has become so ingrained in the
database field that it is rarely questioned. This test shows that mission
critical applications needn't accept latency as the price for recoverability.
Developers working in a variety of application categories will view this as a
breakthrough" said Steve Graves, CEO of McObject.
Here's a quote from the whitepaper -
Persistence, Without The Performance Penalty (pdf) - "In these tests
eXtremeDB's inserts and updates with AGIGA's NVDIMM for main memory storage
were 2x as fast as using the same IMDS with transaction logging, and
approximately 5x faster for database updates (and this with the
transaction log stored on RAM-disk, a solution that is (even) faster than
storing the log on an SSD). The possibility of gaining so much speed while
giving up nothing in terms of data durability or recoverability makes the IMDS
with NVDIMM combination impossible to ignore in many application categories,
including capital markets, telecom/networking, aerospace and industrial
control."
Editor's comments:- last year McObject
published a paper showing the benefits of using PCIe SSDs for the transaction
log too. They seem to have all angles covered for mission critical ultrafast
databases that can be squeezed into memory.
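To see why the benchmark compares against transaction logging - here's a toy sketch (not McObject's code, all names invented) of the conventional durability approach: every update pays for an append to a log before it is acknowledged, and recovery after a crash means replaying that log. An NVDIMM removes the log write from the hot path because main memory itself survives power loss.

```python
# Toy write-ahead-log illustration of conventional IMDS durability -
# invented names, not McObject's implementation. The durability cost
# is the log append on every update; recovery is log replay.

import json, os, tempfile

class LoggedStore:
    def __init__(self, log_path):
        self.log_path = log_path
        self.data = {}
        self._replay()

    def _replay(self):
        # rebuild in-memory state from the persistent log, if any
        if os.path.exists(self.log_path):
            with open(self.log_path) as f:
                for line in f:
                    key, value = json.loads(line)
                    self.data[key] = value

    def put(self, key, value):
        # durability cost: append (and normally fsync) before ack
        with open(self.log_path, "a") as f:
            f.write(json.dumps([key, value]) + "\n")
        self.data[key] = value

log = os.path.join(tempfile.mkdtemp(), "tx.log")
db = LoggedStore(log)
db.put("balance", 100)
db.put("balance", 250)
del db                         # simulate losing the in-memory copy
recovered = LoggedStore(log)   # replay the log on restart
print(recovered.data["balance"])   # -> 250
```

The 2x-5x speedups quoted above come from skipping that per-update append entirely.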
the SSD software event horizon
Editor:- October 8,
2013 - Ever wondered about the awesome market power of software? It's not just
servers and hard drive arrays which have utilization rates.
Meet Ken and the
enterprise SSD software event horizon - the (long anticipated) new home
page blog. ...read the article
Permabit has shrunk data storage market by $300 million
Editor:- September 30, 2013 - Permabit today announced
that its flash and hard disk customers have shipped more than 1,000 arrays running its
data efficiency (dedupe and new RAID) software in the past 6 months.
"We estimate that our
partners have delivered an astonishing $300 million in data efficiency savings
to their customers" said Tom Cook, CEO of Permabit
who anticipates license shipments to double in the next 6 months.
See also:- data efficiency, new RAID in SSDs
Diablo readies new SSD interface for VMware ecosystem
September 17, 2013 - Diablo Technologies
announced it has joined VMware's
technology alliance program.
Proximal Data announces AutoCache version 2
August 26, 2013 - Proximal Data announced
the release of version 2.0 of its
AutoCache (SSD ASAP software).
Pricing starts at $999 per host for flash caches less than 500GB. The company
has been demonstrating the new version working with
PCIe SSDs from
Micron at VMworld.
Enmotus demos FuzeDrive hybrid array software
August 13, 2013 - Enmotus
announced that it
is demonstrating its FuzeDrive
(hybrid SSD ASAP)
solutions (with Toshiba
SSDs inside) at the Flash Memory Summit.
"While helping accelerate early adoption
of SSDs, today's caching solutions don't always provide the results users
expect. FuzeDrive avoids using traditional caching techniques, and instead
borrows its concepts from intelligent real time virtualization, data movement
and storage pooling techniques typically found in larger 'big iron' enterprise
systems," said Andy
Mills, CEO and Co-founder of Enmotus.
how new SSD software gets things done faster
July 29, 2013 - "One of the ironies of legacy systems software running in
flash systems is the way that the data weaves through layers of fossilized
unreality where emulation is stacked on emulation." - from the news page story -
Atomic Writes, and a faster way
for the Princess to get her shoes
EMC's acquisition of ScaleIO hints at an SSD server
afterlife for legacy SANs
Editor:- July 16, 2013 - EMC recently announced
it has agreed to acquire another storage software company - called ScaleIO.
EMC indicated that
ScaleIO's software - which emulates the capabilities of virtual SAN style
storage within the physical implementation of pools of server attached DAS
- makes it easier for users to manage expanding data volumes and reduces the
need for performance planning. The new software will be applied to extend the
application functionality of EMC's
PCIe SSD product lines
and XtremIO rack based flash systems.
Editor's comments:- One
way to view this is it will give EMC similar capabilities to
Nutanix. Or another is
that the EMC/ScaleIO solution (if and when it's done) can be seen as a shot
back across the bows aimed at that kind of
software. (You came into our market space - so we're coming into yours.)
Take a step back however, and it doesn't have to be so personal.
Legacy systems have shapes and architectures which derive from a command and control
SAN style architecture
dating back to the 1990s.
If you were trying to solve the same data
processing and content management functions from a clean sheet start today -
you'd probably go for a more "democratic" Google style architecture -
in which most racks in the datacenter are similar - and their function is
defined and can be changed by software - rather than being hardwired by the
description of the box at the time it was invoiced.
It's long been
known that SSD acceleration lets you speed up legacy architectures - but SSD
performance also gives you the freedom to emulate entire application
environments on cheaper, and more flexible hardware.
HGST catches VeloBit
Editor:- July 10, 2013 - For the
past 15 years from what I've seen - the ultimate business aim of most storage
software companies has been - to get acquired. That's
been even more true in the SSD
software market - wherein frankly - most companies don't even pretend to
invest in sustainable business models.
In the past 2 years - an SSD
software company has been
acquired every 2
months (on average) and the latest company sustaining that trend is VeloBit which has been
acquired by WD
for deployment by its subsidiary HGST - it was announced today.
In case you've forgotten why this trend started - software
makes it easier to sell more SSDs and the ROI from a vendor's point of view is
better than doubling the sales force. That's why valuations (not disclosed in
this case yet) have been so disconnected from the financial outlook of the ISVs
themselves.
Software is the reason enterprise SSD users are talking to SanDisk
Editor:- June 19, 2013 - SanDisk recently announced
a new version - 3.2 - of its
FlashSoft (SSD caching software)
for Windows Server ($3,000), and Linux ($3,500). New in this release is
support with low latency SSD mirroring for "safe write-back"
caching. Improvements include:- larger cache sizes up to 2TB per cache and up to
8TB cache per server. Also the number of volumes supported by a single cache has
increased from 255 to 2048.
Editor's comments:- Many
enterprise SSD users - who wouldn't dream of approaching SanDisk to use its raw
SSDs in their enterprise projects - are more than willing to use their
enterprise SSD software and share their ideas about their enterprise SSD
problems and their experiences.
Can SanDisk really transform itself
into an enterprise SSD heavyweight? - See the new article and analysis.
FIO's ION software in HP boxes enables Breakthrough Shared Storage
Editor:- June 13, 2013 - The performance of Fusion-io's ION
Accelerator software - which you can add to its PCIe SSD cards, any
standard server and some FC adapters to roll your own SAN
rackmount SSD -
is the point of a new
blog by the company
today which celebrates recent benchmarks for
2, 4 and 8
processor HP server configurations (pdf).
exciting new directions in rackmount SSDs
May 29, 2013 - A new generation of enterprise SSD rackmounts is breaking all
the rules which previously constrained price, performance and reliability.
The new maths of this SSD box trend - with software in the soul of the SSD
- are explored in my recent home page blog on StorageSearch.com - exciting new
directions in rackmount SSDs. ...read the article
Stec's profiler removes guesswork in sizing SSD caches
May 21, 2013 - Stec announced
that it's offering a free profiling tool -
Profiler - which can enable users to determine how much benefit they would
get from using its
SSD caching software - before they even install any SSDs.
The company says that the "non-disruptive installation" can save hours of
administrative trial and error by recommending the optimal block size, and the
capacity and type of SSDs to be used for maximum performance gain.
OCZ gets award for Windows compatible SQL flash cache
May 8, 2013 - OCZ announced
that its ZD-XL SQL Accelerator earned the
Best of Interop
award in the data center and storage category.
The product (announced at CeBIT last February) is a bundled package for Windows servers which
includes an SQL optimized flash caching software appliance which leverages
the low latency of an associated
OCZ PCIe SSD card.
The judging committee comprised 16 IT editors and analysts who reviewed
nearly 150 entries.
Do you have impure thoughts about deduping SSDs?
March 28, 2013 - What comes to your mind when you think about
deduping SSDs? The theoretical ratio? - x2, x5, x10...
Or maybe you groan? - It's too
messy to manage and even if capacity gets better, something else gets worse
- so let's just forget the idea...
A new blog -
the SSD Dedupe Ticker - by Pure Storage
looks at the state of customer reality in this aspect of SSD array
technology and comments on the variations you can get according to the type of
app and the way of doing the dedupe.
Among other things the article
also looks at the biggie question - of performance impact - answering the
author's rhetorical question - "why hasn't deduplication taken the primary
storage world by storm like it has the
backup world?" ...read the article
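The workload dependence is easy to demonstrate with a toy block-level dedupe calculation - my own sketch, nothing to do with Pure Storage's implementation:

```python
# Toy content-hash dedupe sketch showing why the achieved ratio is
# workload dependent. Illustration only - not Pure Storage's code.

import hashlib, os

def dedupe_ratio(blocks):
    """logical blocks written / unique blocks actually stored"""
    unique = {hashlib.sha256(b).hexdigest() for b in blocks}
    return len(blocks) / len(unique)

# A VDI-like workload (many identical OS image blocks) dedupes well...
vdi = [b"os-image-block"] * 90 + [b"user-%d" % i for i in range(10)]
print(dedupe_ratio(vdi))             # ~9.1x

# ...while already-compressed or encrypted data barely dedupes at all.
random_blocks = [os.urandom(4096) for _ in range(100)]
print(dedupe_ratio(random_blocks))   # ~1.0x
```

Which is one concrete reason a quoted dedupe ratio means little without knowing the app behind it.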
Nimbus brings flash SMART plus stats to SSD rackmounts
March 25, 2013 - Nimbus
Data Systems today announced
new software APIs which support its proprietary
HALO OS based family
of rackmount SSDs
- and report on hundreds of real-time and historical metrics such as:-
flash endurance, capacity utilization, latency, power consumption, deduplication
rates, and overall system health. Another new feature is that sys admins can
monitor their Nimbus
SSD arrays via new apps on Android / Apple phones and tablets.
The CEO and founder of Nimbus Data said the new software framework would enable
cloud architects and enterprise customers to gain greater insight into their
flash storage by viewing internal aspects of their flash storage which
mattered to them - rather than simply relying on benchmark indicators which
have been cherry picked by vendors or reviewers.
another $24 million funding for ZFS SSD ASAP ISV Nexenta
February 27, 2013 - Nexenta announced
it has secured $24 million in Series D financing.
Its SSD ASAP software
currently supports SSDs from companies including
STEC - according to its
hardware support list (pdf).
Virident betas remote PCIe SSD sharing
February 21, 2013 - Virident announced
beta availability of a new software suite - called FlashMAX Connect - which
enables low latency shared server-side storage and HA features
when used with the
company's range of PCIe SSDs.
New functionality includes:-
- fast / low-latency synchronous mirroring that replicates writes from one
server to another, providing storage node or server failover without affecting
application and data availability.
- shared storage management in remote PCIe SSDs. This allows customers to
share the storage residing on remote servers and thereby scale PCIe flash
capacity independent of compute. For example - a single PCIe flash card can
service multiple servers.
- easily managed controllability of cache policies within installed PCIe
SSDs:- write-back, write-through and write-around caching - so that users can
choose cache modes which provide a better fit to their performance and
availability needs.
Editor's comments:- it's long been known
within the SSD industry that these features have been in the pipeline - because
they're based on support at the PCIe switch chip level. For an
overview of this architecture enabling chip level support - and how it offers
flexibility in servers and SSDs - take a look at this video about
enterprise SSD designs.
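For readers new to the jargon - the three cache modes differ in when the backing store sees a write. Here's a toy sketch of the textbook behavior (generic illustration, not Virident's implementation):

```python
# Toy sketch of the three cache write policies mentioned above
# (textbook behavior, not Virident's code):
#   write-through : write cache AND backing store before ack
#   write-back    : write cache only, flush to backing store later
#   write-around  : bypass the cache, write backing store only

class Cache:
    def __init__(self, policy):
        self.policy = policy
        self.cache = {}       # fast flash tier
        self.backing = {}     # slow persistent tier
        self.dirty = set()

    def write(self, key, value):
        if self.policy == "write-through":
            self.cache[key] = value
            self.backing[key] = value    # slower ack, always consistent
        elif self.policy == "write-back":
            self.cache[key] = value      # fast ack, at risk until flushed
            self.dirty.add(key)
        elif self.policy == "write-around":
            self.backing[key] = value    # keeps write-once data off flash

    def flush(self):
        for key in self.dirty:
            self.backing[key] = self.cache[key]
        self.dirty.clear()
```

Write-back gives the best write latency but needs exactly the kind of replication safety net discussed elsewhere on this page - which is why user-selectable modes matter.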
Software - a new reason to reconsider Intel's server SSDs
February 13, 2013 - Intel announced
that in the next 30 days it will ship a Linux version of the SSD caching
software - based on IP from its acquisition of
NEVEX last August. The
products have been rebranded as
CAS (Cache Acceleration Software).
Editor's comments:- I
would categorize Intel's current generation of enterprise SSD solutions
(which includes the same old indifferent SSDs working with the new CAS software)
as being in the medium to fast-enough performance range.
Typical customers might be end users who have never used SSD acceleration before - or
users with apps which don't need the higher speeds offered by competing SSD
bundled drive / module packages from
OCZ - and customers who
don't want to do their caching via dedicated rackmount based products from
the dozens of other vendors listed in the
SSD ASAPs directory.
The market segment addressed by these new Intel products is the early
majority of enterprise SSD adopters - who will be reassured by the
perceived safety of buying into the dangerous world of solid state storage
acceleration from a value based brand.
I spoke about the new CAS
software to Intel product manager Andrew Flint
who cofounded NEVEX and I
learned some useful things about the product.
The first question I
asked was - how many PCIe SSDs can the CAS product support in a single server?
And were there any graphs showing how performance drops off or is maintained
when you do that.
The answer was - this info isn't publicly available
right now. Although it may be in the future.
That's when I concluded
that Intel CAS (married to current generation Intel SSDs) isn't a fast
product - and is not in the kind of performance league where a user would
seriously worry about this type of scalability question.
Intel's ideal end-user customers right now for CAS are
people who have been using no SSD acceleration at all coupled with hard drive
arrays. That performance
silo could change - with faster Intel SSDs in the future - and isn't due
to limiting characteristics in the software.
I asked - Does it support
3rd party SSDs?
I was told - the standard release only supports Intel
SSDs. But there's nothing in principle to prevent it being used with other SSDs
using the open source release of the software.
The product is a read
cache. I was told that it makes very good use of whatever RAM is in the server
to optimize both read and write performance. However, my view is that as Intel
SSDs aren't fast - this is somewhat academic.
I asked about the time
constants which are analyzed by the caching software - and learned that -
depending on the app - the data usage period which is analyzed goes up to days.
(Generally in this type of product longer is better - and when you go up from
milli-seconds and seconds to minutes, hours and days - you have the potential to
get better caching results.)
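The benefit of a longer analysis window can be sketched with a toy admission policy that scores blocks by an exponentially decayed access count, where a half-life measured in hours rather than seconds lets the cache remember working sets across bursts. (Illustrative Python only - an assumption about how such aging might work, not Intel's actual CAS algorithm.)

```python
import math
import time

class DecayedFrequencyCache:
    """Toy read cache admission policy: score each block by an
    exponentially decayed access count. A long half-life (hours or
    days) lets the cache remember working sets across bursts."""

    def __init__(self, capacity, half_life_seconds):
        self.capacity = capacity
        self.decay = math.log(2) / half_life_seconds
        self.scores = {}        # block id -> (score, last access time)
        self.cached = set()

    def record_access(self, block, now=None):
        now = time.time() if now is None else now
        score, last = self.scores.get(block, (0.0, now))
        # decay the old score to the present, then count this access
        score = score * math.exp(-self.decay * (now - last)) + 1.0
        self.scores[block] = (score, now)
        self._admit(block, now)

    def _admit(self, block, now):
        if block in self.cached:
            return
        if len(self.cached) < self.capacity:
            self.cached.add(block)
            return
        # evict the lowest-scoring resident block only if the new
        # block's decayed score beats it
        victim = min(self.cached, key=lambda b: self._current(b, now))
        if self._current(block, now) > self._current(victim, now):
            self.cached.remove(victim)
            self.cached.add(block)

    def _current(self, block, now):
        score, last = self.scores.get(block, (0.0, now))
        return score * math.exp(-self.decay * (now - last))
```

With a one-hour half-life, a block touched twice a few seconds ago keeps its cache slot against a block touched only once - which is the point of analyzing longer periods.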
I learned that Intel CAS isn't written
around the data structure or interface - and is hardware agnostic. Users can
tell the software which apps they want to cache - via a control panel. This is
very useful in environments where a single server is running a mix of apps -
some of which are critical (in performance needs) while others are not.
I asked - does CAS have to have advance knowledge of the app? - Is it
optimized for a preset list of apps?
I was told - No. It will work
just as well for - what I called - "dark matter software" - which might be a
proprietary app which no one else knew about.
I asked if Intel collects
stats from the general population of installed servers which use the software? -
in order to improve tuning algorithms...
I was told - No. The
optimizations (data eviction probability rates) are done based on what is
learned on the customer's own server and private data - and the factory shipped
software. There isn't any wider intelligence gathering or snooping process involved.
I learned that a special feature of this Intel CAS release
is the ability to share cache resources with a remote SSD. The data stays hot
and doesn't have to be recreated when different virtual machines are accessing
this type of resource.
Overall I came away with a good impression of
the CAS software and how well the NEVEX technology idea has been assimilated
into Intel's SSD business.
It will undoubtedly help Intel sell more
SSDs to people who have never used enterprise SSDs before - and maybe also to
people with low end apps who have used SSD acceleration before but whose
first choice of SSDs wouldn't otherwise have been Intel.
aligning database block sizes with SSDs
February 5, 2013 - I was only saying to someone yesterday that I've had
emails from readers who are designing
software for SSDs who
- having researched the subject of
flash etc - then spent
too much time over-worrying about internal SSD hardware details they
really shouldn't be concerned with - because by the time they learn about them -
those hardware issues are ancient history.
By a curious
coincidence today I came across a recent blog by Chas. Dye at Pure Storage
DON'T Fiddle with Your Database Block Size! - which also warns about this tendency.
Chas says - "At Pure Storage, we believe that a factor
that should never influence the block size decision is your storage subsystem."
Editor's comments:- I'd certainly agree that trying to slavishly make your data
structures look like something you've read about which might be inside an
SSD controller is
probably a waste of time - because unless you know the SSD designer you don't
really know what's going on - and the abstraction you read about in some web
site is only a small part of the picture. If an
SSD is so sensitive to
the data you hit it with - it's not the SSD you should have bought in the first place.
Violin acquires GridIron
Editor:- January 21, 2013 - Violin today announced
it has acquired GridIron.
Editor's comments:- in
October 2012 I
listed GridIron as 1 of the 3 main contenders to
Fusion-io in the
enterprise SSD software
stakes - with the qualifying comment...
"GridIron - probably has
the most sophisticated SSD
ASAP software in the industry. But it's a shame it has been tied (until
recently) to their hardware - an SSD HDD hybrid box."
The announcement - which adds to the growing list of
acquisitions in the modern era of the SSD market - will enable Violin to
strengthen its already established authority in the enterprise SSD rack market.
Virident's PCIe SSDs VMware Ready
14, 2013 - Virident announced
that its FlashMAX II
family of PCIe SSDs
has achieved VMware Ready status.
Samsung acquires an SSD software company
December 15, 2012 - Samsung has
acquired an SSD
software company - NVELO
which operates in the SSD
ASAPs (caching) market.
IOPS / $ as a goodness metric for enterprise SSDs is bad
December 5, 2012 - The
cost of SSDs is one
of the arguments most often cited by SSD skeptics to explain why (in their view)
the transition to a pure SSD storage market can't happen.
The designers of the first ships made from iron (which unlike wood doesn't
float) and the first airplanes (which were heavier than air) must've got used to
hearing similar objections. ...more in SSD news
Enmotus demos its SSD ASAP technology
November 27, 2012 - Enmotus
is demonstrating its auto-tiering software - which it calls
MicroTiering technology (pdf) - for the first time in public this week at
the Server Design Summit.
in-memory database is even better with FIO's flash SSDs
November 19, 2012 - McObject
announced that it has run
benchmarks of its (intrinsically
designed for) in-memory database systems software - with transaction logging
enabled - on a number of different devices - and in particular Fusion-io's ioDrive.
Editor's comments:- In a paper published 3 years ago
- In-Memory Database Systems:
Myths & Facts - McObject said that fast flash SSDs used as the
storage hot spot for traditional database software could never get performance
as good as their own in-memory solution running in DRAM with legacy hard drive
array bulk storage - and various remarks in that paper sent out a strong
anti-SSD message which the company is in effect correcting today.
What McObject is now saying is that by using a fast low latency SSD for the "performance
draining" transaction log - you can get even greater speedups. There
are other benefits too - which arise from the efficiency of their small
footprint database - which means that a software product - which was originally
designed for the DRAM-HDD world - is a good fit in the flash SSD world too - if
you have the right scale of data and the right SSD.
McObject's Marketing Director Ted
Kenney emailed me to clarify a couple of points about their product
and my interpretation of their business thinking. Here's some of what he said.
I would point out one thing about your blog post, just to clarify from McObject's
point of view.
You mention the Myths & Facts white paper, specifically where we
argue (Myth 3, I believe) that an IMDS will always be faster than an on-disk
DBMS that uses an SSD to store records.
Keep in mind that that
paper's comparison does not touch on transaction logging. At least, transaction
logging is not mentioned; the assumption (our assumption, in writing it) is that
the comparison is between a "pure" IMDS (all data kept in main memory
and nothing stored to persistent media), and an on-disk DBMS that stores records
on a SSD. Our conclusion was that while the DBMS storing to SSD is likely faster
than a DBMS storing to HDD, it still can't touch the performance of a pure IMDS.
In contrast, our recent comparison (the subject of the press
release I sent you) is focused differently: it presumes that the user wants data
durability and recoverability. That rules out use of the pure IMDS (because RAM
storage is volatile), so we instead look at solutions that deliver
recoverability/durability, specifically an on-disk DBMS storing records to
persistent media vs. an IMDS with transaction logging (let's call that IMDS+TL).
Then for the IMDS+TL we measure performance using different storage devices:
HDD, SSD and ioDrive.
The result: an IMDS+TL storing its log on HDD beats the performance
of a DBMS storing records to HDD (by about 3x). If you then "upgrade"
the device on which the IMDS+TL stores its transaction log, the performance
difference (compared to DBMS+HDD) is even greater (as much as 15x when
using the ioDrive). But the recent round of testing did not look at
the "pure" IMDS performance. If it had, the pure IMDS would have beat
the IMDS+TL using any of the devices to store its transaction log.
We hadn't considered that our message in the earlier white paper was "anti-SSD"
or that we were now correcting that message. Instead, we'd say that the earlier
paper looked at a scenario in which performance is the highest goal (the
only goal mentioned, anyway) whereas the new tests focused on performance, with
durability/recoverability as an additional requirement.
comment - "It seems that a software product which was originally
designed for the DRAM-HDD world is a good fit in the flash SSD world, too
if you have the right scale of data and the right SSD." - Actually
eXtremeDB was designed for the DRAM world initially (in-memory only). When we
later added support for persistent storage (first with transaction logging,
later with optional persistent storage for selected record types) we were (and
still are) agnostic: eXtremeDB does not recognize or care about the type of
persistent media used.
Again thanks for taking the time to
look at our news and at our various statements vis-à-vis flash, storage
and performance. It sounds like you understand our technology and the issues
involved. I just wanted to point out that the white paper's discussion, and this
recent press release, take slightly different perspectives on what the
developer/end-user is trying to accomplish.
OCZ's new VXL software release includes fault tolerant support
for arrays of PCIe SSDs
Editor:- October 23, 2012 - OCZ today announced
a new version (1.2) of its VXL
cache and virtualization software - which provides high availability,
synchronous replication and enhanced VM performance across arrays of the
company's Z-Drive R4 PCIe SSDs.
The company says this assures that host-based flash is
treated as a continuously available storage resource across virtualized clusters
and yields no data loss and no VM downtime even during complete server failures.
"By combining the power of storage virtualization and
PCIe flash caching, and by working centrally with the hypervisor rather than
with each local VM, we have developed a solution that takes full advantage of
flash without losing any of the benefits associated with virtualization,"
said Dr. Allon Cohen,
VP of Software and Solutions, OCZ. "VXL's ability to transparently
distribute flash resources across virtualized environments provides IT
professionals with a simple to implement solution..."
in the SSD software golf challenge - who's got a similar handicap?
Editor:- October 2, 2012 - last week I was asked
by a reader (who didn't want to be named here) if I could suggest any
companies which have SSD software as powerful and far reaching as that of Fusion-io.
I thought it would be much too simplistic to answer with a list of names taken
out of context - so instead I said there are several different levels at
which you can view and analyze this:-
- communicating intelligence between the API and raw flash level
- working between different storage systems and software components (caching,
tiering, virtualization, data protection etc)
After using a lot more words in my email than I've
used here - the end result was a reply to my reader with a list of companies
which you wouldn't be too surprised to see if you looked at the list of top
enterprise SSD companies and correlated that with who's acquired or developed
their own software. The list ran something like this:-
- working in different markets -
consumer? - you ask - I thought we were talking about Fusion-io?
As I mentioned a few years ago - Fusion-io's software is applicable to consumer markets too.
It's simply a commercial decision not to pursue that avenue in the current
unprofitable state of the consumer market. But in the long term it's one of the
reasons that the company is rated as being so valuable - because its technology
can span solid state storage from the level of Ultrabooks (with PCIe inside)
up to enterprise racks.
- FlashSoft (acquired by SanDisk) - whose caching products
have the makings of a serious industry platform.
- GridIron -
probably has the most sophisticated
SSD ASAP software in
the industry. (In my email I said - shame it's tied to their hardware -
an SSD HDD hybrid box. But this week - that has changed. See the notes below
for more about this.)
Working between the hardware layers to optimize the system within enterprise racks and
arrays - the ability to hop in with intelligence gained from another level to
tweak performance and reliability - is a genuine efficiency asset.
- SANRAD (acquired by
OCZ) is also a contender.
- Virident -
have several layers of intelligence in their PCIe SSD software. They don't like
to talk too much about the details. But it's one of the things which makes
their offering stronger than many others.
- Nimbus - started
out using a standard SSD controller in their 2.5" SAS arrays - but have
added some firmware level access points which they leverage from higher levels of the stack.
And in the consumer software space I suggested:-
- Skyera - is
probably the hottest example of this. They dive in at many levels to increase
efficiency of the way they use flash.
The only real surprise in the list above to
regular readers - might be GridIron - which because they haven't been a true
pure SSD company (their main product is hybrid SSD and HDD boxes) don't get so
many mentions on these SSD pages.
- EasyCo - the very
first enterprise SSD software company which was bumped aside by the
technology wave - has found a new market opening selling their endurance
and performance enhancing software to makers of cheap flash storage for phones
and consumer devices. It's no longer world beating IP - but it has its uses.
(And maybe attractive for future patent trolls.)
Anyway - I was reminded about the
above email exchange when I saw GridIron's
press release in my email this morning regarding their
GT-1500 Data Accelerator Appliance - a 2U 12TB SSD ASAP - which can
accelerate up to 120TB of back end storage.
In one way this can be
regarded as an extrapolation of their original appliance
- which was launched 3 years ago. But the difference is in the detail and
sophistication of the hotspot algorithms - which GridIron describe as "multi-zone
behavior profiling (pdf)"
GridIron have a new (to me)
marketing tagline - "Tier 0 Performance at Tier 2 Pricing" -
I don't like SSD
tiers myself - I prefer the idea of
application silos. But GridIron's summary of what they do is better than most.
Going back to the original question at the start of today's blog -
Do I know any vendors whose SSD software can match or beat Fusion-io's?
Overall - the answer is - No. But in many important
areas the answer is - Yes.
In my ramblings today (remember this
started out as a much longer rambling email) you can see that the
SSD software market is
alive, healthy and just as competitive as the flash hardware business.
Apologies to all the other companies I could have named but left out. You'll get
your turn later.
AMD will rebrand Dataram's RAMDisk software
September 6, 2012 - Dataram announced
it will develop a version of its RAMDisk software which will be rebranded
by AMD in Q4 under the name of Radeon
RAMDisk and will target Windows market gaming enthusiasts seeking (up to 5x)
faster performance when used with enough memory.
STEC mini-survey suggests that 60% of serious VM users already use SSDs
Editor:- August 28, 2012 - A mini-survey
of visitors attending the first day of VMworld - conducted on behalf of STEC -
suggested that over 60% of attendees already had SSDs in their
datacenters but also that less than 50% of their business-critical applications
are currently supported by SSDs.
is SanDisk really nurturing true enterprise SSD DNA?
August 15, 2012 - Do you remember FlashSoft?
Many of you still do. It was one of the
SSD software companies
before it got acquired 6 months ago by SanDisk.
One of the tips in the
Guide to Enterprise SSDs - is that when it comes to SSDs - rules are
made to be broken.
And earlier this week I learned this can
apply to my own gut feel rules of thumb too. The unwritten rule being that
semiconductor companies generally make a mess of enterprise software and are not
so hot at understanding the enterprise SSD market either.
I had expected that FlashSoft would disappear into SanDisk - and would get
smothered by a marketing organization which had many times before demonstrated
its lack of awareness of the fundamentals of good enterprise SSD marketing. And
that was the tone of my parting message to the founders along with a few words
of congratulations as they disappeared into the new SNDK afterlife. I never
expected to hear from them again.
So the first thing I asked Rich Petersen -
(former VP of Marketing at FlashSoft and now Director, Marketing Management at
SanDisk) a few days ago was - how are they doing as part of a chip company?
What are they doing with the FlashSoft brand? How do they plan to develop the
enterprise SSD business? etc.
One of the things that Rich had wanted
to talk about was the release of new support in their caching software for VMware.
We spent a lot of time talking about that too - and had a big discussion about
the role of SSD software - not only as a business tool - but in effect as a new
way of virtualizing and looking at enterprise SSDs and how they can fit into
architecture models. (My view is that a powerful SSD suite - if it becomes
widely used - can be as significant to the SSD market - as a new interface
or form factor.)
We covered enough ground to write several long
articles. I'm not going to do that today - because I'm supposed to be on
vacation and sitting out in the garden by my pool.
So you should
regard this as the really really short version - and a placeholder for much
more detail which I will return to later.
FlashSoft - or the
enterprise SSD software part of SanDisk (or whatever else you may want to call
it) is today operating in a business mode which is like what you would expect
from a best of breed enterprise SSD systems company. They talk to end users like
they've always done. They learn to change important aspects of how the products
work and are sold because of feedback from end users - and not because they've
read somewhere that something is a good idea.
There are some surprising consequences of this at the
technical and business level.
Chief among those surprises for me is
that FlashSoft says it will still support other brands of SSDs. Rich
explained this was just a pragmatic business decision. Big users told them they
like FlashSoft - but they already use or might want to use non-SanDisk SSDs.
These users are only going to standardize on one SSD software platform. They
don't want to learn 2 different ways of doing the same thing.
On the other hand - an advantage of having access to an enterprise SSD maker is that if a
big user needs some expensive hardware on which to evaluate the benefits of
their software - then it's easier on the marketing budget to get some SanDisk
SSDs to do this.
FlashSoft's visibility into what enterprise end users
really do - and the surprising preferences they have - which are driven by
customer business optimizations rather than simplistic technical extrapolations
- also means that - like rackmount SSD companies - FlashSoft learns valuable
market lessons which can be reapplied to optimize designs in future SanDisk SSDs.
Violin plugs some software gaps with Symantec
August 13, 2012 - Violin announced that its
rackmount SSDs can now support snapshots, cloning, dedupe, replication and
thin provisioning - based on software IP from Symantec.
Fusion-io does a few new things
Editor:- August 2,
2012 - the performance and strategic importance of
SSD software was
reinforced in 2 recent announcements by Fusion-io.
The first was its new ION software
- which is a toolkit for building your own network compatible
SSD rack by
adding some Fusion-io SSD cards and their new software to any leading server.
The concept isn't entirely new - because oems have been doing this
with various different brands of
PCIe SSDs for years
and this is a well
established alternative market segment for PCIe SSDs. What is new - is
that it makes the whole thing much easier.
Fusion-io says this new
software product "delivers breakthrough performance over
iSCSI using standard
protocols." (1 million random IOPS (4kB), 6GB/s throughput and 60
microseconds latency in a 1U rack.)
Earlier this week FIO announced
it was collaborating on getting interoperability in server-side flash
with NetApp. It's
easier now to write a list of major storage systems oems who aren't doing
something significant with FIO.
Going back to SSD software...
Sun Microsystems created and leveraged the phrase - the
Network is the Computer.
I have long thought an apt
reinterpretation of that in this decade is "the SSD is the computer"
- or maybe the "SSD software is the computer" - because the ultimate
characteristics of fast computers are determined more by the SSD architecture
which is installed - than by the same old CPU chips.
AutoCache for PCIe SSDs
Editor:- July 23, 2012 - Proximal Data announced
immediate availability of its first product - AutoCache - an
SSD ASAP - designed
to work with PCIe SSDs from leading vendors. AutoCache (in its version
for cache sizes less than 500GB) reduces bottlenecks in virtualized servers to
increase VM density, efficiency and performance. The company says it can
increase VM density up to 3x with absolutely no impact on IT operations.
Editor's comments:- here are some questions I asked about the
new product - and the answers I got from Rich Pappas,
Proximal's VP of sales and business development.
Editor:- How long
does it take for the algorithms to reach peak efficiency?
Pappas:- It varies by workload, but typically it takes about 15
minutes for the cache to warm to reach peak efficiency.
Editor:- Is the caching only on reads, or is it effective on writes too?
Pappas:- AutoCache will only cache reads, but by virtue of relieving the backend
datastore from read traffic, we have actually seen overall write performance
improvements as well. This effect is also dependent on the workload.
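The read-only caching behavior Pappas describes can be pictured as a minimal write-around cache: reads warm the cache, while writes go straight to the backing store and invalidate any stale cached copy. (A hypothetical Python sketch with names of my own - not Proximal's implementation.)

```python
class ReadOnlyCache:
    """Minimal sketch of a read cache in front of a slower datastore.
    Reads populate the cache; writes go straight to the backing store
    and invalidate any stale cached copy (a write-around policy)."""

    def __init__(self, backing_store):
        self.backing = backing_store  # dict-like: block id -> data
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]   # slow path to the datastore
        self.cache[block] = data     # warm the cache on the read path
        return data

    def write(self, block, data):
        self.backing[block] = data   # writes bypass the SSD cache...
        self.cache.pop(block, None)  # ...but must invalidate stale data
```

Because repeated reads are absorbed by the cache, the backend sees less read traffic - which is why overall write performance can improve too, even though writes themselves are never cached.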
Amazon offers explicit SSD performance in the cloud
July 19, 2012 - There are many ways SSDs can be used inside
classic cloud storage
services infrastructure:- to keep things running smoothly (even out
IOPS), reduce running costs etc. Amazon
Web Services recently launched a new high(er) IOPS instance type for
developers who explicitly want to access SSD like performance.
In 3 to 5 years time all enterprise storage infrastructure will be solid state -
but due to economic necessities it will still be segmented into different types
by speed and function - as I described in my
SSD silos article -
even when it's all solid state.
I predict that when that happens -
AWS's marketers may choose to describe its lowest speed storage as "HDD
like" - even when it's SSD - in order to convey to customers what it's
about. It takes a long time for people to let go of old ideas. Remember
Virtual Tape Libraries?
Software is key to SSD innovation - says blog from Virident
June 29, 2012 - Dedupe and fibre-channel are some of the innovations discussed
in a blog by Jeff Sosa,
Director of Product Management, Virident -
who poses the question - is flash storage an incremental or a radical innovation?
Sosa's article goes on to say - "The 'radical'
innovation in the host-attached flash storage marketplace today comes from
products that not only access flash through a PCIe connection, but also bypass
storage protocols to drive new levels of performance and enable new
functionality not previously imagined." ...read
Nutanix announces a new NFS for PCIe SSD accelerated CPUs
June 12, 2012 - Nutanix
announced the general availability of NDFS (Nutanix Distributed File
System), a bold new distributed filesystem that has been optimized to leverage
localized low latency PCIe
SSDs such as those from Fusion-io.
By shifting the NFS datapath away from the network directly onto the VMware vSphere
host, NDFS bypasses network communications that have historically been fraught
with multiple high-latency hops between top-of-rack and end-of-row switches.
Nutanix accelerates both read and writes for any workload.
Resilience and high availability are achieved by data mirroring across high-speed 10GbE links.
Editor's comments:- Nutanix is in the
SSD ASAP market -
with equivalent functionality integrated in the OS. The company says their
architecture "collapses compute and storage into a single tier." You
can get the general idea from their website.
STEC releases SSD cache software for any make of SSD
June 6, 2012 - STEC announced
the general availability of the company's
EnhanceIO SSD Cache
Software for Linux and Windows environments with pricing starting from $295
and $495 (per server) for a 1 year subscription.
STEC says its SSD
cache software can be used with any vendor's SSDs.
In addition, a Linux version of EnhanceIO SSD Cache Software, based on the
Flashcache caching module, will be made available under a general public license.
"As one of the original architects of Flashcache, I'm extremely
pleased to see this technology being enhanced and supported by STEC in their
EnhanceIO software," said Mohan Srinivasan, software engineer at
Facebook. "Flashcache has proven to be an invaluable tool for accelerating
application performance at Facebook."
Users can choose from a selection of caching schemes and block sizes
to suit their preference and SSD's capabilities. STEC stores the metadata for
the cache in system DRAM rather than in the SSD. The DRAM required for the
cache is 0.1% of the cache size so a terabyte of SSD cache requires about
1GB of DRAM support. Product support tools include a profiler which can collect
user data and suggest the best policy option parameters for the cache setup.
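The quoted DRAM overhead is easy to sanity check with simple arithmetic (illustrative Python - the function name is my own, not part of EnhanceIO):

```python
def cache_metadata_dram_bytes(cache_size_bytes, overhead_ratio=0.001):
    """DRAM needed for cache metadata at a given overhead ratio.
    STEC quotes roughly 0.1% of the cache size (ratio = 0.001)."""
    return int(cache_size_bytes * overhead_ratio)

TB = 10**12
GB = 10**9

# a 1TB SSD cache needs about 1GB of DRAM for its metadata
assert cache_metadata_dram_bytes(1 * TB) == 1 * GB
```

Keeping metadata in system DRAM rather than on the SSD trades a small, predictable memory cost for faster cache lookups.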
Editor's comments:- irrespective of the technical strengths and weaknesses (and
pricing model) of this new product compared to other competing
SSD ASAP / caching
offerings - one question which immediately springs to mind is this. How
serious is STEC about making this software work as a standalone product? And if
it becomes successful will the company be tempted to bundle it free with its own SSDs?
NEVEX offers free trial of $5K value Linux caching software
May 29, 2012 - NEVEX
says it's offering the 1st 30 people who trial its
SSD ASAP / caching
software for Linux - the option to keep the production version free.
I spoke a few minutes ago to Nigel Miller,
VP Business Development, NEVEX - to test if his phone number is correct -
because that's the response mechanism.
I asked how much can someone
save by taking up the offer?
He said the regular price will be $5,000
per cached terabyte.
I also said it was unusual in the web industry to
have nothing on their web site about this - and he said they wanted a quick and
easy way to talk to people. He also said that if you are one of the early
responders you will get good access to their technical support people. As time
is of the essence here's the number if you're interested:- +1 647-393-2200
welcome to the new alchemy - converting SSD software to gold
Editor:- May 29, 2012 - a new blog today on StorageSearch.com
where are we
now with SSD software?
For over 30 years
the SSD market operated in a near software vacuum. Why did it take so long
for the systems software industry to do anything useful with SSDs?
And why are seemingly insignificant little SSD software companies today being
gobbled up at prices which seem to have no connection to what they could ever
earn from license sales? ...click to read the
60 seconds to make your SSDs accelerate even faster
May 8, 2012 - VeloBit announced version
1.1 of its SSD caching software for Linux - called HyperCache. (VMware and
Windows versions are in Beta.)
Editor's comments:- I spoke to
VeloBit's CEO, Duncan
McCallum about the company and the new product.
Like many other
SSD ASAP software packages - HyperCache ducks the problem of managing writes
by effectively only caching host reads and bypassing the caching
SSD when doing host writes.
Duncan said the software is efficient in
its use of host resources. It takes up less than 3% of host server CPU cycles
and about 2% of RAM (compared to the capacity of the attached SSD cache).
How is VeloBit's caching software different?
In use - the company says its
content locality caching
uses the signatures of the data patterns which already are used
frequently and have lots of references in order to predict and prioritize the
caching of similar looking data. In that respect - the cache manager is
learning something which is unique to that apps environment rather than simply
caching blocks based on where they are address-wise relative to the current hot spot.
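A crude way to picture content-based (rather than address-based) caching decisions is to key reference counts on block content signatures, so a block is prioritized when its content matches data that is already popular. (Illustrative Python only - VeloBit's actual content locality algorithms are proprietary and certainly more sophisticated.)

```python
import hashlib

def signature(block_data):
    """64-bit content signature for a data block (truncated hash)."""
    return int.from_bytes(hashlib.sha256(block_data).digest()[:8], "big")

class ContentLocalityTracker:
    """Illustrative only: count how often each content signature is
    referenced, so blocks whose content matches already-popular data
    can be prioritized for caching - as opposed to address-based
    policies that only look at where a block lives on disk."""

    def __init__(self, hot_threshold=3):
        self.ref_counts = {}
        self.hot_threshold = hot_threshold

    def observe(self, block_data):
        # record one reference to this content pattern
        sig = signature(block_data)
        self.ref_counts[sig] = self.ref_counts.get(sig, 0) + 1

    def should_cache(self, block_data):
        # cache when the content pattern has enough references
        sig = signature(block_data)
        return self.ref_counts.get(sig, 0) >= self.hot_threshold
```

The key difference from an LRU-style policy is that the decision is keyed on what the data looks like, learned per environment, rather than on its disk address.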
In its business model - Duncan said he wanted to make VeloBit's
software easy to adopt and install via web marketing. A design goal was to make
HyperCache capable of being installed in under 10 minutes. He said the new
launch version typically installs in under 60 seconds!
Duncan said they have tested their software with SSDs in various form factors
from leading companies - and commented that when it came to
PCIe SSDs - they found
their software produced the best results with Virident - which he said
produced the fastest SSD caching results of any SSD they had yet tested.
Some aspects of VeloBit's offering (to me) look similar to many other previous SSD
software products:- internal compression, write attenuation, real-time dedupe
and pricing on a per CPU basis.
With so many companies vying for the
same customer share of mind the thing which stands out for me is the 60 seconds
install time. Even allowing for a degree of future software bloat - the slowest
part about acquiring new SSD ASAP software could soon become typing in your
credit card details.
Permabit launches SSD dedupe software
25, 2012 - Permabit announced
a low latency dedupe
engine (pdf) which has been optimized
for flash SSDs and which is scalable to millions of IOPS.
The product is aimed at SSD appliance makers.
NVSL paper discusses kernel adaptations to unfetter fast SSDs
March 8, 2012 - a recent white paper -
Safe, User Space Access to Fast SSDs (pdf) - published by academics at
the NVSL (Non-Volatile Systems Lab) at UCSD - discusses techniques for reducing kernel
associated overheads in the filesystem by an order of magnitude without removing
security and file permissions.
The authors say - "Our intent is
that this new architecture be the default mechanism for file access rather than
a specialized interface for high-performance applications. To make it feasible
for all applications running on a system to use the interface, Moneta-D supports
a large number of virtual channels. This decision has forced us to minimize the
cost of virtualization."
Micron acquires Virtensys
Editor:- January 20, 2012
- Micron today announced
it has acquired the assets of UK based Virtensys which marketed
rackmount SSDs stuffed
with Micron's PCIe SSDs and supported by a patented multi-server sharing architecture.
Editor's comments:- if buying an
SSD software company
was a good idea for leading
PCIe SSD makers like
OCZ - then Micron has to
follow suit or get out of the game.
Chipmakers generally dislike
buying "systems" software companies - because they don't understand
systems and risk alienating their oem customers. But Micron's reputation won't
be dented if they can't leverage the Virtensys software. Everyone knows how hard
it is to get real value out of a software acquisition. And in the next few weeks
more people will take another look at
Micron's SSD pages.
So it's paid for itself already.
OCZ acquires SANRAD
Editor:- January 10, 2012 -
has acquired SANRAD
for $15 million.
"SANRAD's software is a wonderful complement to
OCZ's Flash technology," said Oded
Ilan, CEO of SANRAD Inc. "We are excited with the opportunity
created by this unique combination between storage virtualization, caching and
PCIe Flash storage."
Editor's comments:- this makes the
4th SSD IP or company acquisition that OCZ has done that I've written about on
these pages. 3 out of the 4 have aimed squarely at the enterprise SSD market.
SSD software will be
a powerful sales and business growth accelerator for
PCIe SSD companies in
2012 - as it will open
up new market opportunities much faster than previously possible with human
engineering assets. Put simply - it lets the software solve the problem of
integrating the SSD. It's more than simply
auto-tiering - but
that's an important enabling tool as well.
SANRAD was also the 1st
company to ship front loadable PCIe SSD modules BTW.
the New Business Case for SSD ASAPs
December 6, 2011 - StorageSearch.com
today published a new article -
the New Business
Case for SSD ASAPs .
What's an SSD ASAP?
It's going to be a huge market. SSD
ASAPs are 1 of the 6 main SSD product types that will be around in the pure
solid state storage datacenter of the future:
- auto-tiering SSD appliances
- SSD cache - the automatic kind
- SSD acceleration As Soon As Possible
- Auto-tuning SSD Accelerated Pools of storage
- combinations of the above
The word "new" in the title is deliberate. It
replaces an article I wrote about SSD ASAPs when the market started in 2009.
Since then - my thinking - and that of key players in the market - has
developed. This should no longer just be regarded as a tactical market to bring
the advantages of SSD acceleration to legacy hard drive arrays. ASAPs are an
essential interface between different levels of SSD storage. ...read the article
analyzer suite could speed up auto-tiering SSD evaluations
November 28, 2011 - hyperI/O announced the
availability of its Disk I/O Ranger software analysis tool for Windows.
The company says this will help users diagnose and
understand disk storage access performance problems and to verify that QoS
levels are being met at the application/file/device level. It could also
simplify the evaluation of
appliances by collecting real-time metrics.
2 Software Companies entered the top 20 SSD list in Q3
October 3, 2011 - As Q3 fades back into the past
- I'm busy analyzing stats and writing my comments for the new (18th quarterly)
edition of the top 20
SSD companies - which will be published here next week. This analyzes the
search stats of over 350,000 online readers in the past quarter who
have seen SSD content from StorageSearch.com
and the article includes my views of where these companies are heading in the
market (whether that's up, down or nowhere).
This is the 1st time
that SSD software companies have entered the top SSD companies list. One of them -
IO Turbine - would
have crept in towards the bottom end of this list - if it hadn't already
been acquired by Fusion-io.
The other one - which no one has acquired yet - did even better. Stay tuned..
SDS shrinks SSD IOPS in VMware
Editor:- September 15,
2011 - the use of SSDs
with VMware has popped up in these news pages in recent years more times
than I care to count. But I got a new angle on this a few days ago in a
discussion with Linda LaPorta,
President of Superior
Data Solutions .
Now you may ask - who is SDS? (the spelling is
important here) and what do they know about SSDs? (It had been several
years since I last heard from them too.) But you've all heard about
STEC's ZeusIOPS - right?
- Well SDS was
selling this particular enterprise flash SSD design in 2006 - before STEC
acquired it from Gnutek.
An SDS platform was also one of
Sun's early SSD offerings
too. But SDS have switched focus from raw hardware to applications - and they
are the US distributor for a product called
LaPorta told me - "...Our software is changing the game in VDI. Right now
IOPs is a big barrier to the acceptance of VDI because the cost to implement
storage can be very high. (Windows 7 users are figuring 24-28 IOPs per VM -
pricey if you need to provision HDAs for 10,000). We need a fast IO device to
store the virtual applications. We like a fast SSD, but it only needs to be 100
to 200GB. It is a read only drive that stores the master image of each
application. All the VM's go to a well cached
raid system. This is
where we reduce the IOPs to 2-4 /VM and we keep the capacity requirement
to 3GB/per VM (which is actually making it AFFORDABLE to consider all SSD
instead of HDDs)..."
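Taking LaPorta's figures at face value, the arithmetic behind her claim is easy to check. A quick sketch - the VM count and per-VM numbers below are simply the ones she quotes, used here as illustrative assumptions:

```python
# Back-of-envelope check of the VDI sizing figures quoted above. The VM
# count and per-VM numbers are the ones LaPorta mentions - illustrative
# assumptions, not measurements.

def vdi_requirements(vms: int, iops_per_vm: int, gb_per_vm: int):
    """Return (total IOPS, total capacity in TB) for a VDI deployment."""
    return vms * iops_per_vm, vms * gb_per_vm / 1000

# Conventional Windows 7 VDI: 24-28 IOPs per VM.
conventional_iops, _ = vdi_requirements(10_000, 28, 3)

# Cached master-image design: 2-4 IOPs and ~3GB per VM.
cached_iops, capacity_tb = vdi_requirements(10_000, 4, 3)

print(f"conventional: {conventional_iops:,} IOPS")  # 280,000 IOPS
print(f"cached:       {cached_iops:,} IOPS")        # 40,000 IOPS
print(f"capacity:     {capacity_tb:.0f} TB")        # 30 TB
```

A 7x reduction in provisioned IOPS is what makes the 100-200GB read-only SSD plus cached RAID design affordable at 10,000 seats.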
SSD impacts on future storage software?
11, 2011 - I recently had a conversation with a very knowledgeable
strategist at a leading enterprise storage software company. I won't say who the
company is - but if I did - most of you would know the name.
interesting thing for me was that he'd recognized that if the hardware
architecture of the datacenter is going to change due to the widespread adoption
of solid state storage - that will create new markets for traditional
software companies too.
And I'm not talking here about new
software which simply helps SSDs to work or
interoperate with hard
drives - but software which does useful things with your data - and which
can take advantage of different assumptions about how quickly it can get to that
data - and how much intensive manipulation it can do with it.
If you try some of these intensive R/W tricks on current storage infrastructures -
even if the front end is SSD accelerated - then your systems will hang. But in 3
to 5 years time - the ability to perform random IOPS on archived data
hundreds of times faster than today - will make backup and recovery faster,
and enable new apps to analyze and monetize data assets in a way which goes way
beyond what even Google can do today.
I find it encouraging that
these conversations are now taking place.
Because the way to the
future isn't just doing the same things faster. The future of SSD enabled
markets will be doing things which could never be done before.
Where will you read more about those new developments? You're already in the right place.
virtual server acceleration mistakes
21, 2011 -
5 Mistakes to
Avoid when trying to solve I/O Bottlenecks in Virtualized Servers is a new
article by IO Turbine
published on StorageSearch.com.
Needless to say most of the discussion in here revolves around the best use of SSDs.
Among other things - IO Turbine says "While many enterprise-class storage
providers offer automatic tiering with data migration to and from the SSD
storage, these solutions typically take place well after the need for the I/O
acceleration has passed." ...read the article
|"In 2014 we'll see
the battle lines for the SSD Platform being drawn up - as vendors all try to
convince you that any plans you make will be more future-proof if you use their
SSD platform."|
|do more tiers reduce waste?|
|Editor:- October 15, 2013 - The wastefulness of conventional storage
tiering is discussed in a recent blog -
On From Storage Tiering - by Chris M Evans, publisher
of Architecting IT - who
advocates the concept of having an infinite number of tiers so that - "each
server will be closer to receiving the performance level they need."|
He goes on to say - "If we can deliver that, move the data between tiers
dynamically and fix the wasted capacity issue within each tier, then we have our
ultimate storage device." ...read
Editor's comments:- The problem with
implementing this is that the most economical way to design storage systems is
still dependent on their likely speed and capacity characteristics. Users
buy products and they have to understand the differences between the products
they see in the market. (That job of segmentation is just as important for
marketers to implement precisely as the easier bits they
spend more time and money on.)
When I analyzed all the different
types of SSDs you need in the datacenter - from the architecture and use cases
point of view - I got to about 7 different types - which are distinctly
different - as described in my
SSD silos model -
which covers the spectrum from ultrafast RAM to archive solid state storage.
An SSD product which has been optimized for any one of these distinct uses will
be uneconomic or less competitive for the other uses.
I think infinite tiers - as proposed in Chris Evans's blog - can exist OK as
logical concepts in software
- but these infinite tiers will still have to map onto a distinct set of no
more than maybe 3 to 4 different physical SSD tiers in most customer sites.
Otherwise they will be wasteful and too expensive.
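The many-to-few mapping argument above can be made concrete with a small placement sketch. All tier names and latency figures below are hypothetical illustrations, not product data:

```python
# Sketch: logical tiers can be arbitrarily fine-grained, but they
# ultimately map onto a handful of physical SSD tiers. Tier names and
# latency figures are invented for illustration.

PHYSICAL_TIERS = [
    # (name, worst-case latency this tier delivers, microseconds)
    ("ultrafast-ram", 1),
    ("fast-flash", 100),
    ("capacity-flash", 1_000),
    ("archive-ssd", 10_000),
]

def place(required_latency_us: float) -> str:
    """Pick the slowest (cheapest) physical tier that still meets the need."""
    for name, latency in reversed(PHYSICAL_TIERS):
        if latency <= required_latency_us:
            return name
    # Nothing is slow-tier enough to qualify - fall back to the fastest tier.
    return PHYSICAL_TIERS[0][0]

# An "infinite" range of logical service levels collapses to 4 answers:
for need_us in (0.5, 50, 500, 5_000, 50_000):
    print(f"{need_us:>8} us -> {place(need_us)}")
```

However finely software grades its service levels, every request still lands on one of a few physically distinct designs.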
In the currently
foreseeable state of semiconductor
technology - the bounds of physics and
architecture favor designs in which you know in advance what kind of use the
memory cell population in each part of your SSD is being optimized for. Training
for a sprint requires a different conditioning regime to training for a marathon.
Although you can switch streams and repurpose cells
dynamically - this is done within the context of knowing which kind of race the
SSD is in from the outset. Running half a marathon fast and then dying
is not an attractive product option.
|"why are so many
companies piling into the SSD market - when even the leading enterprise
companies haven't demonstrated sustainable business models yet?"|
|hostage to the
fortunes of SSD|
|"Avoid Self-Encrypting SSDs (if you think you might need a future data recovery)..."|
|That's the "advice" in a blog
SSDs: Flash Technology with
Risks and Side-Effects (August 2013) - by Kroll Ontrack - which
goes on to say - |
"This type of encryption is very secure, but
ensures total data loss in the event of a failure. With SEDs, the encryption
keys are only known to the hardware manufacturers and will not be released.
What this means is in the event of a failure, the data is no longer accessible
to professional data recovery."|
| what shoes does the
Princess need now?|
|Editor:- July 29, 2013 - One of the
latency reducing tricks in a world where every SSD vendor has access to the same
flash memory and interface chips
is the applications magnifying power of
SSD software.|
The way that new SSD software gets things done faster is to avoid doing
some things at all - by carefully discriminating between what needs to be
done - compared to what would normally get done in blind obedience to legacy assumptions.
One of the ironies of legacy systems software running in
flash systems is the way that the data weaves through layers of fossilized
unreality where emulation is stacked on emulation - and hardwired into
the software and data flow logic are the remembered
once-deemed-to-be-efficient solutions to data flow control problems whose
origins are now almost forgotten.
So the SSD emulates a hard drive.
And the hard drive emulates memory.
And it gets worse.
The fetching and prefetching and polite but useless flurries of activity which
happen behind the scenes makes it appear more like a bunch of courtiers in a
fairy tale palace reacting to this simple request.
The Princess needs shoes!
What shoes? What color? What style? What for?
She hasn't said yet - just get as many shoes as you can carry and be quick about it!
Yet despite all this background mayhem the application - somehow -
still runs faster on SSDs than on the old hardware. (And the Princess has never
been seen in public without wearing appropriate footwear.)
Another way to save time (improve latency) is to say - what if instead of just
speeding up all the tangled processes of emulating a hard drive emulating
memory and worrying about all the old fossilized limits of packet sizes and
flow control in drives and interface cards which no longer exist except in
museums but which have been preserved in legacy software - we instead
make an effort to write some new software which knows it's operating in a flash
world and doesn't have to recite old HDD spells to charm the data?
what-if the Princess knows where the shoe room is - and rather than wait -
she's going to get the shoes for herself?
The implications of these
what-if? results (for SSD software) are easy to anticipate and we've seen what
happens when these ideas have
found their way
into SSD benchmarks but
it still takes time for these new ideas to work their way into standard software products.
And if the Princess changes her mind between the time she
sets off to the shoe room and when she gets there - she's still going to get the
shoes she wants quicker than
if she asked her maid.
All of which is a preamble to say that Fusion-io last week announced
that its Atomic Writes API contributed for standardization to the T10 SCSI
Storage Interfaces Technical Committee is now in use in mainstream MySQL
databases MariaDB 5.5.31 and Percona Server 5.5.31.
Princesses prefer not to be kept waiting.
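For readers wondering what this looks like in practice - the following my.cnf fragment is a hedged illustration only. The option names reflect my reading of Percona's documentation of its Fusion-io atomic writes support; verify them against the documentation for your exact server version before relying on them.

```ini
# Hypothetical my.cnf fragment: enabling atomic page writes on a
# Fusion-io device in Percona Server (option names per Percona's docs;
# treat this as a sketch, not a tested configuration).
[mysqld]
# Ask InnoDB to use the device's atomic write primitive:
innodb_use_atomic_writes = 1
# Atomic writes require unbuffered I/O:
innodb_flush_method = O_DIRECT
# With atomic page writes the doublewrite buffer becomes redundant -
# Percona's implementation disables it when atomic writes are enabled.
```

The point of the exercise: when the device guarantees a page write either completes or never happens, the database can stop doing the HDD-era double bookkeeping that guarded against torn pages.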
|"SSD is going down! -
We're going down!"|
If you've ever watched the movie - Black Hawk
Down - there's a memorable scene in which...
sudden power loss|
|If you've seen or read -
The Hobbit - then you'll be familiar with the concept of the riddle game.
Something similar is playing out now in the enterprise flash array market.
The setting? I forgot to mention this.
The hero - a mythical hobbit-like creature called "User" is
trapped in a high gravity well / force-field - just outside the entrance to a
cave in which are stored great treasures.
| playing the SSD box
inside the box|
|Editor:- May 29, 2013 - If you're an enterprise
user who is already sold on the idea of using more SSDs - what could be
better than a great new SSD drive?|
If you're an SSD
vendor looking for the magic formula to open up vast new untapped markets
for SSDs - what kind of solution do you need to offer to attract enterprises
who aren't at the sharp end of the performance pain curve, are content with the
speed they get from HDDs and who aren't even looking at SSDs for their network storage?
These problems have been preoccupying the SSD industry's
smartest thinkers for years.
And their answer to both questions is
the same. (Although details vary).
It's a new type of SSD box.
A new generation of enterprise SSD rackmounts is breaking all the
rules which previously constrained price, performance and reliability. The
sum impact of cleverly designed SSD arrays is systems which are many times
more competitive than you would imagine from any tear-down analysis of the
hardware alone. New thinking about rackmount SSDs is explored in the new home page
blog on StorageSearch.com -
thinking inside the box.
|EMC's flash educational video|
|Editor:- April 15, 2013 - I've been saying for
years that any simple analysis - like my
enterprise silos model
- makes it clear why no single flash product (or supplier) can economically
satisfy all requirements.|
The first idea is graphically encapsulated in a video
by EMC which they call "FLASH in a flash" which - because I'm
not a fan of SSD videos
- I only saw for the first time today.
This video also introduces a
smart and almost apologetic way of positioning
hard drive based
storage - as being for applications which can "tolerate multi-millisecond latency".
That's clever - because they know most
of you already have these HDD systems, and EMC is best known for these slower
rotating storage systems. That's how they get you to lower your guard by
introducing the familiar.
The 2nd half of the video - which is not so
good as a general flash video - suggests that EMC is the best supplier to look
at because it's got 25 years experience in storage.
In my view that
argument doesn't logically follow.
Experience in something that's
different is irrelevant. It's like suggesting that breeding horses would
have made Ford
better at designing engines.
Nice try by EMC marketing at subtle SSD
sales sophistry by linking irrelevant concepts though.
impact from RAID rebuilds becomes compounded with long rebuild times incurred by
multi-terabyte drives. Since traditional RAID rebuilds entirely into a new spare
drive, there is a massive bottleneck of the write speed of that single drive
combined with the read bottleneck of the few other drives in the RAID set."|
CEO - SolidFire
- in his recent
blog - Say
farewell to RAID storage (March 14, 2013).|
RAID & SSD
|the Modern Era of SSDs |
|Editor:- January 2, 2013 - My recent home page
Transitions in SSD - mentions some of the key changes in the SSD market
which took hold in recent quarters - but as we're starting another new
calendar year in SSD - I want to say more about the context here.|
in a market which appears to be so fast moving as the SSD market - where hot new
SSD companies can enter the
top SSD companies list
(ranked by search) within weeks of exiting stealth mode, and some new
SSD companies are
acquired within a few quarters of launching their first product - it can
still take years before new technologies which excite technologists and
investors are adopted by more than 10% of SSD users.
There are strategic multi-year big changes and transitions which are sometimes hard to
pin down to a single year. For example the transition in the enterprise SSD
market from RAM
to 98% flash - which took 8 years.
Although it's easy to
recognize the start of new technology changes - it's harder to be so precise
about big market shifts - because those - by their very nature - occur only
when enough people get hold of a new way of doing things and change their buying
behavior. Looking at the SSD market - 2013 now clearly marks the 10th anniversary of a
distinct market period which I now think of as - the Modern Era of SSDs.
What do I mean by the Modern Era of SSDs?
It's when SSDs
changed from being a niche tactical technology which satisfied the needs of
some markets (ruggedized military / industrial storage and next generation
server acceleration at any cost) to a time when the market advance of SSDs as a
significant well known core market within the computer industry became a
historical inevitability - and when the only serious technology which could
displace an SSD from its market role was another SSD.
Although products which we would recognize as enterprise SSDs were shipping for several
years before 2003 - it was in that year, 2003 - when there was enough confidence
in the minds of enough people in the SSD market that the future of SSDs could be
much bigger (100x bigger) and different to what had happened before.
It wasn't simply my publication of
an article at the time
which explained why this could happen - nor simply the immediately post
publication discussions I had with SSD industry leaders at the time - nor indeed
in later years when founders and managers of new SSD companies kindly
told me that some of their thinking about the possibilities for the SSD
market had been influenced by those earlier articles on StorageSearch.com.
It's just as much the case that the alternative futures which could have knocked
the SSD market off-course (such as faster CPU clock rates,
hard drives or faster
optical storage) didn't show up.
The year after year "no-shows" by SSD's past
phantom demons were just as important as the new SSD technologies which did put
in an appearance.
Today it's clear to anyone looking seriously at
the data economy - the SSD market is here to stay and has its sights set on
being at the center of your future hardware and infrastructure decision making.
What are the clues to big upcoming changes in SSD market thinking?
Can I say anything
at all useful at this stage about what the 2nd decade of the modern era of SSDs
will be like?
I think it will be the time when a critical mass of SSD
users become more sophisticated in their understanding and use of different
types of SSDs - and when each part of the SSD market becomes less generalized
and more focused.
It's not just about the
SSD software, and
it's not just about the
SSD chip technologies.
These simply outline possibilities. What's important - and what will become even
clearer - are the dividing lines and colors of application specific SSDs.
Application specific enterprise SSDs are a technology trend which started
shipping more than 3 years ago. But - as I said above - markets happen when
enough people have decided to make them happen - and not simply because
pioneering products are available.
|"In some ways, blocks
lost due to media corruption present a problem similar to recovering deleted
files. If it is detected quickly enough, user analysis can be done on the
cyclical journal file, and this might help determine the previous state of the
file system metadata. Information about the previous state can then be used to
create a replacement for that block, effectively restoring a file."|
CRCs are important - blog by Thom Denholm Datalight (January
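The detection half of the idea quoted above - knowing quickly that a block has gone bad - rests on storing a checksum alongside each metadata block. A minimal sketch, using CRC32 from Python's standard library (the block layout is invented purely for illustration):

```python
# Minimal sketch of CRC-based corruption detection for metadata blocks:
# store a CRC32 with each block so corruption is caught at read time.
# The block layout here is invented purely for illustration.
import zlib

def seal(payload: bytes) -> bytes:
    """Append a little-endian CRC32 to a metadata block before writing."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def check(block: bytes):
    """Return the payload if its CRC matches, else None (block is corrupt)."""
    payload, stored = block[:-4], int.from_bytes(block[-4:], "little")
    return payload if zlib.crc32(payload) == stored else None

block = seal(b"inode-table-v7")
assert check(block) == b"inode-table-v7"        # clean block passes
corrupt = bytes([block[0] ^ 0xFF]) + block[1:]  # flip one byte
assert check(corrupt) is None                   # corruption detected
```

Only once a bad block is flagged this way can the journal-based reconstruction Denholm describes be attempted.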
|In October 2002 - StorageSearch.com's
editor talked about the role of software versus human-ware in enterprise hot spot management:
"Until the storage management software
you run in your organization is intelligent enough to learn by itself what kinds
of applications you're running, and optimize the characteristics of your
different types of storage devices, your ability to make the best use out of new
storage technologies such as SSDs will be limited by your own technical skills
and the amount of work and effort you are prepared to put into solving your own
performance and resource utilization problems."
|Ancient storage software
management inhibits roadmap to $5 billion enterprise SSD market -
StorageSearch.com's news page blog (October 2002)|
|In November 2002
- Bill Gates, talking about Tablet PC's said:- "There are also a
lot of peripherals that need to improve here. ...Eventually even the so-called
disks will come along and not only will we have the mechanical disks going
down to 1.8 inch but some
kind of SSD... will be part of different Tablet PCs."|
|"In May 2003 - Imperial
Technology launched the world's first SSD tuning software tool called -
WhatsHot SSD - which analyzed real-time file usage on the SAN to identify
hot-files to place in SSD."|
|"In May 2004 - the
SPARC Product Directory published an article -
Why Sun Should
Acquire an SSD Company - which argued that integrating SSDs into Sun's
Solaris OS and servers would result in the fastest database servers and more
than make up for speed deficiencies in its SPARC processors."|
|In November 2006
- Microsoft announced business availability of its new Vista operating
system - loudly heralded as being the first PC market OS to include SSD-aware
support and native SSD cache management.
Vista (whether for SSDs or HDDs) proved to be so good that for
years after its launch millions of professional pc users upgraded back to XP.
|"In August 2007 -
EasyCo launched its "Managed Flash Technology" software to
enable enterprise grade RAID-5 arrays built from consumer grade flash
SSDs. MFT boosted SSD writes while also improving endurance..."|
SSD history - 2007
|"In September 2009
- Dataram launched the XcelaSAN - a fast 2U rackmount
SSD ASAP (auto
accelerating appliance) which automatically identified hotspots to relocate
critical data. The company said the XcelaSAN would automatically learn and self
optimize during the 1st few hours of operation..."|
SSD history - 2009
|In November 2009 -
Google opened its doors to developers who wanted to work with
Chrome OS - a new operating
system for tablets.|
In the opening video of the Chrome
OS blog we learned that the architects of the new OS were "obsessed
with speed". And the new netbook OS was designed from the ground up to
support only flash
SSDs as the default mass storage.
Google said - there is no room
in this OS for outmoded 50 year old
hard disk technology.
|how fast can your
SSD run backwards?|
|SSDs are complex devices and there's a
lot of mysterious behavior which isn't fully revealed by benchmarks, datasheets
and whitepapers. |
Underlying all of these are important aspects of SSD behavior
which arise from the intrinsic technologies and architecture inside the SSD.
Storage Software - by Zsolt Kerekes, editor
|The most common type of "storage
software" is that which does
backup or replication.
But there are a lot more different types of storage software than that.|
Drivers for making the hardware work with the OS are a good example. But since they
nearly always come from the same
IHV - there's no point
in listing such products here.
Software for analyzing storage can
range from simple storage bus analyzers which help developers debug driver code,
and freeware which looks at bottlenecks in your database, up to SAN wide
heavyweight packages which help you understand and manage an enterprise storage
network. And while on the subject of SANs - a few brave companies like
Wasabi have developed
what are in effect NAS products delivered as software.
As WAN storage networks have become more common the
concept of accelerating or deduping the communications payload has also received
a lot of developer attention. A leading pioneer in the IP acceleration software
market is NetEx, while
the list of storage
ISVs mentioned on these pages already runs into double digits.
Security is a big subject
- which has had its own pages for many years. And the
disk sanitizing market which started out as a software solutions market has expanded into
hardware - because it takes too long to erase hundreds or thousands of
discarded hard drives (or tapes) using software on a single PC.
Data recovery companies offer software downloads to help you with simpler recovery tasks. But
when your hard drives have been charred to smoky plastic or immersed in the
sludge waters of a flood - a UPS or Fedex upload is a more realistic solution.
If all of that sounds too complicated then there are plenty of independent
companies to help you do it yourself or (if you've got enough money)
companies who will take the problem of managing it all off your hands.
ISVs like to talk about "lifecycles" - because it makes it sound like
they've actually thought about what will happen to their teenage hacker
developed code, or your data, for more than 5 minutes.
Virtualization is another word which has been fashionable in recent years. Although every piece
of software that's not part of a hardware driver or OS kernel already includes
many levels of assumed virtualization.
One part of the lifecycle
which ISVs don't talk about so much - is that part when they are no longer in
business having gone bust
or been acquired. But it doesn't seem to stop new startup ISVs going to
the local storage VC-in-the-wall machines to request funding.
There are thousands of
storage software related stories in our news archive.