why the caching / tiering hybrid SSD appliance hockey stick sales curve is flat... by Zsolt Kerekes, editor - StorageSearch.com - May 28, 2010
why's no one buying auto-tiering / auto-caching SSDs?
the Problem with Selling Revolutionary SSDs to Risk-Averse Technology Laggards
Editor:- May 28, 2010 - OCZ yesterday announced that its SSDs have been qualified for compatibility with Adaptec's MaxIQ SSD caching products (which are SATA-compatible SSD ASAP controllers for Auto-tuning SSD Accelerated Pools of storage).
Originally launched in September 2009 to interoperate with Intel SSDs - Adaptec's product was at the forefront of a new wave of products designed to automate the acceleration effect of mixing SSDs with legacy HDD arrays in hybrid storage pools - a paradigm discussed in an article here on StorageSearch.com aptly titled Using SSDs to Boost Legacy RAID and Database Performance (published in 2004).
The idea (of using a small capacity SSD to cache a large capacity HDD array) has been around for more than 20 years - dating back to the dawn of the RAID systems concept - and this kind of speedup challenge was my #1 product design priority when I joined a hot start-up called Databasix in 1987. I didn't solve the problem - but during the next several years I learned a lot about performance measurements and how to achieve wire-speed writes in rotating (and solid state) storage arrays.
The SSD automatic tuning / optimization problem has never been successfully solved in a way which is economic for all applications. Personally I don't think it ever will be. Like all SSD acceleration schemes (except those which use 100% SSD without HDD) - the application speedup is environmentally specific. It depends on the server environment, the shape of the user process demand curve, the spatial distribution of hot-spot data and how well the caching algorithms match the real-life data flows.
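To make that concrete - here's a minimal sketch (my illustration, not any vendor's product: the block counts, the 90/10 skew ratio and the LRU policy are all assumptions) showing how the same size of SSD cache delivers dramatically different hit rates depending on the spatial distribution of hot-spot data:

# Minimal sketch: the same SSD cache size gives very different hit rates
# depending on how skewed the block-access pattern is. All numbers here
# are hypothetical illustrations.
import random
from collections import OrderedDict

def lru_hit_rate(accesses, cache_blocks):
    """Replay a block-access trace through a simple LRU cache."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:  # evict least recently used
                cache.popitem(last=False)
    return hits / len(accesses)

HDD_BLOCKS = 100_000   # back-end capacity (hypothetical)
SSD_BLOCKS = 5_000     # SSD cache = 5% of back-end (hypothetical)
N = 200_000

random.seed(1)
# Skewed workload: 90% of requests hit a small hot-spot region.
hot = [random.randrange(HDD_BLOCKS // 100) if random.random() < 0.9
       else random.randrange(HDD_BLOCKS) for _ in range(N)]
# Uniform workload: no hot spots at all.
flat = [random.randrange(HDD_BLOCKS) for _ in range(N)]

print(f"hot-spot workload hit rate: {lru_hit_rate(hot, SSD_BLOCKS):.1%}")
print(f"uniform workload hit rate:  {lru_hit_rate(flat, SSD_BLOCKS):.1%}")

With the skewed workload the cache captures nearly all the traffic - with the flat workload the hit rate collapses to roughly the cache-to-capacity ratio. Which is why no single set of algorithms designed into a box can suit every environment.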
Vendors in this market space face a tough challenge: their ideal customers are mostly SSD virgins who are conservative by nature and who opted out of earlier generations of SSD acceleration - because their installations were not big enough to make human tuning feasible, or because the cost benefits of SSD speedup versus server consolidation were not big enough to justify the risk to their own business operations.
That means SSD ASAP companies have been offering revolutionary products to potential customers who, by their own choices, behave as technology laggards.
No surprise it has not resulted in any hockey stick sales growth curves (yet). But I have no doubt that when some companies do prove that their way of doing SSD ASAPs is the safe way to go for specific applications (and specific server environments) - it will become a huge market. The clock is ticking. The SSD ASAP market is only viable until the SSD array cost of ownership model falls below that of HDDs.
..Much later:- The problems of being a new enterprise SSD supplier in a traditional storage market were discussed in later articles on StorageSearch.com.
...Later:- discussing these problems with some readers I used the idea of concatenated probabilities to explain why new vendors in this market were seeing slow sales ramps.
1st - the user has to believe that SSDs will solve their problems (and the ideal customer segment served by ASAPs has already resisted SSDs until now).
2nd - the user has to be convinced that your own proprietary SSD ASAP algorithms are the best match for the application and server environment they've got - and that you (the SSD vendor) will still be around in a few years to support them. Unlike vanilla SSD accelerators, these ASAPs only deliver speedups based on algorithms which are designed into the box. What if the algorithms aren't quite right and need to evolve? If the vendor goes bust, the user is stuck with an expensive storage box which performs no useful function at all.
Satisfying both conditions concurrently is a much smaller probability than satisfying either one. And to make things worse - establishing a reputation strong enough to calm nervous user FUD could take longer than the typical market life of a new SSD product.
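A back-of-envelope sketch of the concatenated probabilities idea (the numbers below are hypothetical illustrations, not market data):

# Hypothetical illustration of concatenated probabilities: if the two
# adoption hurdles are roughly independent, their probabilities multiply -
# so the chance of winning a sale is much smaller than either hurdle alone.
p_believes_ssds_help = 0.5   # 1st hurdle: user believes SSDs solve their problem
p_trusts_this_vendor = 0.4   # 2nd hurdle: user trusts the vendor's algorithms
                             # and expects it to survive to support them

p_sale = p_believes_ssds_help * p_trusts_this_vendor
print(f"chance of clearing both hurdles: {p_sale:.0%}")   # -> 20%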
Retrospective
It did indeed take years longer than pioneering SSD vendors had originally expected to get the SSD auto-caching / auto-tiering market established.
It wasn't until 2013 Q4 that a company whose primary business was hybrid SSD arrays - Tegile - entered the Top SSD Companies list.
Partly this was because it was unclear exactly where the boundaries of the SSD software element in these products should be drawn.
Another reason was that the earliest appliances lacked features which most users considered essential - such as an acceptable way of handling fault tolerance.
And another reason was that early appliances were limited in their scalability: assumptions about SSD capacities were too restricted, or assumptions about back-end storage were too short-term (an HDD speed-up view only - useless if the back-end storage was itself fast-enough SSD).
And another reason was sheer complexity.
Users were faced with too many different types of products which all claimed to solve the same kinds of problems - but which involved completely different types of approaches:
- appliance in a box - with the SSD, software and HDD already integrated
- appliance in a box - with the SSD and software inside - but designed to attach to pre-existing legacy HDD storage
- appliance as software - to run on the server - which leveraged PCIe SSDs or other SSDs from various 3rd party OEMs
- appliance as software - to run on the server - which leveraged SSDs from the same vendor
- appliance as software - to run on the server - which was pre-integrated in the SSD itself
- combinations of the above