Decloaking hidden and missing segments in the analysis of market opportunities for enterprise rackmount flash
by Zsolt Kerekes, editor - StorageSearch.com - May 28, 2014
One of the challenges
for the enterprise SSD market when designing new rackmount products is to
understand complex customer needs and decision criteria - which go beyond the
traditional bullet points.
New segmentation models are needed because
the enterprise SSD market is moving into uncharted territories and use cases
where a considerable proportion of the customer needs which affect buying
behavior are still formally
unrecognized
as being significant (in market
research data).
Additionally - inside user organizations there
are many internal preference factors which are rarely articulated as
being "SSD technology related needs" because they are considered
to be intuitively obvious by users who expect that their special needs
should already be known or inferable by competent SSD vendors who want to sell
to them.
In most markets these vendor knowledge gaps would be fed back
up the marketing intelligence chain by a direct sales force.
But
these market adaptation and correction processes can take years to deliver
effective results.
And just as "day zero" malware can remain
undetected by anti-virus software, so too can "year zero" patterns of
mismatched enterprise SSD customer needs go unrecognized by SSD vendors -
who simply shift their attention to easier, less problematic customers or
resort to the tactic of reducing their selling prices.
The result
is that too many factors at play in enterprise SSD market behavior still
don't appear as explicit assumptions in SSD product marketing plans.
Another
contributory cause for gaps in segmental understanding has been the
continuing pace of disruptive innovation in enterprise SSD-land - which has
meant there hasn't been a stable market template for vendors to follow from
one seemingly chaotic year to the next as they encroach on new markets.
Smaller
nuances of user behavior (which are easier to discern as patterns in a
stable market) easily get lost under the noise created by headline
technology changes and the market's apparent willingness to slaughter and
discard once-loved former industry leaders.
Whatever the root causes
- the result of these "hidden segments" is that nearly all new
enterprise SSD hardware products fail to achieve more than a
small fraction of the business potential imagined by their creators and
investors - simply because they fail to satisfy these alternative
(customer-based) views of viable SSD adoption reality.
This new article will attempt to outline some key segmentation
factors in the rackmount SSD market which appear to have been mostly
overlooked, underrated or neglected as explicit segments -
and which I think deserve more attention, analysis and action by the SSD
industry to create a more efficient market which works better for all
stakeholders.
the segment multiplication factor of software
As little as 5 years ago (in 2009) the enterprise SSD software market barely existed.
Since that time it has grown in
complexity, aspirations and strategic significance. To the degree where in an
earlier article
I said - "The winners in SSD software could be as important for
infrastructure as Microsoft was for PCs, or Oracle was for databases, or Google
was for search."
But the rush to fill many different perceived market vacuums (different in
architecture, different in roadmap assumptions and different in analysis of
the core problems to be solved) has been a confusing mess - which will look
more tangled in the next few years rather than simpler.
As
a result - the segment multiplying factors related to SSD software go way
beyond such simplistic questions as:-
- what's the OS, or
- what are the main apps which the SSD will work with?
There was a
hint of the shape of things in my 2010 article -
a new way of looking
at the Enterprise SSD market - in which I pointed out the simplifying
advantages of dividing all enterprise SSDs into 2 parts:- Legacy and New
Dynasty.
And that's a convenient place to start when looking for
possible SSD software segmentation categories.
But if you thought
that x2 would be an adequate segment multiplier - then think again.
In
the diversity of products we see today there are already many subsegments
within each of these 2 top layers.
Let's look at Legacy software
It
sounds simple enough. The SSD software which enables a given SSD to work in a
world which was originally designed for hard drives. This happened in waves over
a 10 year period.
pre 2007
- fit into the system by looking like an enterprise hard drive (or rack) -
but one which runs a lot faster.
2007 onwards
- As array vendors adopted MLC instead of SLC - endurance became a problem.
Do something in the software to mitigate endurance and report status.
2009 onwards
- make it easier to fit into the system by automatically finding data hot
spots and caching them (a toy sketch of this idea appears below).
- make it easier to fit into the system by tiering as well as caching (and
adapting to customers' preferred virtualization schemes).
- add rudimentary high availability support to any of the above - when lack
of HA looks like it might slow down sales.
- add management and efficiency features to satisfy customers who are using
a lot of the products which you introduced before.
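Purely as illustration of the hot-spot idea mentioned above - and emphatically not any vendor's actual caching engine - here's a minimal sketch in Python: count accesses per block over a window, then keep the most frequently hit blocks resident on the flash tier. The window size, flash capacity and block numbering are all invented for this example.

```python
from collections import Counter

class HotSpotPromoter:
    """Toy model of hot-spot caching: count block accesses over a window,
    then keep the most frequently hit blocks resident on flash.
    All sizes and block numbers here are hypothetical."""

    def __init__(self, flash_capacity_blocks=1024, window=100_000):
        self.capacity = flash_capacity_blocks
        self.window = window
        self.counts = Counter()
        self.seen = 0
        self.flash_resident = set()      # blocks currently cached on flash

    def record_io(self, block):
        self.counts[block] += 1
        self.seen += 1
        if self.seen >= self.window:
            self._rebalance()

    def _rebalance(self):
        # promote the hottest blocks; everything else is served from disk
        self.flash_resident = {b for b, _ in self.counts.most_common(self.capacity)}
        self.counts.clear()
        self.seen = 0

    def serviced_from_flash(self, block):
        return block in self.flash_resident
```

Real caching and tiering software is of course far more elaborate (write handling, admission policies, awareness of the virtualization layer) - the sketch only shows where this kind of intelligence sits.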
the present day
Even
having all the features above isn't a good enough feature set any more.
Some
apps and some combinations of environments are much more popular in the market
than others. These favored legacy apps are getting revisited by newer
generations of SSD software - which use their knowledge of the legacy apps to do
SSD-specific improvements.
And the legacy apps are acquiring SSD
intelligence too.
Can we still call them legacy apps? (VDI, databases
etc.)
Yes. But there are different degrees of conformance to what was
being done before SSDs and how they work, and how far the SSD software can
reach and the consequences of changes.
Not all users will be
comfortable with switching on all the new features. Because it's a way they can
be suckered away from what they regarded as "safe" platform choices
with known legacy suppliers - and drift towards new dependencies and lock-in
with the new suppliers.
Let's look at New Dynasty software
It
sounds simple enough. New Dynasty is a software environment and architecture
which is planned at the outset to operate with SSDs.
But there are
many ways of doing this even if you start out with the idea of only
looking at standard servers and standard SSDs. Because adding SSD software
into the mix brings its own multiplication factors.
What does a server
node look like?
How is it clustered or scaled?
Is the server
node part of the storage?
Is the server node a building block for all
the storage?
Where should the storage live?
How should it
be tiered?
And
BTW - we're now more than willing to
tier memory too.
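To make the "multiplication" point concrete, here's a hypothetical back-of-the-envelope sketch: treat each of the questions above as a segmentation axis with a handful of possible answers and count the combinations. The axes and answer values below are illustrative only - not an authoritative taxonomy.

```python
from itertools import product

# Hypothetical segmentation axes - illustrative answer sets, not a standard model.
axes = {
    "top layer":        ["Legacy", "New Dynasty"],
    "node role":        ["compute only", "compute + storage", "storage building block"],
    "storage location": ["in the server", "shared array", "both (tiered)"],
    "scaling model":    ["scale-up", "scale-out cluster"],
    "memory tiering":   ["no", "yes"],
}

combinations = list(product(*axes.values()))
print(len(combinations))    # 2 x 3 x 3 x 2 x 2 = 72 candidate sub-segments from just 5 axes
```

Even before adding software platform lock-in, RAS preferences or customer experience levels, a few innocent looking architecture questions already multiply 2 top-level categories into dozens of sub-segments.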
There
are lots of different architectures being offered under the guise of SSD
software or "software defined storage".
And what about
switching costs?
The SSD software market has been running long enough
now for some vendors to have become established platforms in some types of
application silos.
The
biggest of these should be regarded as segments too - because you're not easily
going to replace them with a generic something else.
Legacy
interacting with New Dynasty
I'm just throwing this in as a
reminder that in many organizations - users will be using more than one kind of
SSD architecture at the same time to tactically solve different problems.
That
in itself can lead to new customer needs, and new product types - for example
emulating entire legacy SAN environments within a new dynasty SDS array...
the segment multiplication factor of customer experience and confidence with SSDs
Here's
a simple fact.
If a user has been deploying SSDs in their datacenter
for 5 to 10 years already - then they will have a different set of ideas about
product feature preferences and vendor profile preferences to another user who
is looking at SSDs for the first time - or who has less experience and less
confidence about SSDs.
That's even if both users are in the same user
segment - such as publishing, broadcast, health services, industrial component
manufacturing.
But longevity and familiarity segment in many
different ways too.
An extreme case is
dark matter enterprise
users - such as web entities, and
cloud infrastructure
providers. These are not going to buy the same kinds of SSDs as the banks,
pharma companies etc.
The dark matter users don't want to pay for your
wraparound software bundles. They've probably got their own favored apps, and
they have their own flash APIs too. You'll have to modify your SSDs to sell
to them. Are they worth it?
If a single such customer has the ability
to buy a million SSDs or 10,000 AFA / SDS racks - then they're already bigger -
as a market opportunity - than many of the other segments which product
marketers traditionally place in their spreadsheets. So I would argue yes.
Because even if you choose not to supply them with your products -
then the consequences of the products they do buy - and the vendors they do
buy from - may come back and compete with you in other markets.
the
segment multiplication factor of RAS
Let's look at a 1U SSD.
It's got good capacity, and good speeds.
Vendor X - offers this SSD as
a lightly populated base unit. The user can add capacity by plugging in 2.5"
modules. If anything fails - the module can be replaced.
Vendor X's
ideal customer is the small user who is new to SSDs. He regards the cost of a
module as high enough already.
Vendor Y - offers his 1U rackmount as a
sealed unit. When it fails you replace the whole thing.
Vendor Y's
ideal customer needs hundreds or thousands of these SSD racks. Ideally at the
lowest cost possible.
The product offerings from these 2 vendors will
be accompanied by entirely different assumptions about the fault tolerance
architecture and serviceability and upgrade procedures.
They're
extreme cases. But actually there are many different opportunities to segment
RAS at each end of the user spectrum and between these hard limits too.
Why
should a customer pay for a flexible module upgrade option they don't need?
Why
should a customer pay for your version of
RAID inside the box, or
your version of cloning boxes - when they have another way of doing it? (Which
for them is cheaper or better - because it's under their own control.)
Features
which are regarded as good by vendor X are bad for vendor Y's market - because
they simply add to the cost.
the segment multiplication factor of
design stability
In my article -
playing the enterprise SSD box riddle game - I parodied how frustrating the user
experience can be - when it comes to anticipating successive product generations
and feature sets of rackmount SSDs - even when narrowing the scope down to
a single supplier.
Marketers in the enterprise flash array market like
to believe that when they introduce new features into successive product
generations these will be regarded as having some value by potential customers.
But that's not always true.
In
fact the opposite can be the case - particularly when the customers are
integrators and systems builders in the embedded industrial markets.
Just as having a stable BOM
to reduce the cost of requalification and redesign is a desirable service
offering for drive level industrial SSDs
- so too can a stable, no frills design be desirable for industrial systems
users.
Having an absolute minimum of integrated software can be seen as
a good thing - by industrial users - because they don't want to get involved in
supporting software features which they don't use, and they don't want version
related features in successive products to negatively impact performance or
other interactions.
One vendor I spoke to summed it up neatly.
Charles Tsai at Innodisk said - when talking about his company's FlexiArray
(a 1U rackmount SSD for embedded markets) - "The customer really just wants
a faster SSD with more capacity."
They're buying a rack - because they can't get the features they want
in a single SSD drive or module. But for this kind of customer the rack is
simply viewed as a component.
When the same customer comes back next
month, next year or in several years - they want to be assured that the new rack
will behave in the same way. It may have more capacity, or be faster, or be
cheaper - but the new model must work with exactly the same software as the
first model - and it mustn't introduce any new software support factors (or
power supply costs either).
more segment multipliers - for rackmount SSDs?
No doubt there are more which some of you know
about, and there are some more which I may add later.
This article
started in response to some conversations I had with readers when I realized
I hadn't written much about it.
The bottom line is this.
Experienced
marketers - who have been involved in the enterprise and mission critical
rackmount SSD markets for a long time - are regularly discovering from
customer anecdotes that there are many segments for SSD arrays which are not
satisfied by standard enterprise market models.
In fact many of the
missing market segments, use cases and product classifications don't even have
standardized names.
There are many factors behind these missing
segments. But the most important cause is that users understand enough about
what SSDs could do for them to recognize that they need a different type of SSD
solution to economically satisfy their needs - compared to the generic "enterprise
SSD" which is offered by most vendors.
On the downside - vendors
who fail to be sensitive to the growing divergence of focused SSD needs in the
enterprise will find business conditions increasingly difficult due to the
mismatch between the wide span of genuine customer needs compared to the
limited scope of their own product solution offerings.
On the upside -
traditional market models for rackmount SSDs - extrapolated from the pre SSD era
- understate the potential for the SSD array market - both in terms of
applications and revenue.
Here are some earlier articles on related enterprise flash array
architecture and market models:-
- could enterprise SSDs
become a $10 Billion Market? - (2003) - This was a revelatory market
size estimate which extrapolated from existing technical concepts and
gave many SSD company founders the confidence to imagine a much bigger
future for their market.
- this way to the
petabyte SSD (2010) - This visionary article for the 2016-19 timeframe
enumerated a new user value proposition (along with associated technologies
and new product types) which it claimed would enable solid state storage to
move into a new role of
displacing hard
drives from datacenter arrays - despite lower HDD prices.
SSD
company founders later told me that a key part of the analysis which affected
their thinking wasn't the flash technology assumptions (some of which were too
pessimistic) but my market boundary analysis assumption that the transition
to all solid state arrays would happen even if hard drive costs shrank to
zero dollars.
- Market Trends in
the Rackmount SSD Market - (2009) - analysis included recognition that
different users would sustainably support markedly different designs and
implementations of SSD racks for identical applications - due to users having
differing interpretations, tolerance and needs related to SSD technology and
differing user perceptions of SSD market risk factors
- what do I
need to know about any new rackmount SSD? - (2012) - a plea to vendors to
signpost their marketing communications about new products more effectively so
that readers didn't have to waste so much time filtering themselves in or out
of follow up reading.
- an introduction to
enterprise SSD silos - (2012) - 7 ways to classify where all SSDs will fit
in the pure SSD datacenter by form and function. This is a top level
architecture and applications usage based segmentation model.
- The big market
impact of SSD dark matter - (2012) - This article provided a narrative for
vendors to think about better ways of communicating with a new type of customer
segment which had been emerging in the lead up to publication - SSD
superusers - many of which are the biggest and most technically savvy users of
SSDs worldwide.
Are you sure this is the entry
level model?
Yup! - It's designed to fill a big gap in the market.
The main disadvantage of
arrays of small architecture SSDs used to be intrinsically lower utilization
efficiency.
This was because the controller is unaware of what is happening in
flash chips in other SSDs which are outside its own reach but within the same
array. An emerging trend in software pioneered by webscale entities (such as
Baidu) has been to switch off some of the flash management done in the SSD's
controller and, by using custom application-specific APIs, to manage some key
functions from the application server. These techniques can reduce the original
gaps in efficiency between small and large controller arrays.
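As a purely hypothetical sketch of that idea (not Baidu's or any vendor's actual API), host-level flash management might look something like this: the application server keeps a global view of free space across every SSD in the array and decides data placement itself, instead of leaving each drive's controller to manage its flash in isolation. Device names, sizes and the placement policy are all invented for illustration.

```python
class HostManagedArray:
    """Toy illustration of host-managed flash: the application server
    tracks free space across all SSDs in the array and chooses where
    each write lands. Names, sizes and policy are hypothetical."""

    def __init__(self, devices):
        # devices: mapping of device name -> capacity in blocks
        self.free = dict(devices)
        self.placement = {}              # object id -> device name

    def write(self, obj_id, blocks):
        # place the object on the drive with the most free space, so no
        # single SSD fills up (and wears out) far ahead of the others
        target = max(self.free, key=self.free.get)
        if self.free[target] < blocks:
            raise RuntimeError("array is full")
        self.free[target] -= blocks
        self.placement[obj_id] = target
        return target


array = HostManagedArray({"ssd0": 1_000_000, "ssd1": 1_000_000})
print(array.write("object-42", 256))     # placed on whichever drive has most free space
```

The real systems alluded to above go much further (managing garbage collection, overprovisioning and wear from the host), but the sketch shows the shift in responsibility from the drive's controller to the application server.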
90% of the enterprise SSD companies which you know have no good reasons to survive
Every year I learn two
important new ideas about SSDs. But every year I also
have to remember to forget or discard one much-cherished piece of classic wisdom
- which was vital to know before - but which is no longer useful, valid or true.
On a particularly bad SSD
data day you may be inclined to ask yourself:-
- what do I really know about the SSD market?
- what are my safe assumptions?
- in the event of major conflicts of opinion, market data and
differences of interpretation - who can I trust?
- how do I decide which way to go from here?
- and - in extremis...
Is it time to update my profile on
linkedin?
Can you trust SSD market data?
"At the technology
level, the systems we are building through continued evolution are not advancing
fast enough to keep up with new workloads and use cases. The reality is that
the machines we have today were architected 5 years ago, and ML/DL/AI uses in
business are just coming to light,
so the
industry missed a need."
From the blog -
Envisioning
Memory Centric Architecture by Robert Hormuth,
VP/Fellow and Server CTO - Dell
EMC (January 26, 2017)
after AFAs - what's the next box?
a winter's tale of SSD market influences
controllernomics and risk reward with big memory flash tiered as RAM
where are we heading with memory intensive systems and software?
re - Decloaking hidden segments in the enterprise for rackmount SSDs
comments from Woody Hutsell, IBM (July 2014)
Zsolt
As usual, your post is very
insightful and in a few words captures much of the market inflection points. I
also liked your brief history.
Here is some of how I analyze this
market from what I think is a customer centric point of view:
1.
Is the customer or the architecture server centric or storage network centric?
Is reliability driven by the application through scale-out or by high
reliability shared storage?
Some shared storage devices play in both
spaces, but rarely do DAS/PCI devices play in the shared storage space.
2.
Does the customer prefer storage services in the server/application layer or in
the storage layer?
This could be driven by vendor lock-in concerns,
service costs, performance or reliability.
3. Is the
application/customer latency sensitive, IOPS sensitive or bandwidth sensitive?
4. Is the customer more performance sensitive, cost sensitive or
risk sensitive?
Anyway, mesh all this together and you realize as
you accurately conclude, that one product will not meet all requirements and
that a full product line needs to hit different feature points.
Looking back on the past 15
years or so of the enterprise SSD market you could say that SSD marketers had
it easy.
As long as each new product was faster, denser, cheaper and more
reliable than the one before - and came attached with the right interfaces -
their job was mostly done - because that satisfied the needs of the market.
what do enterprise SSD users want?
In the modern era of SSDs -
the customer has received their education about what an SSD is - and what it can
do - from many sources.
So when they talk to a vendor - the customer
says - don't tell me about SSDs.
Tell me instead how you fit into my
idea of the SSDs I'm looking for - and why I should buy from you - instead of
all these others.
re-imagining the enterprise customer
there's an industry-wide consensus that DWPD ratings should somehow map into
recognizable application zones and price bands
what's the state of DWPD?
I often say to enterprise
SSD marketers - it's easy to create a list of the top 10 OEMs or user sites
which already use SSDs - but no one's got more than a small fraction of the list
of future SSD user heavyweights - because they don't exist yet - or if they do -
they're in stealth mode. They can see us - however.
The big market impact of SSD dark matter
One of the most potentially
rewarding market challenges which SSD companies are grappling with right now is
- how to make enterprise solid state storage attractive to users who aren't
worried about their hard drive performance and don't even think they need SSDs.
better thinking inside the box
"In the past,
enterprise hardware has had a pretty hands-off relationship with the vendor that
sells it and the development team that builds it once it's been sold. The result
is that systems evolve slowly, and must be built for the general case, with
little understanding of the actual workloads that run on them."
Andy Warfield, cofounder and CTO - Coho Data - in his blog -
Facebook as a file system - a web scale case study (October 9, 2014)
the problem of being
perceived as a "new" supplier in an old suppliers' market is something
I discussed with Skyera's founder in 2013
scary Skyera