
Efficiency as internecine SSD competitive advantage

by Zsolt Kerekes, editor - October 2012
There's a new concept which has been coming up in a lot of the conversations I've been having about SSDs in recent weeks. It's a simple one word summary which neatly bundles up a bunch of technical and business concepts.

If you start looking at SSD companies from this angle it gives you a new empirical way in which you can spot likely winners and survivors in the SSD market.

It can also give you a fine grain reassessment of companies which - by other external measures (such as quarterly results or being listed in the Top SSD Companies List) - already appear to be doing well.

But appearances are deceptive.

A simple metric can probe beneath these external SSD business veneers - and subdivide (even the most attractive looking set of) today's SSD companies into 2 further classes - which are laden with portents for their future business outlooks.

I've found this new way of looking at SSD companies equally valuable whether I'm talking to someone who runs an SSD investment fund in a bank, or is the CEO, VP marketing, or CTO in an SSD company, or someone who is a seriously interested designer and user of SSDs.

And another nice thing is - it applies to all markets where SSDs are used:- consumer, industrial, military, enterprise, and it also applies to future SSD markets - SSD dark matter - which are today unknowable.

Now - if you're a regular reader of my SSD articles - you might say something like this.

"Hey Mr SSDmouse - I've heard this type of preamble from you before. Didn't you already write a couple of big new SSD idea articles earlier this year in which you promised that this would be the most important single idea about SSDs that we would have to get our heads around this year? How come you're trying to pull that same old trick again?"

To which my reply has to be...

Dear Reader - thanks for reading my previous articles and tweeting and blogging about them. You're perfectly correct in what you just said. My defense is that in each of these earlier articles it seemed that what was being discussed was indeed more important than what had been discussed before. (And I didn't get too many complaints at the time.)

If the SSD market had fossilized and stayed exactly as it was 2 or 3 quarters ago - then I agree that there would be no more need for new articles of this type.

But that didn't happen. The SSD market hasn't stayed still - and the pace of developments in the SSD market in the past year has accelerated.

10 years ago, 5 years ago and even 3 years ago - you could safely coast along through the SSD currents by absorbing one big technology idea or one new SSD business dynamic idea each year.

But don't say I didn't warn you that this not overly strenuous SSD re-education process was about to change.

As long ago as my 2011 SSD market summary - I said there had been 3 significant new business trends.

And now in 2012 (or 2013, or 2014 or 2015 if you're reading this a bit later) there are a lot more companies doing new SSD stuff.

The overall market is bigger - which means there are more SSD market segments which have each grown big enough to support their own style of innovation and distinct set of values. That's why the new SSD big idea articles seem to be happening nearly every month.

And there's another reason too. Sometimes to understand a new high level concept - you may need to first absorb and get familiar with a bunch of lower level SSD ideas - which are part of that framework.

But I promise not to write any more articles which start out by saying - this is the most important idea about SSDs which you will read this year. (Unless the publication date is December 24th.)

OK - enough of that - let's get on with it.

A new word has been creeping into nearly every email and conversation I've been having about SSDs recently - and that's - "efficiency".

SSD efficiency is a very powerful differentiator in technology - and I think it will be very important in influencing business success too.

Let's talk about efficiency in the context of technology first.

What does efficient SSD technology look like? - and why does it matter?

Suppose you're a customer looking at 2 competing 2U rackmount SSDs for an application you've got in mind. You're going to buy hundreds of these - but you've narrowed it down to these 2 suppliers.
  • The price looks about the same.
  • Both suppliers have good reputations.
  • Both suppliers are supported equally well for the type of software environment you've got.
Which one are you going to buy?

The SSD mouse comes along with his technological screwdriver, lights up a torch inside both boxes and starts looking at what's inside.

Then he says something which surprises you.

This one uses nearly twice (2x) as many memory chips (to do the same job).


This one does the same job with far fewer raw chips - and BTW the chips are a different generation and type of MLC which is much cheaper for the vendor to use.

Why should you care?

Both boxes will cost you the same - and there's not much else to choose between them.

The best choice is the product which has the better efficiency. This efficiency comes from design architecture. (I'll say more about the nitty gritty details later in this article.)

Why is the most efficient SSD architecture your best choice? - given that either product works just as well and is being quoted to you at the same price...

You can infer that the vendor with the most efficient architecture
  • is much better advanced in their understanding of application specific SSDs
  • can make more money at the same price point as less efficient competitors - and therefore is less likely to need bucketfuls of VC funding - and is more likely to stay in business as a stable supplier (even if the company and its product are acquired by someone else).
  • offers a design which will use less electrical power - which means, if you use a lot of them, you'll see lower running costs and better reliability (because most of that wasted power just turns into waste heat).
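To put rough numbers on that last point, here's a minimal sketch of how a design which uses twice the chips becomes a running-cost penalty over 3 years. Every figure here is a hypothetical assumption (per-chip power, electricity price, cooling overhead, fleet size) chosen purely for illustration:

```python
# Rough sketch (all figures hypothetical) of how wasted power in a less
# efficient SSD design compounds into running costs at fleet scale.

WATTS_PER_CHIP = 0.08        # assumed active power per flash package
KWH_PRICE = 0.12             # assumed $ per kWh, before cooling
COOLING_FACTOR = 2.0         # assume each watt of heat costs ~1x again to remove
HOURS_3_YEARS = 3 * 365 * 24

def fleet_power_cost(chips_per_box: int, boxes: int) -> float:
    """3-year electricity + cooling cost for a fleet of rackmount SSDs."""
    watts = chips_per_box * WATTS_PER_CHIP * boxes
    kwh = watts * HOURS_3_YEARS / 1000
    return kwh * KWH_PRICE * COOLING_FACTOR

efficient = fleet_power_cost(chips_per_box=256, boxes=300)
inefficient = fleet_power_cost(chips_per_box=512, boxes=300)  # ~2x the chips
print(f"efficient design:   ${efficient:,.0f}")
print(f"inefficient design: ${inefficient:,.0f}")
print(f"3-year penalty:     ${inefficient - efficient:,.0f}")
```

With these assumed inputs the penalty scales linearly with the extra chip count - which is why the difference only becomes visible (and painful) when you buy hundreds of boxes rather than one.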
How is it that one SSD system can be so much more efficient at its use of raw chips than another?

There are many reasons (and these have been discussed in earlier SSD articles).

There are many different ways to design an SSD and not everyone agrees which is best.

In earlier phases of the market these differences in approach didn't matter so much - as long as they could deliver an SSD that could meet a performance, price and density goal. But today - the SSD market is maturing to a new level where being good at what you do may not be enough if another SSD competitor looks at the same market niche and puts their mind to doing it better.

Here are the main factors which account for the differences in efficiency.
  • Raw flash capacity can be leveraged in various ways to provide performance or reliability or both. The classical SSD architecture case was discussed in my article - the SSD capacity iceberg.
  • Adaptive R/W and DSP flash techniques enable efficiencies in both raw memory use (when using the same memory generation) and also introduce the possibility of using newer generations of memory which have intrinsically better efficiency at the raw chip level - which are not feasible using pre-DSP classical designs.
  • SSD software can introduce many additional incremental efficiencies from factors such as adaptive intelligence flow symmetry and even better management of fundamentals like endurance.

    Good SSD software can change the efficiency and efficacy of overprovisioning, RAID-like overhead, the utilization and attrition rate of raw flash blocks - and also impact the cost budget allocated to SSD processors and controllers.
Each one of these factors can contribute a raw efficiency factor which ranges from about 5% to over 40%. When you add up several of those little percentages - you start to see big differences.
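As a back-of-envelope illustration of how those little percentages add up - the individual savings below are hypothetical, picked from the 5% to 40% range mentioned above, and they compound multiplicatively:

```python
# Sketch (hypothetical figures) of how several modest per-factor savings
# in raw flash usage compound into a large overall difference.

def chips_needed(baseline_chips: int, savings: list[float]) -> float:
    """Apply each fractional saving multiplicatively to the chip count."""
    chips = float(baseline_chips)
    for s in savings:
        chips *= (1.0 - s)
    return chips

baseline = 1000  # assumed raw chip count of a less efficient design

# Illustrative savings: trimmed overprovisioning, adaptive R/W / DSP
# techniques, smarter SSD software.
savings = [0.15, 0.25, 0.10]

result = chips_needed(baseline, savings)
print(f"baseline design:  {baseline} chips")
print(f"efficient design: {result:.0f} chips "
      f"({100 * (1 - result / baseline):.0f}% fewer)")
```

Three individually modest savings of 15%, 25% and 10% combine here into a design needing roughly 43% fewer raw chips - the kind of gap the SSD mouse's torch reveals inside the two boxes.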

In the hypothetical comparison of 2 rackmount SSDs - the example I'm using is obviously the enterprise SSD market. That's the market where you can see the biggest differences between competing products.

However, the only one of these bullet points which you would leave out for the industrial and consumer SSD markets is the big versus small SSD controller architecture. That's because - due to physical size constraints and packaging technologies - it's not yet feasible to apply large controller architecture within small form factor SSD components. That might change in another 3 to 5 years - but not soon. Nevertheless - the other raw efficiency ingredients still add up to make efficiency a can't-ignore concept.

In consumer SSDs - the efficiency differences add up to enable lower raw cost. (As to what color the SSD should be - you'll still have to consult the shoe event horizon marketers.)

In industrial SSDs - the efficiency differences add up to enable improvements in dimensions like - the smallest viable form factor, cost, power consumption and reliability.

The idea that architectural efficiency is a significant technological advantage isn't new to top ranking SSD management.

Many companies have known that this is something which gives them an edge for years. What's new is that recently several new factors have entered the market which can take these efficiencies to a new overwhelmingly hard to ignore level.

That covers what I want to say here today on the subject of SSD design architecture efficiency. But there's a related efficiency theme I'd like to bring in here too. That's marketing efficiency.

What do I mean by marketing efficiency?

It's closely related to classic business school meanings.

SSD vendors have to identify more accurately the market segments which they operate in - within a more complex and sophisticated SSD market landscape. Vendors have to design their marketing messages around a tighter set of value propositions and develop products which are each better optimized around a smaller set of applications.

The idea I'm getting at here is that in the past a small set of SSD designs could compete across a wide span of applications.

But now that the SSD market has grown bigger - there are clear differences emerging about where SSDs can be used and for what type of application each type of design is best suited. I outlined the different use cases and segments for enterprise SSDs in my SSD silos article.

In the past - having a small degree of product overkill wasn't a marketing handicap - because customers wouldn't complain if they were getting more than they needed once they decided that an SSD solution was in fact affordable.

In the much more competitive SSD market of today it's not unusual for customers to have a better idea of the range of products on offer than some of the vendors who are pitching to them. (And a better idea of the SSD market than some past SSD CEOs too.)

The customer isn't going to tell a vendor that the reason their SSD isn't in their purchase order is because 30% of the cost is going into overkill features and performance which the customer doesn't need. (This is an example of marketing inefficiency. The short term answer for the vendor is to find customers who do need and value the additional features. The long term answer is to match the feature set of future products more closely to what customers need.)

And the customer isn't going to tell a supplier that they know (or can guess) that the vendor's product uses 40% more chips than it needs to - to perform what they do need. (And the customer is worried about the risks to them of less than optimum future pricing, or what happens if the vendor goes bust, and the extra cost and heat from that electrical power which the design shouldn't really have.)

Vendors have to figure these things out for themselves. Then take action to align themselves better with market expectations.


In the next few years efficiency as a concept at both the SSD architecture and marketing level will become a headline subject which will make and break fortunes.
update and clarification to my SSD Efficiency article
Editor:- October 29, 2012 - The above article arose out of conversations I'd been having with business leaders in trend setting flash SSD companies. Some of the people I talk to - and their companies - have designed the world's best known SSD systems and controllers. My theme (as often on these pages) was "SSD thought leadership" - and not an entry level introduction to flash SSD design.

Maybe I should have made that context clearer in my introduction.

One regular correspondent - Robert Young, whose blog - Dr. Codd Was Right - sometimes visits the topic of flash SSDs from a database angle - may have thought I'm starting to lose it - because he politely suggested that "the elephant on the coffee table in this October editorial was - no mention of larger NAND chip sizes, and resultant block/erase block size on efficiency..."

Here's what I said.

"The point of my article was how SSD makers are different in the efficiency of their system designs - even when they have access to exactly the same pool of chips. So larger NAND chip sizes etc are irrelevant.

"What's important at the systems level is that some companies can build the same usable capacity, performance and reliability for the user's app - even when using 20, 30, 40% and even 50% fewer chips from the same memory generation as their SSD competitors who have less efficient architectures and don't have the same reliability IP or market knowledge."


"Efficiency is important. As a rough approximation, a server in your datacenter costs as much to power and cool over 3 years as it does to buy up front. It is important to get every ounce of utility that you can out of it while it is in production."
Andy Warfield, cofounder and CTO - Coho Data - in his blog - Facebook as a file system - a web scale case study (October 9, 2014)

The cloud adapted memory systems concept raises the question:- what proportion of the raw semiconductor memory capacity ought to be usable as storage? (SSD) or usable as memory? (RAM - as in "random access memory" which operates with the software like DRAM but which could be implemented by other technologies).
after AFAs - what's next?


related SSD articles

the SSD Heresies

SSD controllers & IP

this way to the petabyte SSD

Surviving SSD sudden power loss

what do enterprise SSD users want?

Where are we now with SSD software?

how fast can your SSD run backwards?

The big market impact of SSD dark matter

flash SSD capacity - the iceberg syndrome

Adaptive R/W flash care & DSP ECC for SSDs

Why size matters in SSD controller architecture

factors which influence and limit flash SSD performance

7 SSD types are all you need in the solid state datacenter


"Many All-Flash Array vendors propose deduplication and compression as the work-around for the endurance problem of flash based storage systems. When these methods are implemented "in-line" so they occur before new or updated data is initially written to the flash they can reduce or eliminate some of the data to be written to the flash module.

The problem is that effectiveness of these methods is not consistent across various data center workloads. Deduplication can directly impact performance and creates its own set of writes as hash tables are updated. Overall these techniques are certainly worth implementing in All-Flash systems but by themselves they are not enough."
George Crump, Founder - Storage Switzerland in his blog - The Cost of Over-Provisioning Flash Arrays (February 2013)
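To illustrate the mechanism Crump describes - deduplication avoiding some data writes while generating its own stream of metadata updates - here is a minimal inline dedup sketch. The class, counters and block sizes are purely illustrative, not any vendor's implementation:

```python
# Minimal sketch of inline block deduplication: duplicate blocks avoid a
# data write, but every write still touches the fingerprint hash table.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}        # fingerprint -> block data
        self.data_writes = 0    # writes of actual block data to flash
        self.meta_updates = 0   # hash-table / reference-count updates

    def write(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.blocks:
            self.blocks[fp] = block   # new unique block: real data write
            self.data_writes += 1
        self.meta_updates += 1        # every write updates the hash table
        return fp

store = DedupStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]:
    store.write(block)
print(store.data_writes, store.meta_updates)  # prints: 2 4
```

Four incoming blocks produce only 2 data writes - but 4 metadata updates, which is exactly the "creates its own set of writes" point in the quote above, and why the effectiveness depends so heavily on how much duplication the workload actually contains.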


flash SSD capacity - the iceberg syndrome
Have you ever wondered how the amount of flash inside a flash SSD compares to the capacity shown on the invoice?

What you see isn't always what you get.
There can be huge variations in different designs as vendors leverage invisible internal capacity to tweak key performance and reliability parameters.



"One petabyte of enterprise SSD could replace 10 to 50 petabytes of raw HDD storage in the enterprise - and still get all the apps running faster."
the enterprise SSD software event horizon


"When 90% of the system cost is flash you really need to understand the internal workings of flash (to drive the cost down to a new lowest level)."
Skyera in SSD news - March 27, 2014

"In the first 1,000 arrays shipped our partners delivered an astonishing $300 million in data efficiency savings to their customers"
Permabit in SSD news - September 2013

...there are additional techniques, besides compression and deduplication, that can reduce space usage significantly and thereby increase the effective capacity.

One example is zero-block pruning - the system does not store blocks that are filled with zeroes.

This technique can be seen as an extreme case of either compression or deduplication. Also, some systems generalize this technique to avoid storing blocks that are filled with any repetitive byte pattern.
Umesh Maheshwari, Co-Founder and CTO Nimble Storage - in his blog - Understanding Storage Capacity (January 7, 2016)
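The zero-block pruning idea, and its generalization to any repeated byte pattern, fits in a few lines. This is an illustrative sketch of the detection test only, not any product's code:

```python
# Sketch of the detection test behind zero-block pruning and its
# generalization: a block made of one repeated byte can be recorded as a
# tiny pattern descriptor instead of being stored in full.

def prunable(block: bytes) -> bool:
    """True if the block is a single repeated byte (e.g. all zeroes)."""
    return len(block) > 0 and block == block[:1] * len(block)

BLOCK = 4096
assert prunable(bytes(BLOCK))          # the classic all-zero block
assert prunable(b"\xff" * BLOCK)       # generalized: any repeated byte
assert not prunable(b"\x00" * 100 + b"\x01" + b"\x00" * (BLOCK - 101))
print("pruning checks passed")
```

As the quote notes, this can be viewed as an extreme case of either compression or deduplication - the whole block collapses to a one-byte pattern plus a length.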

Now we're seeing new trends in pricing flash arrays which don't even pretend that you can analyze and predict the benefits using technical models.
Exiting the Astrological Age of Enterprise SSD Pricing