
what's in a number?

a new shorthand to describe any SSD accelerated server

by Zsolt Kerekes, editor - March 5, 2014
In this article I propose a new shorthand terminology which can usefully describe any enterprise server in an SSD architecture - from an SSD software latency envelope point of view - by a single rating number - from 0 to 7.

The need for a precise but efficient way to describe the performance of any server (in a cluster, array, farm or cloud) from an SSD software operational context - has become clear in recent conversations I've had about clusters and groups of SSD enhanced servers - when trying to define briefly and concisely what the exact minimum characteristics of each server have to be to support various software defined configurations.

When you're having a conversation - about complex SSD configurations on the web - or in an email - or in a voice conference - it's important to be sure that everyone has the same mental picture of what's going on.

With existing terminology - important factors become cumbersome to express when you build up layers of these concepts. Words get in the way. Pictures can help. But they become messy too - as you scale up the number of servers you're talking about.

This article will propose a simple scheme which enables the essential characteristics of an SSD enhanced server to be communicated in a simple manner. It's aimed at architects who need to specify the assumptions they're making about servers in the base sets of new system configurations. Hopefully it will be useful to marketers and users too - when they have dialogs about the entry level configuration assumptions for new software products and projects.

The SSD market is good at this.

We've already invented lots of SSD jargon.

Without the jargon - it's impossible to build on new concepts.

The new server language is easy and doesn't require any knowledge about flash technology or controllers.

It has 2 variations:-
  • a lean description - which says whether a function is available or not, and
  • a rich description - which includes more detail - which may be needed in complex high availability groups
The language also extends to include boot options and fabrics.

The starting principle is to condense the essential characteristics of any server into a number - which is extracted from a matrix of key characteristics.

The example below gives you the basic idea of the lean model.

There are 3 main columns which define the main types of SSDs which may or may not be in the system, differentiated by latency and market characteristics:-
  • memory channel SSD - low latency fast flash or other non-volatile RAM which fits into a DDR3, DDR4 or HMC socket and uses the RAM interface for non-DRAM memory. (This includes storage class memories such as Memory Channel Storage, Optane and Memory1 but excludes flash backed DRAMs - aka hybrid DIMMs.)
  • PCIe SSDs - any type of fast PCIe SSD. The fabric and boot options go in a different place.
  • SAS SSDs and SATA SSDs - these are both in the same column - because from the apps architecture point of view - they have similar connections and latency. Differences in porting and HA options will go into a different part of the label.
SSDserver type - SSDs inside this server? (0 = no, 1 = yes)

  memory channel   PCIe   SAS / SATA   lean rating
        0            0        0             0
        0            0        1             1
        0            1        0             2
        0            1        1             3
        1            0        0             4
        1            0        1             5
        1            1        0             6
        1            1        1             7
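In code terms the lean rating is just the 3-bit binary number read across the three columns, with memory channel as the most significant bit. A minimal sketch in Python (the function name is mine, not part of the scheme):

```python
def lean_rating(memory_channel: bool, pcie: bool, sata_sas: bool) -> int:
    """Lean SSDserver rating: a 3-bit number in which the memory
    channel column is the most significant bit."""
    return (memory_channel << 2) | (pcie << 1) | int(sata_sas)

# a server with a PCIe SSD plus SATA/SAS SSDs rates 3
print(lean_rating(False, True, True))  # 3
# a server with only memory channel SSDs rates 4
print(lean_rating(True, False, False))  # 4
```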
what do these lean SSDserver numbers mean?

0 - This server has no SSDs installed in the usual places. It may only have a hard drive, or may boot from the network or another (slower) type of SSD - like USB. A type "0" server can still play a part in some HA SSD configurations, as we'll see later.

1 - This server has SATA or SAS SSDs installed. The software architect can depend on an entry level class of apps performance.

2 - This server has PCIe SSDs installed. From this part of the description we don't know what it boots from. But the software architect can rely on a PCIe SSD type of performance.

3 - This server has both PCIe SSD and SATA/SAS SSD too. It may be that the software architect is specifying this option because they can tier between these different types and price bands of SSD. Or it may be that the SATA/SAS SSDs are required for boot.

4 - This server only has memory channel SSDs installed. We might guess that it boots from the network or a hard drive. We can infer that this server is aimed at high performance.

5 - This server has memory channel and SATA/SAS SSD defined. At this stage we may guess that the different SSDs are tiered, or maybe the slower SSD is simply there for boot and housekeeping.

6 - This server has memory channel and PCIe SSD defined. At this stage we may infer that the SSDs are tiered. It's possible that the PCIe SSDs are also part of a fabric or HA scheme. We'll confirm that in the next part of the identification scheme.

7 - This server has memory channel and PCIe SSD and SATA/SAS defined. Although this looks like an unlikely configuration - it may be that neither the memory channel nor the PCIe SSDs are assumed to be bootable.
adding more details

The lean rating above - 0 to 7 - is just one part of the picture.

Generally - a higher number means - more assets installed, higher cost and more capability.

On the other hand - if your software works in a lowly rated SSDserver number - that means it will be affordable by more user budgets.

future-proofing the matrix

Suppose future computer architects create an entirely new type of bus or socket into which a different type of SSD can be installed - which is much faster say - than memory channel - and which has markedly different characteristics - what can you do?

The answer might be to add another column to the left - which means the numbers would be in the range from 0 to F (hex), instead of 0 to 7.

That makes the numbering scheme backwards compatible too.
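That hex extension can be sketched as follows - new columns are added at the most significant end, so existing 3-column ratings keep their values when the new bit is 0 (a hypothetical illustration; the ordering and function name are mine):

```python
def rating_hex(columns: list[bool]) -> str:
    """Lean rating as a single hex digit. Columns are ordered
    fastest-first, so adding a faster column to the left leaves
    existing 3-column ratings unchanged."""
    value = 0
    for present in columns:
        value = (value << 1) | int(present)
    return format(value, "X")

# today's 3-column server with PCIe + SATA/SAS SSDs -> "3"
print(rating_hex([False, True, True]))
# a future 4-column server adding a new faster SSD type -> "B"
print(rating_hex([True, False, True, True]))
```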

What about the fabric?

In the market today - we have 4 general fabrics for SSD enhanced servers.
  • GbE - Ethernet
  • FC - Fibre-Channel
  • IB - Infiniband
  • PCIe - PCI Express
So we can simply append these letters to the server's lean number as in the following examples.

1/GbE (or, as I prefer, 1/E - since the "Gb" is assumed by the context) is a server with SAS / SATA SSD specified which uses ethernet as the fabric.

A minimum configuration for a typical software defined storage cluster - might look like this:-

Cluster(1/E, 1/E, 0/E) - is a 3 server ethernet linked cluster which includes 2 servers with SAS/SATA SSD inside plus a 3rd server with either HDD or no drives.

2/E is a server which has PCIe SSD inside - but which uses ethernet as the fabric.

2/FC is a server which has PCIe SSD inside - but which uses fibre-channel as the fabric.

You get the general idea.
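The label grammar above can be sketched in code like this (the function names are mine; the article only defines the notation itself):

```python
def server_label(lean: int, fabric: str) -> str:
    """Compose a lean SSDserver label such as '2/FC'.
    fabric is one of 'E', 'FC', 'IB', 'PCIe'."""
    assert 0 <= lean <= 7 and fabric in ("E", "FC", "IB", "PCIe")
    return f"{lean}/{fabric}"

def cluster(*labels: str) -> str:
    """Describe a linked group of servers in one line."""
    return "Cluster(" + ", ".join(labels) + ")"

# a 3 server ethernet cluster: 2 SSD servers plus 1 driveless server
print(cluster(server_label(1, "E"), server_label(1, "E"), server_label(0, "E")))
# Cluster(1/E, 1/E, 0/E)
```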

how do we describe high availability?

When it comes to specifying the assumed requirements to support the software in a high availability context - we need to be more specific about the number of devices which must be present.

That's where the "rich" version of the SSDserver description comes in.

For simplicity - we can use the same type of table to construct the numbers - but instead of using binary values in each cell we populate the matrix with the minimum number which is required by the architecture definition.

What the system architect is saying here is - you can have more - but the software won't deliver the quality of services if you have less.

Here are some examples of how to create the rich SSDserver shorthand base numbers.
SSDserver type - SSDs inside this server? (minimum number required to support the system)

  memory channel   PCIe   SAS / SATA   rich rating
        0            0        0             0
        0            0        1             1
        0            1        2            12
        0            2        0            20
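Reading the table, the rich rating is simply the three minimum counts concatenated as digits, with leading zeros dropped. A sketch, assuming single-digit counts per column (the function name is mine):

```python
def rich_rating(mc_min: int, pcie_min: int, sata_min: int) -> str:
    """Rich SSDserver rating: the minimum device counts per column
    concatenated as digits, leading zeros dropped (so 0,1,2 -> '12')."""
    digits = f"{mc_min}{pcie_min}{sata_min}".lstrip("0")
    return digits or "0"

print(rich_rating(0, 1, 2))  # 12
print(rich_rating(0, 2, 0))  # 20
```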
what do these rich SSDserver number examples mean?

0 - This server has no SSDs installed in the usual places.

1 - This server has at least 1 SATA or SAS SSD installed. If this is also part of a high availability configuration - you can infer something useful about the protection scheme from the fact that only a single SSD is attached to this server.

12 - This server has at least 1 PCIe SSD but also 2 (or more) SATA/SAS SSDs too. Is the architect saying that this server is offering some kind of simple failover within the server at the SATA/SAS drive level? That's when you need to read the detailed architecture notes.

20 - This server has at least 2 PCIe SSDs. As 1 is enough for speed - 2 is telling you that there's something else going on in this system. Maybe the dual PCIe SSDs are supporting failover - or fabric. Time to read the detailed plan.
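For completeness, splitting a combined label such as "12/FC" back into its parts might look like this (a hypothetical helper; the article doesn't define a parser):

```python
def parse_label(label: str) -> dict:
    """Split a lean or rich SSDserver label like '12/FC' into its
    rating and optional fabric parts."""
    rating, _, fabric = label.partition("/")
    return {"rating": rating, "fabric": fabric or None}

print(parse_label("12/FC"))  # {'rating': '12', 'fabric': 'FC'}
print(parse_label("3"))      # {'rating': '3', 'fabric': None}
```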

At one level - if all your SSD projects look the same - the suggestions in this article are simple and trivial.

But if you spend all day discussing the design options for new system architectures - or if you're planning an entirely new software package - and arguing about the merits and complexities of drawing the support line at different sets of minimum capability boundaries - then having a simple language on your whiteboard to describe the key SSD variations in your server boxes - is essential.

the next level?

The next level of abstraction is when you start with a population of SSDservers and start to add different types of SSD system software.

That's the reason you need to get the hardware level clear.

Because when you start to analyze the business and market permutations you can get by installing different "software defined functions" to different classes of SSDserver boxes (some of which are viable but some of which aren't) you need to be clear about the fundamentals.

As to what we'll be calling all those new arrays of software defined SSD enhanced servers - when they work together in tandem... and as to which ones merely emulate what has been before - and which ones are indeed a new way of doing data processing architecture...

Those debates are still to come.


If you've found any of these ideas interesting - then feel free to spread the word around and credit me.

I don't expect this to be the last word on this subject. Rather, I hope it may be another new beginning.

When you replace words with numbers in a systematic way which enables useful analysis - then complex "what-if" problems - become easier to talk about.

RAM cache ratios in flash SSDs
an introduction to enterprise SSD silos
how fast can your SSD run backwards?
Boundaries analysis in SSD market forecasting
Why size matters in SSD controller architecture
SSD utilization ratios and the enterprise software event horizon