
SSD testing & analyzer news

how fast can your SSD run backwards?
BOM control and the mythical "standard" SSD
razzle dazzling SSD cell care and retirement plans
factors which influence and limit flash SSD performance
controllernomics and benchmarks in flash tiered as RAM
more minds looking at new SSDs - key to quality

Editor:- August 29, 2017 - In a recent interview with Nimbus CEO Thomas Isakovich about the business thinking behind their new high capacity SSDs, he told me among other things...

"One of the benefits we get from having alternative market outlets for our ExaDrive platform (the new no write limits SAS SSD) is that more minds take a close look at the design and operation (these being the flash drive partners Viking and SMART Modular). This and the higher volume of drives used will result in higher quality and more reliable SSDs compared to if we had just continued using the drives as a captive design in our own arrays."

See also:- sauce for the SSD box gander - Nimbus enters SAS SSD market

60,000 wafers scrapped in memory fab

Editor:- July 6, 2017 - Given the current memory shortages it was interesting to see a report in DIGITIMES yesterday which said that 60,000 12" wafers had been scrapped recently at a Taiwan-based fab owned by Micron.

Although that fab makes DRAM, the context for that wafer number is that it's about the same as the worldwide wafer starts (for all manufacturers) of 3D nand in a single month. The cause of the scrappage was clarified in a later story.

Reactive acquires obsolete drive emulation specialist ARRAID

Editor:- June 16, 2017 - Reactive Group today announced the acquisition of ARRAID.

Integrating proprietary FPGA-based technology and industrial grade CF cards as the media, Arraid manufactures new drive replacements for SMD, HISI, HPIB, MAC, OMTI, Pertec and XMD protocols, and several other legacy drive replacements including SCSI-1 and SCSI-2 and floppy disk drives. Legacy computer systems simply see Arraid's drives as if they are the original drives, enabling them to be field-replaced without the need for any software changes.

CPUs for use with SSDs in the Post Modernist Era of SSD

Editor:- March 22, 2017 - A new blog on - optimizing CPUs for use with SSDs in the Post Modernist Era of SSD and Memory Systems - was prompted by a question from a startup which is designing new processors for the SSD market.

What's the best way to adapt a processor design for use in the SSD market? the article

Parallel Machines gets patent for recovering distributed memory

Editor:- March 2, 2017 - Parallel Machines has been assigned a patent related to healing broken data in shared memory pools, according to a recent news report.

See also:- data recovery, SSD data integrity, security risks in persistent memory, fault tolerant SSDs

a RAMdisk in flash?

Editor:- February 27, 2017 - The use of flash as a RAM tier was being talked about 5 years ago by Fusion-io and since then the market has got used to the idea. As you'll know if you've seen the SSD news pages recently, there are now many different offerings in the market - ranging from NVDIMMs to software which claims to work with any flash form factor.

But how good are such systems?

Well there are vendor benchmarks... but here's another way you might get insights.

A new blog takes an unusual approach - a probe into the future of the memory systems market.

I stuck my neck out here when I said "this may be a stupid question but... have you thought of supporting a RAM disk emulation in your new flash-as-RAM solution?" the article

is it realistic to talk about memory IOPS?

Editor:- August 11, 2016 - 24 million IOPS on a single device is the title of a recent blog from Storage Switzerland which is a briefing note about Crossbar which operates in the converging segments of the alt nvm and DIMM wars markets.

Among other things author - George Crump - says "Crossbar believes that it can achieve 24 million IOPS on a single 4TB NV-DIMM without the use of a RAM buffer or a capacitor." the article

Editor's comments:- when startups enter new emerging markets they are often tempted to make headline grabbing claims.

And I think the "24 million IOPS" (IOPS as in you and I think about them) has to be interpreted in that context. (How can you claim record breaking IOPS when all you've got is memory IP - and that's just part of a yet to be integrated technology set which together makes IOPS?)
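For a sense of scale - here's my own back-of-envelope arithmetic (nothing from Crossbar's materials) showing what a headline IOPS number implies as a per-operation time budget:

```python
# Back-of-envelope check (editor's illustration, not vendor data):
# what per-operation service time does a headline IOPS figure imply?

def implied_service_time_ns(iops: float, queue_depth: int = 1) -> float:
    """Average time budget per I/O in nanoseconds at a given concurrency."""
    return queue_depth * 1e9 / iops

# A claimed 24 million IOPS leaves under 42ns per operation if the
# operations were strictly serial - i.e. the headline number only makes
# sense with massive internal parallelism plus a full software stack,
# which is exactly the "yet to be integrated" part of the claim.
print(round(implied_service_time_ns(24_000_000), 1))   # serial case
print(round(implied_service_time_ns(24_000_000, 64)))  # at queue depth 64
```

The second figure shows why queue depth always needs to be quoted alongside any IOPS claim: the per-operation budget scales linearly with concurrency.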

This is not to decry the importance and validity of the tides of change in the SCM SSD DIMM wars market - which have consumed nearly half of my working hours in the past year.

We saw similar wild claims when the startup Fusion-io was trying to get across how PCIe SSDs would change the enterprise storage market by reference to the nearest similar technology when Fusion said in 2007 it would replace SANs. (Because SAN based SSD accelerators were at that time the SSD market's dueling weapons of choice.)

Going back to Crossbar - there is a genuine problem for the industry (which I touched on in an earlier post about Diablo's DMX software) - which is - what are the most useful metrics to judge tiered memory systems by?

As we've seen in the SSD accelerated storage pool market since 2009 - there's a wide spectrum of use cases and cost considerations which have many viable business intersections.

We need new "goodness" numbers for DIMM wars memories.

But I think using IOPS to characterize a memory product is less useful to describe why people might want to look at it than wattage, raw capacity in a DIMM, uncached raw R/W latency and price.

And - most important of all - what software does it work with? And how well does the software behave?

Non-Balanced Wear Leveling - a paper by Renice

Editor:- July 28, 2016 - Renice Technology recently published a paper - Non-Balanced Wear Leveling Algorithm (pdf) - which outlines the thinking behind a specific technique in its industrial SATA3 SSD controller - model RS3502-IT - to improve endurance by up to 3x compared to traditional methods.

This is one of several techniques used in this controller which together give a 20x improvement in lifespan when using MLC. the article (pdf)

Editor's comments:- Ever since the first flash devices were evaluated it has been known that some blocks are much better than others.

As an example in this paper Renice shows that in a modern 16GB MLC flash chip - even after just 10 P/E cycles the controller is able to see a 3x difference between the fastest and average program time and over 30% difference between the slowest and fastest read times.

The quality of wear resistance tells you something which can be used to grade blocks.

Renice's non-balanced wear leveling algorithm leverages these naturally occurring process variations so that "the higher wear resistance blocks are selected to be erased more times while the lower ones get protected instead."

Although there are no fundamentally new ideas presented in this paper - because the technique is just one permutation of many from the superset of all adaptive R/W techniques - this paper does provide a useful survey of classical wear leveling techniques along with their associated trade offs in performance and endurance.
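To make the quoted selection rule concrete, here's a toy sketch - my own illustration, not Renice's firmware, and the "strongest third" grading threshold is an arbitrary assumption - of grading blocks by measured program time and biasing erase victim selection towards the stronger blocks:

```python
# Hedged sketch of "non-balanced" wear leveling - not Renice's actual
# algorithm. Blocks are graded by measured program time (the paper
# observes up to 3x spread even after only 10 P/E cycles); fast-programming
# blocks are treated as wear-resistant and preferred as erase victims,
# while weak blocks are protected from further erases.

from dataclasses import dataclass

@dataclass
class Block:
    addr: int
    avg_program_time_us: float  # measured by the controller at runtime
    erase_count: int = 0

def pick_erase_victim(blocks: list[Block]) -> Block:
    # Rank by grade: faster programming => stronger block.
    ranked = sorted(blocks, key=lambda b: b.avg_program_time_us)
    # Only consider the strongest third (assumed threshold) - the
    # "non-balanced" part is that weak blocks are simply never chosen
    # while strong candidates remain.
    strong = ranked[: max(1, len(ranked) // 3)]
    # Within the strong group, still spread wear evenly.
    return min(strong, key=lambda b: b.erase_count)

pool = [Block(0, 220.0), Block(1, 700.0), Block(2, 250.0, erase_count=5)]
print(pick_erase_victim(pool).addr)  # block 0: strong grade, fewest erases
```

The interesting design question - which the paper addresses - is how the trade off plays out once the strong blocks have absorbed many more cycles than the weak ones.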

I got a good sense of judgment and balance in this paper - given the unstated context.

Context is always important - and these techniques are discussed in the context of general purpose, simple low power industrial SSDs which use modest speed SSD processors and skinny RAM flash caches.

That's distinct from the thinking in new generation enterprise array controllers - in which visibility into other SSDs in the same array, larger ratios of DRAM and knowledge about the applications software stack can also be leveraged to improve endurance.

an SSD way of looking at hard drives

Editor:- May 4, 2016 - In an ironic twist of fate - it looks as though hard drive vendors may find it useful to characterize some aspects of HDDs in a way which can be easily related to value judgement numbers created for SSDs.

A recent article - when did HDDs get SSD-style DWPD ratings? - in The Register brings to our attention that hard drives are now being specified with write limits. The author Chris Evans (who also blogs as Architecting IT) conveniently provides readers with a list of HDD models along with their DWPD equivalent ratings. the article

See also:- what's the state of DWPD in SSDs?
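For readers new to the metric - the DWPD arithmetic itself is simple. The numbers below are illustrative round figures, not any particular drive's rating:

```python
# DWPD (drive writes per day) relates a drive's total rated writes (TBW)
# to its capacity and warranty period:
#   DWPD = TBW / (capacity * warranty period in days)

def dwpd(tbw_tb: float, capacity_tb: float, warranty_years: float) -> float:
    """Drive writes per day from total rated terabytes written."""
    return tbw_tb / (capacity_tb * warranty_years * 365)

# A hypothetical 1TB drive rated for 1825 TB written over a 5 year
# warranty works out at exactly 1 full drive write per day.
print(round(dwpd(1825, 1.0, 5), 2))
```

Running the conversion the other way - from a published DWPD figure back to TBW - is how lists like the one in the article above make HDD and SSD write limits directly comparable.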

worst case response times in DRAM arrays

Editor:- March 1, 2016 - Do you know what the worst-case real-time response of your electronic system is?

One of the interesting trends in the computer market in the past 20 years is that although general purpose enterprise servers have got better in terms of throughput - most of them are now worse when it comes to latency.

It's easy to blame the processor designers and the storage systems and those well known problems helped the SSD accelerator market grow to the level where things like PCIe SSDs and hybrid DIMMs have become part of the standard server toolset. But what about the memory?

Server memory based on DRAM isn't as good as it used to be. The details are documented in a set of papers in my blog - latency loving reasons for fading out DRAM in the virtual memory slider mix.

1ms VCC blip to 0V is enough to kill many SSDs!

editor:- January 27, 2016 - an incoming email this morning started like this...

Hi Zsolt,

Following on from our previous conversation on power problems - we've found that a significant number of SSDs will die entirely if the voltage rails are pulled to ground for (as little as) a 1mS period.

Technical sales demos often go along the line of - yes you can do that, the drive should handle it fine....... oh crap :-)

The sender was Andy Norrie, Technical Director - Quarch Technology who was continuing a conversation started 3 years earlier about problems in SSDs due to untested power loss vulnerabilities.

Andy's company designs intelligent power rail units for SSDs which can help designers verify immunity from, or sensitivity to, what-if? power rail disturbance vulnerabilities caused by scenarios like hot plug spikes, power up-down sensitivity, noisy generators etc.

Quarch announced today that it is becoming better known by SSD reviewers and Andy also told me - "last month we put around 75 test units into a single lab of a major SSD company."

...Later:- More detail on the exact nature of the 1ms blip test emerged from Michael Dearman, Founder of Quarch Tech, who (after seeing this post) said this...

"Hi All, I have broken several drives in customer demos with the 1mS test, the key factor is that in this test we don't just stop supplying power and let the rail float (like a disconnection), we pull the rail hard down to ground (as would happen with a power supply crowbar). In this instance the drive should isolate itself from the host power supply to preserve its internal charge and complete its power down, but doesn't always manage it!"

On seeing that - Sudhir Brahma Principal Engineer at Dell said:- "Usually these glitches result in data corruption....I will be surprised if they kill the ssd. I was working on one such issue- traced it down to a firmware area (a bug) where a glitch could potentially cause loss of data in flight......u need to write intrusive code for that....true it is tough Are u saying that by doing those glitches, u actually stoned the drive? We built such glitch generators and ran them on the ssds, but never had a stoned SSD."

Michael Dearman said - "yes, we have had production drives go completely dead."

There were other interesting comments re the above post from people who have also used Quarch Tech's SSD testers - which you can see via my linkedin

MIT research findings - FPGA processing in flash arrays

Editor:- July 14, 2015 - Flash SSDs with in-situ processing in regular RAM cached servers can deliver nearly the same apps performance as fat RAM servers (but at much lower cost and lower electrical power).

That's one inference from a recent story - Cutting cost and power consumption for big data - in MIT news - which summarized a research paper at ISCA 2015 - BlueDBM: An Appliance for Big Data Analytics .

Part of the system architecture in the research included a network of FPGAs which routed data to the flash arrays and offloaded some of the application specific processing.

"This is not a replacement for DRAM," said Professor Arvind, whose group at MIT performed the new work. "But there may be many applications that can take advantage of this new style of architecture. Which companies recognize: everybody's experimenting with different aspects of flash. We're just trying to establish another point in the design space." the article

less than 10% of FC SAN sites rely on 3rd party benchmarks

Editor:- June 9, 2015 - Load DynamiX (a storage performance testing and validation company) recently released the results of a survey (pdf) characterized by heavy users of FC SANs (71%) and 2PB or more of data (76%).

Among the findings in this set of 115 participants:-
  • over half (54%) planned to add all flash arrays to their storage assets in the next year
  • one third (34%) used custom performance scripts as part of their pre purchase and deployment evaluations
  • users were heavily reliant on their current and potential vendors for news about new products and technologies - and nearly twice as likely (83%) to rely on news from vendors compared to online magazines (44%) the article (pdf)

Benchmarking and Performance Resources

Editor:- February 6, 2015 - When it comes to SSDs - an SSD which is faster in a way that you can economically use - such as by converting faster latency into competitive dollars (trading banks) or by satisfying more virtual users with less servers (nearly everyone who owns a lot of heavily used servers) is worth looking at.

Although performance is not the only thing (and often is not even the most important thing) which makes up the cost of buying an SSD - or the justification to buy it - performance has been one of those parameters which - because it has helped to sell products - even when the numbers were unreliable or abused - has attracted a great deal of creative literary output in the SSD industry. Most of it fiction. Some of it fact.

I've written a lot of articles and emails on this theme myself. So many indeed - that I sometimes find myself in danger of writing something new - and then getting a sense of deja vu. IOPS? - I've got a feeling I wrote something like this before? A quick search confirms - yup I did. - Was it really that long ago? Let's just update the links so it makes sense if someone else finds it later.

It seems I am not alone in that respect. And a recent post on linkedin suggests a much better way of handling that.

The idea came from Greg Schulz, Founder of StorageIO - who has recently curated a whole bunch of articles which he's written, edited or likes into a single resource page - which he calls - Server and Storage I/O Benchmarking and Performance Resources

If you have the time - Greg has many articles on this topic which will inform and delight you.

hot topics at

Editor:- January 12, 2015 - reader interests in Q4 2014 are reflected in the most popular SSD articles list.

The big change - compared to a year ago - is that memory channel SSDs have become as hot a subject for reader research as PCIe SSDs were back in an earlier phase of the market in the first half of 2009.

Who Needs 10Gbps USB?

Editor:- October 29, 2014 - SSDs are at the forefront of the thinking in a new article today - Who Really Needs USB 3.1? by Eric Esteve.

Eric's blog sketches out a 5 years into the future application picture for this new (10Gbps) iteration of the USB connected story.

See also:- storage interface chips, market research

Tanisys enters SSD ATE market

Editor:- August 5, 2014 - Tanisys Technology today announced details of a new SSD ATE test system which will be shown at the Flash Memory Summit.

Tanisys's SX3-OGT test system (which includes benchmarking and validation suites from OakGate Technology) supports popular SSD interfaces including PCIe, SAS and SATA.

The SX3-OGT also supports fast emerging protocols such as NVMe and AHCI. The SX3-OGT is available in bench top configuration for engineering applications and with multiple burn-in chambers for production.

And the best buy SSDs shall be the worst (if you change your workloads)

Editor:- August 2, 2014 - An applications optimized SSD system can be the cheapest buy - if you always use it for the original purpose - but it can be a poor choice if you throw the wrong type of applications at it. Enter - the good ole general purpose fast SSD array.

The conflicts are examined in a new blog - Real Flash Storage Systems multi-task! written by Woody Hutsell, IBM who among other things says - "It just so happens that flash appliances with built-in deduplication are the worst choices for database acceleration." the article

The idea that an SSD which is best for one type of use may have the worst characteristics for another - was also examined from an architectural point of view in my classic article - how fast can your SSD run backwards?

real-world performance of flash storage systems

Editor:- July 23, 2014 - How does flash storage perform in the real world? - Demartek aims to provide some answers by reporting on the performance tests which it has carried out on SSD and hybrid systems from many of the leading enterprise SSD companies in a session next month at the Flash Memory Summit (August 5).

Demartek says attendees will come away with reasonable estimates of what they can expect in practice and the results also reveal additional advantages of flash-based storage, with what Dennis Martin, President - Demartek calls "happy side effects".

Editor's (later) comments:- see Dennis's paper - Real-World Performance of Flash-Based Storage Systems (pdf)

say hello to CacheIO

Editor:- June 10, 2014 - CacheIO today announced results of a benchmark which is described by their collaborator Orange Silicon Valley (a telco) as - "One of the top tpm benchmark results accelerating low cost iSCSI SATA storage."

CacheIO says that the 2 million tpm benchmark on CacheIO accelerated commodity servers and storage shows that users can deploy its flash cache to accelerate their database performance without replacing or disrupting their existing servers and storage.

Editor's comments:- The only reason I mention this otherwise me-too sounding benchmark is that although I've known about CacheIO and what they've been doing with various organizations in the broadcast and telco markets for over a year - I hadn't listed them before.

That was partly because they didn't want me to name the customers they were working with at that time - but also because with SSD caching companies becoming almost as numerous as tv stations on a satellite dish - I wanted to wait and see if they would be worth a repeat viewing. (And now I think they are.)

what's in a number?

Editor:- March 4, 2014 - SSDserver rank is a latency based configuration metric - proposed as a new standard - which can tersely classify any enterprise server - as seen from an SSD software perspective - by a single lean number rating from 0 to 7. the article

ESG reports on clustering Virident's PCIe SSDs

Editor:- January 23, 2014 - ESG today published a test report for Virident / HGST's FlashMAX II (PCIe SSDs) which validated the ability of this product family and its related software (vShare) to be configured for useful operation in an Oracle RAC environment in high availability configurations clustered via Infiniband.

Although ESG said performance was good they also commented on some current limitations of the product suite for this type of application. In particular:-
  • the lack of a graphical interface for setup and performance monitoring, and
  • the lack of support for other fabrics such as 10GbE (mentioned in the report as a future option) and PCIe fabric (which was not mentioned at all in this report).

how to avoid hot pluggable PCIe SSD failures

Editor:- December 3, 2013 - What happens if you test PCIe SSDs for their sensitivity to data corruption or even failure - in the event of sudden power loss?

You'd think that with more hot pluggable products coming into the market - especially in the 2.5" form factor - that the experimental outcomes would be known by the designers and problems debugged so that users wouldn't have to worry.

In August 2013 Quarch Technology launched some special test equipment to inject power related faults into PCIe SSDs - and the company today announced it has extended this range to automate power line error testing of PCIe SFF SSDs.

Andy Norrie, Technical Director at Quarch Technology told me today that "Almost every combination of test kit we have tried in Quarch (a number of friendly customers lent us kit and eval drives to get the new module up and running) has failed in some way. Sometimes failing to come back up again nicely, sometimes with a full BSOD which will almost certainly have risked data loss."

memory channel SSD vs PCIe SSD write latency - 3rd party benchmarks

Editor:- November 7, 2013 - If you've been wondering how Diablo's memory channel storage compares with PCIe SSDs - click here for a new whitepaper (pdf) which includes some useful data. The application isn't important but it's the first public glimpse which goes usefully beyond the graphs shown in the product launch documents.

The highlight for me is a mean write latency of about 30µS for MCS compared to about 100µS for PCIe SSDs - at a particular R/W ratio which may of course look nothing like your own setups.

Nimbus scores $40 per virtual desktop in IOmark-VDI

Editor:- October 1, 2013 - I was talking recently to Tom Isakovich, CEO and founder of Nimbus Data Systems about the state of the market and a possible date for an IPO (which you can read more about in SSD news).

After that we moved onto the subject of why enterprise customers buy SSD arrays - and we traded stories about some of the explanations which get tossed around like IOPS per dollar - which when you scrutinize them in any detail are ridiculous. We've both seen leading edge silicon SSD companies put nonsensical graphs into their marketing presentations which don't lead you anywhere useful in the real world if you follow the superficial analysis. (That's because these vendors don't make systems - and are many steps removed from genuine enterprise thinking.)

Tom said most of his customers couldn't tell you how many IOPS their apps were demanding.

I said I've been writing about the "cost of satisfying a given number of virtual users for a particular type of app" as being a useful comparison figure (for storage). We both agreed that even if enterprises don't know for sure what their throughputs or IOPS are - they have a good idea of how many users they're trying to serve within their organization or at customer facing web interfaces. The payroll tells you one, the marketing people can tell you the other. And accounting can tell you how much it all costs. You don't need storage analyzer tools to get a feeling for where the ground level lies.

As our conversation had already veered towards the subject of a simple way for users to compare the costs of SSD storage for particular types of apps - and as I'd asked the question - he said there was a benchmark called IOmark-VDI which Nimbus had participated in recently with the Evaluator Group. He said he went into the process because he thought it might be a good thing to try out - and was gratified to find that it showed the Nimbus product in a very good light: a cost under $40 per virtual desktop (pdf) achieved by a 2U Nimbus system supporting up to 4,032 linked clone VDI images.
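The appeal of that comparison figure is partly that the underlying arithmetic is trivial. The system cost below is a hypothetical round number chosen only to land near the published result - it is not a figure from the IOmark-VDI report:

```python
# "Cost per virtual user" as a storage comparison figure: total system
# cost divided by the number of users served. Illustrative numbers only.

def cost_per_user(system_cost_usd: float, users: int) -> float:
    """Dollars of system cost per virtual user served."""
    return system_cost_usd / users

# A hypothetical system costing $160,000 serving 4,032 linked clone
# desktops comes in just under the $40/desktop mark discussed above.
print(round(cost_per_user(160_000, 4032), 2))
```

As the conversation above suggests, the inputs to this number (headcount, budget) are ones that enterprises actually know - unlike their IOPS demand.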

measuring enterprise SSD performance - intro by Micron

Editor:- August 23, 2013 - EDN today published an introductory article on the subject of measuring enterprise SSD performance - written by Doug Rollins, Senior Applications Engineer at Micron - which could be useful for newcomers to this topic as it expounds some of the basic assumptions and jargon. the article

See also:- Can you trust flash SSD specs & benchmarks?

FIO's ION software enables Breakthrough Shared Storage Performance

Editor:- June 13, 2013 - The performance of Fusion-io's ION Data Accelerator software - which you can add to its PCIe SSD cards, any standard server and some FC adapters to roll your own SAN rackmount SSD - is the point of a new blog by the company today which celebrates recent benchmarks for 2, 4 and 8 processor HP server configurations (pdf).

Stec's profiler removes guesswork in sizing SSD caches for hybrid storage

Editor:- May 21, 2013 - Stec today announced that it's offering a free profiling tool - EnhanceIO Profiler - which can enable users to determine how much benefit they would get from using its EnhanceIO (SSD caching software) - before they even install any SSDs.

The company says that the "non-disruptive installation" can save hours of administrative trial and error by recommending the optimal block size, and the capacity and type of SSDs to be used for maximum performance gain.

SSD performance characteristics and limitations

Editor:- March 15, 2013 - published today - the new home page blog is - a toolkit for understanding flash SSD performance characteristics and limitations.

It brings together in one place many of the tools I use every day when thinking about and assessing SSDs.

in memory database even better with FIO's flash

Editor:- November 20, 2012 - McObject recently released new benchmark results which indicate that the in-memory database company is not so unfriendly to flash SSDs as you may have thought from reading earlier company positioning papers.

It seems that a software product - which was originally designed for the DRAM-HDD world - is a good fit in the flash SSD world too - if you have the right scale of data and the right SSD. more

new article - adaptive flash care IP (including DSP)

Editor:- June 19, 2012 - A few months ago I promised readers that I would publish a tentative list of SSD companies who use what I loosely called "adaptive DSP technologies in SSD IP" in their new designs.

It's one of the most important design techniques being used in some leading flash SSDs - in which the SSD designer can adapt the reliability, speed and power consumption of the SSD - not based on some far away population model of flash chips - but optimized for the chips in each SSD - adapting the controller behavior to what is measured and learned from interacting with the flash chips installed. This is a market changing technique. the article

analyzer suite could speed up auto-tiering SSD evaluations
Editor:- November 29, 2011 - hyperI/O today announced availability of its Disk I/O Ranger software analysis tool for Windows environments.

The company says this will help users diagnose and understand disk storage access performance problems and verify that QoS levels are being met at the application/file/device level. It could also simplify the evaluation of auto-tiering SSD appliances by collecting real-time metrics.

Editor's comments:- I asked Tom West, President of hyperI/O what he was seeing of the SSD market from his perspective of selling storage analysis tools. He said -"One of the major users of the hIOmon software is listed within the top 10 of your latest - Top 20 SSD Companies."

Microsemi reports shake rattle and roll SSD results

Editor:- May 19, 2011 - Microsemi today announced that its TRRUST-STOR (2.5" rugged SSDs) are the industry's first SSDs to pass zero-failure testing at vibration levels that are consistent with the industry's most severe environments.

"No other SSD manufacturers have published zero-failure results at this level of vibration testing, which was conducted while our drives were fully operational, reading and writing data," said Jack Bogdanski, director of marketing for Microsemi. "The ability for SSDs to perform flawlessly under adverse environmental conditions is becoming increasingly important for applications where it is critical that data be protected at all times."

Microsemi's SSD units were pre-conditioned at 85°C for 336 hours.

How and why to monitor VM Performance

Editor:- February 23, 2011 - How to Proactively Monitor VM Performance is a new article on Data Center POST written by Alex Rosemblat, Product Marketing Manager at VKernel - who says "Proactive monitoring of a virtualized data center can assist in finding potential performance problems before they occur..."

Editor's comments:- OK he says a lot more than that - and that's why I mentioned his article here.

I used to do a lot of performance analysis in my pre cut and paste career because I designed systems with guaranteed apps response times. And in my current job I always check my stats before I look at my email. So I have a lot of empathy for the storage test and analysis market. The more you understand about the internals of complex systems the less likely you are to get mugged by them. ... read the article

SandForce publishes list of approved test tool partners

Editor:- January 31, 2011 - SandForce has started a directory of companies, tools, technologies and services to help SSD designers integrate its SSD processors and get them to market more quickly.

Each member company in the new SandForce Trusted™ program ensures that their products and/or services fully support SandForce SSD Processors and provides response to SandForce customer inquiries within 24 hours while committing to high-priority support for fastest problem resolution.

Editor's comments:- 6 out of the 7 initial companies in the new program provide test / design verification products.

Megabyte liked to keep his systems in peak condition.
DWPD - uses and limits
will SSDs end bottlenecks?
Can you trust SSD performance data?
SSD is going down!
We're going down!
Surviving sudden power loss
what's the holdup in MIL SSDs?
the upside and downside of caps
Reliability is more than just MTBF...
but unlike Quality - it's not free.
storage reliability - news & white papers

You don't have to look at many different SSD company web sites before you start asking yourself how do these companies differentiate themselves and make money?
custom matters

The more you study the characteristics of different SSDs - the quicker and more easily you will start to anticipate useful behavioral characteristics of any new SSD - and assimilate new SSDs in your plans. And you'll start to recognize symptoms of "missing technical information" too. These are things which it's important for you to know - but which don't appear in the initial info you see about the new SSD.
understanding flash SSD performance characteristics and limitations - a toolkit


"We are at a junction point where we have to evolve the architecture of the last 20-30 years. We can't design for a workload so huge and diverse. It's not clear what part of it runs on any one machine. How do you know what to optimize? Past benchmarks are completely irrelevant.
Kushagra Vaid, Distinguished Engineer, Azure Infrastructure - quoted in a blog by Rambus - Designing new memory tiers for the data center (February 21, 2017)

"Our past work showed that application-unaware design of memory controllers, and in particular memory scheduling algorithms, leads to uncontrolled interference of applications in the memory system" - said Onur Mutlu, Carnegie Mellon University.
Are you ready to rethink RAM?


The paradox for the potential user is that 2 different flash SSD designs - which have apparently identical throughput, latency and random IOPS specs in their datasheets - can behave in ways that are orders of magnitude different in real applications.
the Problem with Write IOPS in flash SSDs


"Knowing whether a flash SSD is skinny, regular or fat is helpful, because each set has common modalities."
RAM Cache Ratios in flash SSDs


Locality and latency are well understood related concepts in the design of tiered memory and caches. Before 2016 business models in the SSD market had something similar.

That's no longer true.
1 big market lesson in SSD year 2016


Bad block management in flash SSDs
This is an introduction to the thinking behind one of the many vital functions inside a flash SSD controller.

Native media defect rates in new flash memory chips have grown steadily worse in the past 10 years as geometries have shrunk.
This article enumerates the scale of the problem and explains how intrinsically dodgy flash memory is transformed into dependable flash SSDs which you can entrust with your data. the article
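As a taster of the thinking behind that article, here's a minimal sketch - my own illustration, not any particular controller's design - of the core remapping idea: redirecting logical blocks away from blocks marked bad (at manufacture, or at runtime when a program/erase fails) into a reserved spare pool:

```python
# Hedged sketch of bad block management's core mechanism - a remap table
# plus a spare pool. Real SSD controllers do far more (ECC, wear
# leveling, read retries); this shows only the mapping layer.

class BadBlockManager:
    def __init__(self, total_blocks: int, spares: list[int]):
        self.total_blocks = total_blocks
        self.remap: dict[int, int] = {}   # logical block -> spare physical
        self.spares = list(spares)        # reserved replacement blocks
        self.bad: set[int] = set()

    def mark_bad(self, block: int) -> int:
        """Retire a failing block and return its replacement."""
        if not self.spares:
            # When the spare pool is exhausted the drive is end-of-life.
            raise RuntimeError("spare pool exhausted")
        self.bad.add(block)
        replacement = self.spares.pop(0)
        self.remap[block] = replacement
        return replacement

    def physical(self, logical: int) -> int:
        """Healthy blocks map straight through; bad ones are redirected."""
        return self.remap.get(logical, logical)

bbm = BadBlockManager(total_blocks=1024, spares=[1000, 1001])
bbm.mark_bad(17)          # e.g. an erase failure detected on block 17
print(bbm.physical(17))   # accesses now go to spare block 1000
print(bbm.physical(18))   # untouched blocks are unaffected
```

The size of the spare pool relative to the expected defect rate is one of the trade offs the article explores.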


partial A to Z index

1.0" SSDs
1.8" SSDs
2.5" SSDs
3.5" SSDs

3 things could've killed flash SSD market

1976 - 2011 - SSD history
2011 - SSD timeline
2012 - SSD look ahead

20K RPM HDDs - no-show

About the publisher
Acquired storage companies
After SSDs... what next?
Analysts - SSD market
Analysts - storage market
Animal Brands in the storage market
AoE storage
Architecture - network storage
Archives - storage news
Articles - SSD
Auto tiering SSDs

Backup articles - tape / D2d / optical
Backup software
Bad block management in flash SSDs
Benchmarks - SSD - can you trust them?
Best / cheapest SSD?
Big market picture of SSDs
Bookmarks from SSD leaders
Branding Strategies in the SSD market
Buyers Guide to SSDs

Cables for storage interfaces
CD, DVD and other optical storage drives
Chips - storage interface / processors
Chips - SSD on a chip & DOMs
Cloud storage - with SSD twists
Controller chips for SSDs
Cost of SSDs - why so much?

more - A to Z

STORAGEsearch is published by ACSL