
SSD empowered cloud


Editor:- June 12, 2018 - GridGain Systems today announced the beta release and free trials of GridGain Cloud - an in-memory cache-as-a-service which allows users to rapidly deploy a distributed in-memory cache and access it using ANSI SQL-99, key-value or REST APIs. The result is in-memory computing performance in the cloud which can be massively scaled out and deployed in minutes for caching applications.

why we need cloud chips

Editor:- March 10, 2018 - "In any computer architecture, it takes a lot more energy to fetch and schedule an instruction than it does to execute that instruction" says Rado Danilak, founder and CEO - Tachyum - in his new blog - Moore's Law Is Dying - So Where Are Its Heirs? - which among other things shows how high the transactional costs of fetching instructions and data are in classical processors. the article

Editor's comments:- the needs of the cloud, coupled with a growing understanding of the tradeoffs between processors, memory, controller dynamics, software and energy consumption since the widespread deployment of solid state storage, have been the inspiration for rethinking all the classical elements of computer architecture. Some of that thinking has been rooted in the memory space but just as significant has been a rethinking of what processors should aim to do.

Tachyum announced external funding for its Cloud Chip last month. And as with previous disruptive technologies - part of the warm up process for the market - is to educate more people about how things work now so they can better appreciate what the new technologies offer.

ioFABRIC awarded patent for latency aware software

Editor:- February 27, 2018 - ioFABRIC today announced it has been awarded a patent for an innovation in its Vicinity policy engine for creating and managing data volumes based on latency requirements.

The patent protects a method for maintaining fast response times by auto-migrating data when hardware resources are added or decommissioned, when performance degrades due to overconsumption, or when application requirements change. Vicinity can apply this policy even when a volume is spread over multiple nodes and storage devices.
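As a rough illustration of latency-driven placement (a hypothetical sketch only - the tier names, costs and thresholds below are invented, and this is not ioFABRIC's patented Vicinity method), a policy engine of this kind might pick the cheapest tier that still meets a volume's latency target:

```python
# Hypothetical latency-aware placement policy - illustration only,
# not ioFABRIC's patented Vicinity implementation.

def pick_tier(tiers, target_latency_ms):
    """Choose the cheapest tier whose observed latency meets the volume's target."""
    candidates = [t for t in tiers if t["latency_ms"] <= target_latency_ms]
    if not candidates:
        # nothing meets the target: best effort, take the fastest tier
        return min(tiers, key=lambda t: t["latency_ms"])
    return min(candidates, key=lambda t: t["cost"])

tiers = [
    {"name": "nvme",     "latency_ms": 0.1, "cost": 10},
    {"name": "sata-ssd", "latency_ms": 0.5, "cost": 4},
    {"name": "hdd",      "latency_ms": 8.0, "cost": 1},
]
print(pick_tier(tiers, target_latency_ms=1.0)["name"])  # sata-ssd
```

Re-running a policy like this as observed latencies drift (hardware added, removed or overconsumed) is what triggers the auto-migration the patent describes.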

see also:- Latency - the SSD bookmarks, are we ready for infinitely faster RAM?

Microsoft acquires NASA's cloud hybridisor - Avere Systems

Editor:- January 3, 2018 - Microsoft today announced it has agreed to acquire Avere Systems.

Ron Bianchini, President and CEO - Avere Systems said - "When we started Avere Systems in 2008, our founding ideology was to use fast, flash-based storage in the most efficient, effective manner possible in the datacenter. Along the way, our team of file systems experts created a technology that not only optimized critical on-premises storage resources but also enabled enterprises to move mission-critical, high performance application workloads to the cloud." more from Ron Bianchini

Editor's comments:- There was a lot of deep thinking in Avere. I wish them luck in the reset and recompile chaos-sphere.


WekaIO compares cloud storage pools to IBM FlashSystem

Editor:- July 12, 2017 - WekaIO - a cloud storage software company - today emerged from stealth and announced details of its cloud-native scalable file system which the company says can deliver performance comparable to rackmount SSDs / AFAs.

Editor's comments:- The notable thing for me in this announcement was that WekaIO uses a performance benchmark compared against an IBM FlashSystem 900 (the descendant of the RamSan - the world's fastest storage systems from TMS).

WekaIO says "Utilizing only 120 cloud compute instances with locally attached storage, WekaIO completed 1,000 simultaneous software builds compared to 240 on IBM's high-end FlashSystem 900. The WekaIO software utilized only 5% of the AWS compute instance resources, leaving 95% available to run customer applications."

That's an ambitious positioning statement and offers users a glimpse into the kind of performance they can get by using flash assisted cloud services. Like other modern SSD fabric software - "WekaIO eliminates bottlenecks and storage silos by aggregating local SSDs inside the servers into one logical pool, which is then presented as a single namespace to the host applications."

Walmart generates 2.5PB of analyzable data every hour

Editor:- April 3, 2017 - Walmart's Data Café is a private cloud which supports business decision makers in its 20,000 stores who can access over 200 streams of internal and external data, including 40 petabytes of recent transactional data, which can be modelled, manipulated and visualized.

I learned the above stats in a new case study - Big Data At Walmart: How 40+ Petabytes Improves Retail Decision-Making by business author Bernard Marr who tells us how teams from any part of the business are invited to bring their problems to the analytics experts and then see a solution appear before their eyes on the nerve centre's touch screen "smart boards". the article

if your cloud leveraged service is down - it's your fault

Editor:- March 7, 2017 - "If your business leverages AWS, and you had an outage or degraded operation during the massive AWS outage last week, you can only blame yourself" - says Erez Ofer, Partner at 83North in his new blog - No Excuses. Among other things Erez Ofer says - "What is happening now after each cloud outage is a lot of learning by businesses on how to create systems that don't go down." the article

See also:- high availability SSD stories on

cloud adapted memory systems

Editor:- January 24, 2017 - Throughout the history of the data storage market we've always expected the capacity of enterprise user memory systems to be much smaller than the capacity of all the other attached storage in the same data processing environment.

A new blog on the home page of - cloud adapted memory systems - asks (among other things) if this will always be true.

Like many of you - I've been thinking a lot about the evolution of memory technologies and data architectures in the past year. I wasn't sure when would be the best time to share my thoughts about this one. But the timing seems right now. the article

Nimbus awarded patent for non blocking backplane technology

Editor:- December 21, 2016 - Nimbus Data Systems today announced it has been granted a patent - 9,268,501 - for the non-blocking data fabric architecture which is used in its petabyte scale SSD racks.

"Conventional HDD-centric architectures employed by the majority of all-flash array vendors trap flash performance behind legacy shared bus and scale-up designs," said Thomas Isakovich, CEO and Founder. "Now patented, Nimbus Data's Parallel Memory Architecture overcomes the limitations of generic off-the-shelf servers, capturing the full performance potential of all-flash technology."

Elastifile gets patent for flash-aware adaptive cloud scale data management

Editor:- November 3, 2016 - Elastifile today announced it has been granted a US patent (No. 9,465,558) for a method of flash-native, collaborative data storage when running on multiple interconnected nodes.

Elastifile's technology (which is integrated in software solutions) is aimed at the hybrid cloud market.

The patented technology enables efficient, distributed storage across full-mesh clustered architectures in which all nodes interact with one another across multiple sites and clouds, in complex or constantly varying network conditions, and/or at a scale that may encompass thousands of diverse configurations.

"One of the greatest challenges for private and hybrid cloud data services has been ensuring consistent performance for distributed data writing, especially due to noisy and mixed environments," said Ezra Hoch, chief architect at Elastifile. "Our patented approach adaptively and efficiently manages how and where data is written, mitigating the constantly changing conditions—at cloud scale."

Maxta offers free 24TB version of its SDS software

Editor:- September 29, 2016 - Maxta today announced the general availability of a free download of its MxSP SDS software for qualified organizations in the U.S., Canada and select European countries.

Approved registrants will receive a perpetual, transferable license to a fully-featured version of the software free of charge, enabling them to configure and deploy a three-node HCI cluster with a maximum storage capacity of 24TB.

lowering cloud wattage with low DWPD SATA SSDs

Editor:- August 4, 2016 - It's the faster SSD products (like PCIe SSDs and memory channel SSDs) which capture the attention of readers - because they show what is possible (and after a long enough interval we see pioneering enterprise speeds becoming commonplace at lower prices, as we're now seeing with M.2 SSDs). Nevertheless - when it comes to where most of the SSD slots are - in arrays, webscale and cloud - the workhorse of the SSD market is still the simple 2.5" SATA SSD.

disk writes per day in enterprise SSDs
Well, maybe not that simple - because since about 2012 we've seen subtly power optimized and mostly read oriented (low DWPD) SATA SSD product lines introduced specifically for use in dense populations in the cloud.

It's a big market for SSD vendors and SATA SSDs are a low risk choice for users because there are so many competing companies and products that ensure continuous improvements in value and quality.

A new addition to this crowded market is the - Nytro XF1230 (pdf) - a 1.9TB capacity SSD which consumes less than 5W, is rated at 0.67 DWPD - which Seagate announced will ship to channel partners next month.
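A DWPD rating translates directly into a daily write budget. A quick sketch of the arithmetic, using the Nytro figures quoted above (the 5 year warranty period here is an assumed figure for illustration, not from Seagate's datasheet):

```python
def daily_write_budget_tb(dwpd: float, capacity_tb: float) -> float:
    """TB the drive is rated to absorb per day: DWPD x capacity."""
    return dwpd * capacity_tb

def lifetime_writes_tb(dwpd: float, capacity_tb: float,
                       warranty_years: float = 5) -> float:
    """Total TB written (TBW) implied by the DWPD rating over the warranty."""
    return dwpd * capacity_tb * 365 * warranty_years

# 0.67 DWPD on a 1.9 TB drive
print(daily_write_budget_tb(0.67, 1.9))  # ~1.27 TB/day
print(lifetime_writes_tb(0.67, 1.9))     # ~2323 TBW over an assumed 5 years
```

Low DWPD numbers like this make sense for read-mostly cloud workloads, where write volume per drive is small but drive counts are huge.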

Weka.IO has raised over $32 million for its SDS cloud cookbook

Editor:- June 8, 2016 - another new name unstealthing in the software defined storage market is Weka.IO, founded in 2014, which has announced the closing of Series B funding bringing its total capital raised to over $32 million.

Editor's comments:- In various slideshares by Weka.IO cofounder and CTO - Liran Zvibel - you can see how they're progressing with their big idea of enabling clouds and enterprises to have a single software based storage solution with good performance, efficiency and scalability.

There are interesting comments here about the latency impacts of garbage collection within the D software development environment.

Zvibel says an infrequent (a few times an hour) latency of 10ms for GC can become an infinite wait if the kernel is stressed on memory.

The variability of DRAM latency took many years to be widely appreciated and is what created opportunities for PCM and flash as DRAM (Micron and Diablo in the SCM DIMM wars) and also big PCIe RAM fabrics like those from A3CUBE.

what was the spark of opportunity for Weka.IO?

Liran's slideshare (from 2014) - the future of the data center - includes this comment:- "IBM, Oracle, Microsoft, SAP, Dell and the like lost their ability to shape the future of data centers."

Pure's CEO says his legacy systems competitors are 2-3 years behind in flash/cloud centric software

Editor:- May 26, 2016 - Pure Storage reported that revenue for its recent quarter was approx $140 million, up 89% from the year ago period.

In his blog which recaps business highlights CEO - Scott Dietzen - comments on the nature of the competition he sees from legacy storage companies.

He says - "In our view, refurbished mechanical disk-era designs from the last century cannot fulfill the needs of the modern data center: solid-state flash memory and cloud demand a holistic rethink. Yet the majority of FlashArray's and all of FlashBlade's competition comes from pre-cloud disk-centric retrofits..." the article

Avere ranked #1 in Google's cloud partner search list

Editor:- March 16, 2016 - How well does Avere Systems (and its virtual edge filer) work as a gateway to Google's cloud services? Apparently very well - as Avere today announced it had been named "Google Cloud Platform Technology Partner of the Year" for 2015.

Plexistor releases its Software Defined Memory

Editor:- January 26, 2016 - Plexistor today announced the availability of its Software Defined Memory (SDM) architecture for both on-premise and cloud-based deployment on AWS EC2.

Plexistor has a presentation (ppt) which outlines the launch product environment and gives indicative benchmarks.

Cache latency is key to side-channel attack technique which can breach cloud server security walls

Editor:- October 29, 2015 - Cache jitter and latencies are more than simply performance quality issues - they can be the root of security vulnerabilities too.

The juxtaposition of these concepts in colocated cloud servers presents risks which were reported recently by researchers at Worcester Polytechnic Institute.

The research team used a combination of techniques to first create a virtual machine on the same Amazon cloud server as a target machine (a technique known as co-location). They then used the co-located machine to spy on the target. By observing how it accessed information in memory, they could determine when it was retrieving its RSA key. Then by charting the timing of the memory access they were able to deduce the key's actual numeric sequence. the summary
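The core of such attacks is that a shared cache turns memory access time into an information channel. This toy simulation (the latencies, noise and threshold are synthetic - nothing here reproduces the researchers' actual co-location or RSA analysis) shows how thresholding probe timings can recover a victim's secret-dependent access pattern:

```python
# Simulated cache-timing side channel - synthetic numbers, illustration only.
import random

random.seed(0)
CACHE_HIT_NS, CACHE_MISS_NS = 40, 200   # illustrative probe latencies

def observed_latency(victim_touched_line: bool) -> int:
    # probing a monitored cache line is fast only if the victim touched it
    base = CACHE_HIT_NS if victim_touched_line else CACHE_MISS_NS
    return base + random.randint(-10, 10)  # measurement noise

secret_bits = [1, 0, 1, 1, 0, 0, 1, 0]   # the victim's key-dependent accesses
THRESHOLD_NS = 120

# the attacker sees only timings, yet recovers the bits by thresholding
recovered = [1 if observed_latency(bit == 1) < THRESHOLD_NS else 0
             for bit in secret_bits]
print(recovered)  # matches secret_bits
```

Cache jitter and latency are thus not just performance issues: any measurable timing difference that correlates with secret data is a potential leak.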

OCZ does that 3rd generation SSD firmware cloud thing (but gives it a better name)

Editor:- October 16, 2015 - It's no longer just the newcomers to the enterprise SSD market who are doing that 3rd generation / co-operative (whatever you want to call it) SSD controller firmware and host stack collaboration thing.

OCZ this week announced they're doing it too.

It's available in the Saber 1000 (2.5" cloud oriented, read mostly SSDs). And they've got a better name for it too - "Host Managed SSD Technology".

"Our new Saber HMS SSD, together with a software library and API, enable for the first time (in OCZ's product line) software orchestration of internal housekeeping tasks across large pools of SSDs, thus overcoming performance barriers that were simply not possible to address without this technology" said Oded Ilan, GM of OCZ's R&D Team in Israel.

There's more detail on how "with HMS APIs, a host can coordinate garbage collection, log dumps, and drive geometry data" (and graphics too) in OCZ's HMS product brief (pdf)

Lite-On says small NVMe M.2 PCIe SSDs could be a good fit for datacenter

Editor:- August 6, 2015 - Lite-On today unveiled a new NVMe M.2 PCIe SSD for datacenter environments.

The EP2 series delivers R/W IOPS up to 250K/25K respectively and low latency of 35µs. It also has power loss protection, end-to-end data protection, low power consumption, high endurance, sustained performance, scalability and customized firmware.

Editor's comments:- in an earlier press release (in June 2015) about supplying a related product line to an unnamed customer described as "one of the largest cloud service providers" Jeffrey Chang, Lite-On's Technical Product Manager said "The M.2 is perfect for where we believe the future of enterprise SSD cloud storage is going."

SuperCloud rebuilds RAID 20x faster with CoreRise PCIe SSD

Editor:- July 3, 2015 - CoreRise today noted some record breaking performance results from one of its customers - SuperCloud (a well known Chinese cloud server manufacturer) based on a configuration with CoreRise's PCIe SSDs in a 4U server with 2x 56Gb/s InfiniBand ports.

Among other things SuperCloud said its lab results showed that RAID rebuilding was 20x faster than without the SSD - using a RAID5 configuration of 6D+1P. While RAID throughput was 10 to 14GB/s and 1 to 1.5 million 4KB IOPS.
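The rebuild being benchmarked rests on simple parity arithmetic: in a 6D+1P RAID5 set the parity block is the XOR of the 6 data blocks, so any one lost block is recoverable as the XOR of the 6 survivors. A minimal sketch (generic RAID5 math, not CoreRise's implementation):

```python
# Generic RAID5 (6D+1P) parity and rebuild - illustration of the arithmetic only.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [bytes([i] * 4) for i in range(1, 7)]  # 6 data blocks (6D)
parity = xor_blocks(data)                     # 1 parity block (+1P)

lost = 2                                      # one drive fails
survivors = [b for i, b in enumerate(data) if i != lost] + [parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[lost]                  # lost block fully reconstructed
```

The XOR itself is cheap; a 20x rebuild speedup comes from the SSDs feeding this computation at flash read bandwidth instead of disk speed.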

bath tub curve is not the most useful way of thinking about PCIe SSD failures

Editor:- June 15, 2015 - A recently published research study - Large-Scale Study of Flash Memory Failures in the Field (pdf) - which analyzed failure rates of PCIe SSDs used in Facebook's infrastructure over a 4 year period - yields some very useful insights into the user experience of large populations of enterprise flash.
  • Read disturbance errors - seem to be very well managed in the enterprise SSDs studied.

    The authors said they "did not observe a statistically significant difference in the failure rate between SSDs that have read the most amount of data versus those that have read the least amount of data."
  • Higher operational temperatures mostly led to increased failure rates, but the effect was more pronounced for SSDs which didn't use aggressive throttling techniques - which can prevent runaway temperatures by scaling back write performance.
  • More data written by the hosts to the SSDs over time - mostly resulted in more failures - but the authors noted that in some of the platforms studied - more data written resulted in lower failure rates.

    This was attributed to the fact that some SSD software implementations work better at reducing write amplification when they are exposed to more workload patterns.
  • Unlike the classic bathtub curve failure model which applies to hard drives - SSDs can be characterized as having an early warning phase - which comes before an early failure weed-out phase of the worst drives in the population, and which precedes the onset of predicted endurance based wearout.

    In this aspect - a small percentage of rogue SSDs account for a disproportionately high percentage of the total data errors in the population.
The report contains plenty of raw data and graphs which can be a valuable resource for SSD designers and software writers to help them understand how they can tailor their efforts towards achieving more reliable operation. the article (pdf) See also:- SSD Reliability
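The temperature/throttling tradeoff noted in the study can be pictured as a simple control loop: above a soft temperature limit the drive progressively reduces its write rate so the die never reaches the hard limit. A hypothetical sketch (the 70°C/85°C limits and linear ramp are invented for illustration, not taken from the paper):

```python
def throttle_factor(temp_c: float, soft_limit: float = 70.0,
                    hard_limit: float = 85.0) -> float:
    """Fraction of full write rate allowed: 1.0 below the soft limit,
    ramping linearly down to 0.0 at the hard limit."""
    if temp_c <= soft_limit:
        return 1.0
    if temp_c >= hard_limit:
        return 0.0
    return (hard_limit - temp_c) / (hard_limit - soft_limit)

print(throttle_factor(60))    # 1.0 - full speed
print(throttle_factor(77.5))  # 0.5 - half the write rate
print(throttle_factor(90))    # 0.0 - writes paused until the drive cools
```

The study's observation follows directly: drives with a policy like this trade peak write throughput for a flatter temperature-vs-failure curve.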

Caringo gets patent for adaptive power conservation in SDS pools

Editor:- May 19, 2015 - Caringo today announced it has obtained a US patent for adaptive power conservation in storage clusters. The patented technology underpins its Darkive storage management service which (since its introduction in 2010) actively manages the electrical power load of its server based storage pools according to anticipated needs.

"The access patterns and retention requirements for enterprise data have changed considerably over the last few years to a store-everything, always accessible approach and storage must adapt," said Adrian J Herrera, Caringo VP of Marketing. "We developed Darkive to help organizations of any size extract every bit and watt of value while keeping their data searchable, accessible, and protected."

See also:- petabyte SSDs, the big market impact of SSD dark matter

another design win for Seagate's Nytro in China cloud market

Editor:- March 12, 2015 - QingCloud mentioned high capacity and low cost among the reasons for selecting Seagate's XP6209 (pdf) (PCIe SSD) as components to build the low latency SSD infrastructure of its cloud services for the China market - in a press release today.

Editor's comments:- who are the new cloud companies in China?

Meet China's Cloud Innovators - a blog by Charlie Dai, Principal Analyst - Forrester Research

See also:- the big market impact of SSD dark matter

the Top 10 Hyperscale Sites

Editor:- December 10, 2014 - IT Brand Pulse recently published a new list - the Top 10 Hyperscale Sites - measured by square footage, on its publication - World's Top Data Centers. So you won't be surprised that IT Brand Pulse says - "These hyperscale sites are some of the biggest data centers around the world..." the article

70% of raw enterprise storage capacity will be in hyperscale datacenters by 2016

Editor:- December 9, 2014 - In its Datacenter Predictions for 2015 press release today - IDC says - "By 2016, hyperscale datacenters will house more than 50% of raw compute capacity and 70% of raw storage capacity worldwide, becoming the primary consumers/adopters of new compute and storage technologies."

Efficiency is important for web scale users - says Coho

Editor:- October 9, 2014 - Facebook as a file system - a web scale case study - a new blog by Andy Warfield, cofounder and CTO - Coho Data - made very interesting reading for me - as much for the authoritative approach taken in Andy's systematic analysis - as for the object of his discussion (Facebook's storage architecture).

It reveals useful insights into the architectural thinking and value judgments of Coho's technology - and is not simply another retelling of the Facebook infrastructure story.

When you read it you may get different things out of it - because it's rich in raw enterprise ideas related to architecture, software, and dark matter users. All of which makes it hard to pick out any single quote. But here are 2.
  • re - the mismatch between enterprise products and user needs

    Andy Warfield says - "In the past, enterprise hardware has had a pretty hands-off relationship with the vendor that sells it and the development team that builds it once it's been sold. The result is that systems evolve slowly, and must be built for the general case, with little understanding of the actual workloads that run on them."
  • re efficiency and utilization

    Andy Warfield says - "Efficiency is important. As a rough approximation, a server in your datacenter costs as much to power and cool over 3 years as it does to buy up front. It is important to get every ounce of utility that you can out of it while it is in production."
There are many more I could have chosen. ... read the article
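Andy's "costs as much to power and cool over 3 years as it does to buy" approximation is easy to sanity-check. All the inputs below are illustrative assumptions of mine (a 500W server, a PUE of 2.0 to cover cooling overhead, and $0.10/kWh), not figures from the blog:

```python
def three_year_power_cost(watts: float, pue: float = 2.0,
                          usd_per_kwh: float = 0.10, years: float = 3) -> float:
    """Energy cost of running a server continuously, cooling included via PUE."""
    kwh = watts / 1000 * pue * 24 * 365 * years
    return kwh * usd_per_kwh

print(three_year_power_cost(500))  # 2628.0 - roughly the price of a commodity server
```

With those assumptions the 3 year energy bill lands in the same ballpark as a commodity server's purchase price, which is the point of the quote: idle capacity is expensive.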

WORM hard drives - now a reality

Editor:- August 20, 2014 - From time to time I get an email from a new (to me) company which really grabs my attention. Here's one such which arrived this morning.

"We now have the WORM hard disk you refer to in your article (Introducing WORM Hard Disk Drives - February 28, 2005).

"It was developed for the Department of Justice, and is now in use, by GreenTec-USA, Inc. in conjunction with Seagate. Can we send you some information? Would love to hear from you!" - Bob Waligunda, VP of Sales at GreenTec-USA.

Editor's comments:- I haven't spoken to Bob yet - because of the time difference. But here's some info I got from GreenTec's web site:-
  • GreenTec WORM whitepaper (pdf) - "Organizations today have demanding needs to ensure that their sensitive data is protected. Considerable damage could be done if critical or sensitive files are deleted or altered either accidentally or intentionally"
The interesting thing for me is it shows that innovation in the hard drive market hasn't stopped completely. And GreenTec's 3TB (for now) WORM drives are also available as arrays in micro cloud blocks.

I had almost forgotten about my 9 year old WORM HDD (market needs this) article. I'll update it later with this note.

Linking this back to SSDs - several companies in recent quarters have announced physical write-disable switches in embedded SSDs. See also:- SSD security, military SSDs

Decloaking hidden segments in the enterprise

Editor:- May 28, 2014 - today published a new article - Decloaking hidden segments in the enterprise for rackmount SSDs

Who's got your keys?

Editor:- April 5, 2014 - "Think about it" says Chandar Venkataraman, Chief Product Officer, Druva - "If your service provider has access to your encryption keys, can you really say that your data is secure?"

That's just one of the thought provoking ideas in his new blog - 5 Things You Didn't Know About the Cloud

See also:- SSD security, SSD enterprise software

Coho Data now shipping 2U MicroArray hybrids

Editor:- March 6, 2014 - Coho Data today announced general availability of its first product - a 2U SSD ASAP called the DataStream (an SSDserver 4/E) - which integrates PCIe SSDs, hard drives and a server into a web scale expandable unit (using an internal 52 port 10GbE fabric switch) to implement what the company refers to as a "MicroArray" designed with the philosophy of "Turning Tiering Upside Down (pdf)" to deliver a base building block unit of 180K IOPS performance (4KB).

Editor's comments:- you may judge for yourself the lofty scale of Coho's ambitions by this market soothsayer quote which they integrated in the launch press release - "By 2017, Web-scale IT will be an architectural approach found operating in 50% of Global 2,000 enterprises."

See also:- SSD hybrid arrays, meet Ken - and the enterprise SSD software event horizon

Atlantis provides more evidence of the trend towards massively improved enterprise utilization enabled by SSD-aware software

Editor:- February 11, 2014 - Atlantis Computing today announced that the new "In-Memory Storage Technology" release of its storage virtualization software - called Atlantis ILIO USX - can significantly increase enterprise utilization by enabling users to deploy up to 5x more VMs on their existing storage.

See also:- ILIO USX faqs (pdf), enterprise utilization and the SSD event horizon, SSD ASAPs, SSD software

Permabit has shrunk data storage market by $300 million already

Editor:- September 30, 2013 - Permabit today announced that its flash and hard disk customers have shipped more than 1,000 arrays running its Albireo (dedupe, compression and efficient RAID) software in the past 6 months.

"We estimate that our partners have delivered an astonishing $300 million in data efficiency savings to their customers" said Tom Cook, CEO of Permabit who anticipates license shipments to double in the next 6 months.

See also:- SSD efficiency, new RAID in SSDs, SSD software

SolidFire - as an anti-jitter service in the cloud

Editor:- August 19, 2013 - SolidFire provides the underlying rackmount SSD support for a new SSD empowered cloud product Platform as a Service (PaaS) being offered by IT Solutions Now which I learned about in a blog by Sorab Ghaswalla on Software Tools Journal.

Editor's comments:- cloud companies - like the stars in the sky - are nearly numberless - however if you want to see a partial list of who they are - SolidFire's news page is cluttered with the names of cloud companies - and reads more like a set of audited customer accounts than a technology news forum - which can be off-putting - if like me - you're looking for SSD content rather than SSD investment fodder.

But although I couldn't find any mention of this particular story on my brief visit to their website this time around - I was reminded about an interesting observation which SolidFire had written about earlier (in February 2013) regarding the performance and QoS impacts that "Noisy Neighbors" can create in a shared storage infrastructure.
Their leading theme is cloud service providers - but this issue is also critical to almost any realistic deployments in an enterprise context - and is the implicit reason that many architects have preferred to isolate critical apps servers in the past - even within their own datacenters - rather than risk mixing them all up in pools.

In a cartoon (they call it an "infographic") - Noisy Neighbors in the Cloud (pdf) - SolidFire captures the essence of this performance randomizing problem - whose solution (you guessed it) is to use more (of their) SSDs.
See also:- bottlenecks and SSDs, can you trust SSD performance benchmarks?, SSD scalabilities and symmetries

AI in the cloud needs SSDs

September 28, 2012 - "Consumer products are moving more and more towards that touch of artificial intelligence and in particular speaking to your devices and having your voice sent off to the cloud, recognised and analysed on good computers there and transmitted back" - said Steve Wozniak Chief Scientist at Fusion-io in the interview / article - Data deluge - the need for speed

Amazon offers explicit SSD performance in the cloud

Editor:- July 19, 2012 - There are many ways SSDs can be used inside classic cloud storage services infrastructure:- to keep things running smoothly (even out IOPS), reduce running costs etc.

Amazon Web Services recently launched a new high(er) IOPS instance type for developers who explicitly want to access SSD like performance.

In 3 to 5 years time all enterprise storage infrastructure will be solid state - but due to economic necessities it will still be segmented into different types by speed and function - as I described in my SSD silos article.

I predict that when that happens - AWS's marketers may choose to describe its lowest speed storage as "HDD like" - even when it's SSD - in order to convey to customers what it's about. It takes a long time for people to let go of old ideas. Remember Virtual Tape Libraries?

"What scares me is when companies fall into the trap of trying to architect a single application to work across multiple different cloud providers. I understand why engineers are attracted to this.... Unfortunately, this effort eats into the productivity gains that compelled the organization to the cloud in the first place."
Stephen Orban, Global Head of Enterprise Strategy at Amazon Web Services in his blog - 3 Myths about Hybrid Architectures Using the Cloud (March 5, 2015)


Nutanix says Pure's CEO doesn't understand the disruptive magnitude of hyperconvergence
Editor:- December 6, 2016 - A new blog - Pure CEO disses Nutanix. OK, let's compare numbers - by Steve Kaplan, VP of Client Strategy - Nutanix starts out with the idea of comparing the revenue and customer acquisition metrics of Nutanix to another well known company founded in the same year - Pure Storage - whose financial reporting periods are the same.

But the blog quickly repositions to a market analysis of enterprise architecture generations and customer segmentation preferences (both of which are often poorly understood in the industry by senior managers in SSD companies).

Among other things Steve says...

"Pure Storage's CEO, Scott Dietzen isn't the only one at the storage manufacturer who doesn't grasp the magnitude of the disruption Nutanix has brought to the datacenter." the article

related reading:-
  • the SSD heresies - "Nowhere else in computer architecture will you get so many industry experts disagreeing on such fundamental questions."
  • Can you trust SSD market data? - "These are the 5 reasons why things go wrong in the SSD market data collection, interpretation, modeling and analysis business."


this way to the petabyte SSD
the Survivor Guide to Enterprise SSDs
The big market impact of SSD dark matter
What Killed The Storage Service Providers?
Introducing the concept of RAMClouds (pdf)
anatomy of a stalled online backup company acquisition
Flash SSDs replace HDDs at Amazon, Facebook, Dropbox
7 ways to classify where all SSDs will fit in the SSD datacenter
is data remanence in NVDIMMs a new risk factor?
maybe the risk was already there before with DRAM
"High-performance SSD-backed storage is becoming table stakes for a growing number of cloud providers."
Google Cloud tests out fast, high I/O SSD drives - by Barbara Darrow, Senior Writer at GigaOM (June 5, 2014)
At the time of writing this blog in 2012 I was thinking of the web scale companies, the cloud infrastructuralists, and the real time analytics jocks who would change retail, advertising, intelligence and all kinds of data upcycling leveraged activities which previously had been technically impossible to monetize because data processing was too slow and the reach of memory-like latencies was too small.
the big market impact of SSD dark matter
It's by no means inevitable that the biggest memory companies will go on to become the biggest SSD companies.
boom bust cycles in memory markets - any lessons for SSD?
Looking back at the online storage and cloud market

by Zsolt Kerekes, editor
This market has seen many ups and downs in the past 15 years.

The online backup market flared most brightly at the height of the dotcom boom crazy days in the late 1990s. That convinced me to create a dedicated page for this subject. You can see an archived copy of the online backup page circa 2000 - here. Back then - I called it "Edrives & web based storage" - because "online backup" hadn't yet become a standard term.

I was unconvinced about the business models for many of these companies - which mostly relied on unsustainable web advertising. I'd been making my living from the sustainable kind (of advertising) - and knew the difference.

Sure enough - this segment of the storage market got itself a bad reputation for vendor churn and undependability in the long term.

You can get a flavor of how the online backup industry changed (and our web site too) in the years which followed, by clicking these archived links:-

Now we've recently experienced another recession (caused by the credit crunch of 2008) and you've got to ask yourself this question...

If banks can fail - then why should you trust ANY online backup provider with your data?

The answer is - you shouldn't. Because history has shown these services can disappear overnight.

But on the other hand - there are many examples where online backup has helped customers survive floods, fires etc.

A pragmatic approach - would be to use 2 different types of offsite backup - which do not have common modes of failure due to sharing software or geography. That's the way ahead for this market.

Now we're seeing new trends in pricing flash arrays which don't even pretend that you can analyze and predict the benefits using technical models.
Exiting the Astrological Age of Enterprise SSD Pricing



published by ACSL