the big SSD ideas which emerged in 2016?
where are we heading with memory intensive systems and software?
Editor:- May 29, 2017 - A report on NoCamels.com says Network Appliance has agreed to buy Plexistor for $20 million.
Editor's comments:- Plexistor's claim to fame was Software-Defined Memory - a chip agnostic approach to the SCM DIMM wars and to SCM adoption in the enterprise. This acquisition will enable NetApp to play around with options on that adoption curve in speculative system offerings without risking too much wasted software in memory dead ends.
a new name in low latency SSD fabric software
March 8, 2017 - A new SSD software company - Excelero - has emerged from stealth today. Excelero - which describes itself as "a disruptor in software-defined block storage" - launched version 1.1 of its NVMesh® Server SAN software "for exceptional Flash performance for web and enterprise applications at any scale." The company was funded in part by Fusion-io's founder. ...read the press release
fabrics - companies and past mentions
Fabric and other SSD ideas which defined 2016
Speed and Strength Metaphors in SSD brands
a RAMdisk in flash?
Editor:- February 27, 2017 - The use of flash as a RAM tier was being talked about 5 years ago, and since then the market has got used to the idea. And as you'll know if you've seen the SSD news pages in recent times, there are many different offerings in the market - ranging from NVDIMMs to software which claims to work with any flash form factor.
How good are such systems? Well there are vendor benchmarks... but here's another way you might get insights. A new blog on StorageSearch.com takes an unusual approach - a probe into the future of the memory systems market. I stuck my neck out here when I said "this may be a stupid question but... have you thought of supporting a RAM disk emulation in your new flash-as-RAM solution?" ...read the article
Datrium celebrates one year of NVMe flash difference in its
open converged platform
Editor:- January 24, 2017 - A recent press release from Datrium - celebrating one year of supporting NVMe SSDs within its high availability open convergence server storage software (pdf) - discusses the performance limitations which are inherent in legacy rooted storage architectures in AFAs implemented with SAS or SATA SSDs, in comparison to native NVMe SSDs.
"The key benefit of NVMe drives - blistering performance - is unavailable on most storage arrays today for two reasons. First, an array or hyperconverged design cycle can only adopt new drive connectivity approaches at a certain rate. As a rigid, composed system, it takes time. Second, successful flash array vendors depend on data reduction to optimize pricing. This means the controller CPU must filter data inline, which adds delay. The benefits of NVMe are subsequently small because the gains over SAS links are bottlenecked by the CPU."
Editor's comments:- The message from the company seems to be that whereas modern flash storage systems have undeniably done a great job of reducing infrastructure costs (compared to old style HDD systems) there is still much more performance and utilization which can be extracted from COTS servers and SSDs when they're working in a modern architecture with modern software. See their 2 minute video for the key claimed benefits. The extent of this next level up in performance, utilization and efficiency (as an industry aspiration) was part of what I was hinting at in my 2013 article - meet Ken - and the enterprise SSD software event horizon.
Primary Data gets ready to expand sales
January 20, 2017 - Primary Data today announced that Robert Wilson has joined the company as its new Head of Sales. He previously held VP level sales roles elsewhere in the storage industry.
Wilson said - "With flash and cloud storage now common,
and only so much innovation ahead in appliances, many in the storage industry
are wondering what technology breakthrough is coming next."
tiering between memory systems layers - blog by Enmotus
December 8, 2016 - A new blog -
Tiering: the Future of Hyper Converged - by Adam Zagorski,
Marketing at Enmotus
- discusses how hyper-converged infrastructure has evolved along with the
associated impacts from data path latency and CPU overhead. Among other things
Adam notes that...
"Very soon we'll have HCI clusters with several tiers of storage.
In-memory databases, NVDIMM memory extensions and NVRamdisks, primary NVMe
ultrafast SSD storage and secondary bulk storage (initially HDD but giving way
beginning in 2017 to SSDs) will all be shareable across nodes. Auto-tiering
needs a good auto-tiering approach to be efficient, or else the overhead will
eat up performance."
(Auto-tuning SSD Accelerated Pools of storage)
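The auto-tiering idea Adam describes - promoting hot data up the latency hierarchy and demoting cold data down it - can be sketched in a few lines. This is a toy illustration only, not Enmotus's algorithm: the class name, the epoch-based rebalance and the count-decay heuristic are all my own assumptions.

```python
from collections import defaultdict

class AutoTierer:
    """Toy auto-tierer: promote frequently accessed blocks to the fast
    tier and demote cold ones, based on a decaying access count."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity   # blocks that fit on fast media
        self.counts = defaultdict(int)       # block -> accesses this epoch
        self.fast_tier = set()               # blocks currently on fast media

    def access(self, block):
        self.counts[block] += 1
        return "fast" if block in self.fast_tier else "slow"

    def rebalance(self):
        # Rank blocks by recent heat; the top N live on the fast tier.
        ranked = sorted(self.counts, key=self.counts.get, reverse=True)
        self.fast_tier = set(ranked[:self.fast_capacity])
        # Decay counts so stale heat fades between epochs.
        for b in self.counts:
            self.counts[b] //= 2

t = AutoTierer(fast_capacity=2)
for b in ["a", "a", "a", "b", "b", "c"]:
    t.access(b)
t.rebalance()
print(sorted(t.fast_tier))  # ['a', 'b'] - the two hottest blocks
```

Adam's point about overhead is visible even here: the bookkeeping (counts, ranking, migration) competes with the I/O path, so a poor tiering policy can eat the performance it was meant to deliver.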
a winter's tale of SSD market influences - new blog on StorageSearch.com
Editor:- November 15, 2016 - Recently I spoke with company President Charles Tsai. We talked about many changing influences in the SSD market. I thought you might be interested to see some of the things we spoke about in a new blog on StorageSearch.com - a winter's tale of SSD market influences - because it will give you an idea of how many strategic changes in the SSD market can now influence every business decision about what new products to create - even when those changing factors seem at first to be only loosely connected, like the flash controller, industrial SSD, SCM, software and enterprise rackmount SSD markets. But all those factors were entwined in the flow of this SSD conversation, which really started 2 years before. ...read the article
NVMe flash as RAM - new software from OmniTier
October 5, 2016 - OmniTier today announced the availability of its MemStac software, which for cloud workloads can shrink DRAM requirements by about 8x by using standard NVMe SSDs as caches for DRAM. MemStac supports up to 4TB cache capacity per server node at a fraction of the cost of pure DRAM-only solutions.
OmniTier says that SSD performance in a standard Intel dual-socket server is up to 3.7M operations per second (100% Get operations) and 3.2M operations per second (80% Get operations / 20% Set operations) for 100 byte records. These levels of performance are achieved with less than 1ms average latency. In addition, MemStac seamlessly delivers 10GbE network-limited throughput with typical workloads exceeding 250 bytes in average size, similar to DRAM-only open-source solutions.
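The flash-as-RAM pattern behind products like this - a small DRAM tier in front of a much larger SSD tier, with hot items promoted on access - can be sketched as a toy two-level key-value store. To be clear, this is my own minimal model, not MemStac's implementation; the class and tier names are assumptions, and a plain dict stands in for the NVMe device.

```python
from collections import OrderedDict

class TieredKV:
    """Toy flash-as-RAM store: a small DRAM tier (LRU ordered) in
    front of a larger dict standing in for an NVMe SSD tier."""

    def __init__(self, dram_slots):
        self.dram = OrderedDict()   # hot items, least-recently-used first
        self.flash = {}             # cold items ("SSD")
        self.dram_slots = dram_slots

    def set(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        if len(self.dram) > self.dram_slots:
            cold_key, cold_val = self.dram.popitem(last=False)
            self.flash[cold_key] = cold_val      # evict coldest to flash

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)           # refresh LRU position
            return self.dram[key]
        value = self.flash.pop(key)              # slower path: flash read
        self.set(key, value)                     # promote back into DRAM
        return value

kv = TieredKV(dram_slots=2)
for k in ("a", "b", "c"):
    kv.set(k, k.upper())
print("a" in kv.flash)   # True - "a" was evicted to the flash tier
print(kv.get("a"))       # A - promoted back to DRAM on access
```

The 8x DRAM-shrink claim rests on exactly this asymmetry: only the working set needs DRAM-speed service, so most capacity can sit on cheaper flash.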
the emerging market impact of hybrid DIMM aware software
September 27, 2016 - I've been asking people in the SSD industry to tell me what they think were the big SSD, storage and memory architecture ideas which emerged and became clearer in 2016. As you'd expect - software comes up a lot. Here's an interesting twist as it relates to SCM.
The President & CEO of BeSang said:-
"Storage Class Memory... As storage class memories
are emerging, the memory hierarchy will be changed. NOR-based NVDIMMs, such as
3D Super-NOR and 3D XPoint, will replace DRAM and SSD at the same time. Also,
software-based NVDIMM-P, such as HybriDIMM, will come to the
storage class memory market. Storage class memories mingle fast-but-expensive volatile and slow-but-inexpensive non-volatile memories together. As a result, it will significantly boost system performance at low cost and create huge..."
virtual latency secret
FlexiRemap software wins award at Flash Memory Summit
August 18, 2016 - AccelStor
announced it has
won the Best-of-Show Technology Innovation Award for its
FlexiRemap software at
the Flash Memory Summit 2016
in Santa Clara, California.
AccelStor's FlexiRemap software improves performance, cuts down on overhead, and extends SSD lifespan. The technology achieves sustained performance and reliability even in the random-access scenarios typical of enterprise storage needs. Thanks to the global wear-leveling algorithm, FlexiRemap arrays have at least 2x the lifespan compared to typical legacy RAID 5 flash systems.
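The core idea of global wear-leveling - steering every write to the least-worn physical block across the whole array rather than within one drive - can be sketched with a min-heap keyed on erase count. This is a generic illustration under my own assumptions, not FlexiRemap's actual algorithm (garbage collection and over-provisioning are elided).

```python
import heapq

class GlobalWearLeveler:
    """Toy global wear-leveler: every logical write lands on the
    least-worn physical block across the whole array, so erase counts
    stay even instead of hot spots burning out one region."""

    def __init__(self, num_blocks):
        # Min-heap of (erase_count, physical_block).
        self.pool = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.pool)
        self.map = {}   # logical address -> physical block

    def write(self, logical):
        # Take the least-worn block; the previously mapped block would
        # return to the pool after garbage collection (elided here).
        wear, block = heapq.heappop(self.pool)
        self.map[logical] = block
        heapq.heappush(self.pool, (wear + 1, block))
        return block

wl = GlobalWearLeveler(num_blocks=3)
blocks = [wl.write(addr) for addr in (10, 11, 12, 13, 14, 15)]
wear = sorted(w for w, _ in wl.pool)
print(wear)   # [2, 2, 2] - erases spread evenly across all blocks
```

Spreading erases evenly is what converts a per-drive endurance limit into an array-wide one, which is where lifespan multiples come from.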
Diablo gets more funding for Memory1 and DMX software
August 3, 2016 - Diablo announced it has secured $37 million across 2 phases of an oversubscribed Series funding round.
Editor's comments:- that's kind of interesting -
but much more interesting from my perspective was what I learned in a 1 hour
conversation with the company last week about the software for their Memory1
(flash as RAM) product.
Diablo's DMX software is barely mentioned in the funding PR above. I expected to see more on their web site about this. I also learned how Diablo handles the flash wear and endurance issues. These aspects were mysterious to me when the product was announced last year. But it's very straightforward. I'll write about them in an article later this week. Until then - if you're wondering - the best way to think about the caching and tiering side of things is that Diablo's software leverages DRAM on the motherboard. This DRAM (in another socket) must be present for every 1 or 2 Memory1 modules in the system. And in many respects it uses that DRAM and its own flash in a similar way to the early Fusion-io PCIe SSDs and some of the other tiering, caching products we've seen before.
Diablo's DMX operates in memory layers, and the company has also done machine learning of popular and proprietary apps it might work with, so that it understands the nature of data demand patterns and structures.
Diablo says that unlike NVMe and those other storage cache / tiering products, the benchmarks they've done with Memory1 show much more acceptable operation - because they don't have the same variability of latency which occurs when you go through storage stacks and related interfaces. There are always infrequent traffic related congestion and contention problems in any multi-tiered latency system - even in real physical ones. These rare clogging events accumulate into bigger latency numbers in storage interchanges.
The ability to understand data at the application level and move it in and out of DRAM and flash via the DRAM bus - with native custom silicon controller support for the memory movements - gets results which are on average several times better than the best PCIe based flash cache alternatives - as you'd expect. But it's the superiority of the worst case latencies which is the yea or nay selection point between DIMM based flash and other alternatives.
DMX also includes a QoS latency feature so that application developers can retain control of the data they care about - keeping it in DRAM without having to rely on caching intelligence. More from me on this and the endurance side of things later.
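That QoS pinning idea - letting the application exempt chosen data from automatic eviction - is worth a sketch. This is a toy model under my own assumptions, not Diablo's DMX API: an LRU cache where pinned keys are simply skipped when choosing an eviction victim.

```python
from collections import OrderedDict

class PinnableCache:
    """Toy DRAM cache with a QoS-style pin API: pinned keys are never
    evicted, so the application keeps latency-critical data in DRAM
    without trusting the automatic caching heuristics."""

    def __init__(self, slots):
        self.slots = slots
        self.data = OrderedDict()   # least-recently-used first
        self.pinned = set()

    def put(self, key, value, pin=False):
        if pin:
            self.pinned.add(key)
        self.data[key] = value
        self.data.move_to_end(key)
        while len(self.data) > self.slots:
            # Evict the least-recently-used *unpinned* key
            # (assumes at least one unpinned key exists).
            victim = next(k for k in self.data if k not in self.pinned)
            del self.data[victim]

t = PinnableCache(slots=2)
t.put("index", "hot", pin=True)
t.put("a", 1)
t.put("b", 2)          # "a" is evicted; "index" stays despite being older
print(list(t.data))    # ['index', 'b']
```

The design trade-off is the one the article hints at: pinning guarantees worst-case latency for chosen data, at the cost of shrinking the capacity left for the automatic cache to work with.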
memory intensive data architecture in a new family of
boxes from Symbolic IO
Editor:- May 25, 2016 - 1 petabyte usable storage in 2U, along with a flash backed RAM rich server family which uses patented CPU level aware cache-centric data reduction to deliver high compute performance, are among the new offerings announced today by Symbolic IO which has emerged from stealth mode.
The Founder & CEO of Symbolic IO said - "This industry hasn't really innovated in more than 20 years, even the latest offerings based on flash have limitations that cannot be overcome. Our goal at Symbolic IO was to completely redefine and rethink the way computing architectures work. We've completely changed how binary is handled and reinvented the way it's processed, which goes way beyond the industry's current excitement for hyper-conversion."
Editor's comments:- I haven't spoken with Symbolic IO but my first impression is that the company is in line with at least 3 strategic trends that you've been reading about on StorageSearch.com in recent years.
The company profile summarizes their capability like this... "Symbolic IO is the first computational defined storage solution solely focused on advanced computational algorithmic compute engine, which materializes and dematerializes data effectively becoming the fastest, most dense, portable and secure, media and hardware agnostic storage solution."
For more about the company's background see this article - Symbolic IO Rewrites Rules for Storage - on InformationWeek.
Revisiting Virtual Memory - read the book
April 25, 2016 - One of the documents I've spent a great deal of time reading recently is Revisiting Virtual Memory (pdf) - a PhD thesis written by Arkaprava Basu, now a researcher at AMD.
My search for such a document began when I was looking for examples of raw DRAM cache performance data to cite in my blog - loving reasons for fading out DRAM in the virtual memory slider mix. It was about a month after publishing my blog that I came across Arkaprava's "book" - which not only satisfied my original information gaps but also serves other educational needs too.
You can treat the first 1/3 or so as a modern refresher for DRAM architecture which also introduces the reader to various philosophies related to DRAM system design (optimization for power consumption rather than latency, for example). The work also includes detailed analysis of the relevance and efficiency of traditional cache techniques within the context of large in-memory applications. ...read the book (pdf)
PrimaryIO ships application aware FT caching
March 8, 2016 - PrimaryIO (which changed its name from CacheBox) announced the general availability of its Application Performance Acceleration V1.0 software for VMware vSphere 6.
PrimaryIO APA aggregates server-based flash storage across vSphere clusters as a cluster-wide resource, enabling all nodes in the cluster to leverage the flash caching benefits even though only a subset may have flash deployed. Through application awareness, PrimaryIO APA caches critical, latency-sensitive application IOs in order to boost overall application performance while enabling optimal utilization of data center server and networking resources.
APA supports write-around and write-back caching with full resilience in the face of node failures, since writes to cache are replicated to up to 2 nodes.
Editor's comments:- in a brief (pdf) about their technology - PrimaryIO describes how they use application awareness to intercept data request streams based on their "relative value and ability to accelerate workload performance." PrimaryIO says this is more efficient in its use of flash than traditional approaches and can get good results with a smaller amount of installed SSD capacity than other methods which don't discriminate so accurately.
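The two write policies mentioned above differ in where a write lands first. Here is a toy sketch of the distinction under my own assumptions (class and attribute names are mine, and the node replication that makes write-back safe is only noted in a comment):

```python
class CachedVolume:
    """Toy illustration of two caching write policies.
    write-around: writes bypass the cache, going straight to backing
    storage. write-back: writes land in cache and are flushed later
    (real systems replicate cached writes to survive node loss)."""

    def __init__(self, policy):
        self.policy = policy        # "write-around" or "write-back"
        self.cache = {}
        self.backing = {}
        self.dirty = set()          # cached writes not yet flushed

    def write(self, key, value):
        if self.policy == "write-around":
            self.backing[key] = value         # cache untouched
        else:                                 # write-back
            self.cache[key] = value
            self.dirty.add(key)               # flushed asynchronously

    def flush(self):
        for key in self.dirty:
            self.backing[key] = self.cache[key]
        self.dirty.clear()

wb = CachedVolume("write-back")
wb.write("x", 1)
print("x" in wb.backing)   # False - still only in cache (and its replicas)
wb.flush()
print("x" in wb.backing)   # True
```

Write-around keeps the cache clean for reads but makes writes pay full backing-store latency; write-back gives fast writes but needs the replication PrimaryIO describes to avoid losing dirty data when a node fails.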
worst case response times in DRAM arrays
March 1, 2016 - Do you know what the worst-case real-time response of your electronic system is?
One of the interesting trends in the computer market in the past 20 years is that although general purpose enterprise servers have got better in terms of throughput - most of them are now worse when it comes to latency.
It's easy to blame the processor designers and the storage systems - and those well known problems helped the SSD accelerator market grow to the level where things like PCIe SSDs and hybrid DIMMs have become part of the standard server toolset. But what about the memory? Main memory based on DRAM isn't as good as it used to be. The details are documented in a set of papers in my blog - loving reasons for fading out DRAM in the virtual memory slider mix.
NSF funds project to progress in-situ SSD processing
December 16, 2015 - NxGn Data today announced it has been awarded a Small Business Innovation Research (SBIR) Phase 1 grant (about $150K) from the National Science Foundation.
"We've made great strides in developing our fundamental SSD technology, with a working prototype (of in-situ SSD processing) now running in our lab," said Nader Salessi, CEO and founder of NxGn Data.
The grant application says - "This project explores the Big Data paradigm shift where processing capability is pushed as close to the data as possible. The in-situ processing technology pushes this concept to the absolute limit, by putting the computational capability directly into the storage itself and eliminating the need to move the data to main memory before processing."
Plexistor aims to bind factions in SSD DIMM wars
Editor:- December 15, 2015 - Plexistor (an SSD software company emerging from stealth in stages) announced today that it is on track for beta release of its Software-Defined Memory (SDM) platform for next-generation data centers in Q1 2016. The company says that SDM will support a wide range of memory and storage technologies such as DRAM and emerging nvm devices such as 3D XPoint, as well as traditional flash storage devices such as NVMe and NVMe over Fabric - enabling a scalable infrastructure to deliver persistent high capacity storage at near-memory speed.
See also the File System for Use with Emerging Non-Volatile Memories (pdf) - Plexistor's presentation at last summer's FMS - which summarizes the value proposition thus - "Application developers can focus on business logic, not storage".
Write Cache Integrity Myths - blog from Datalight
December 11, 2015 - Myth Busting: Using a Write Cache to Improve Performance Means Sacrificing Data Integrity - a recent blog from Datalight (which also includes a 2 minute video) - shows the value of having internal mechanisms and reference points in the file system to indicate that writes have completed.
transparent tiering between persistent memory and flash?
November 26, 2015 - "What should applications developers do about all the possible permutations (interface and memory technology) emerging in the market for persistent storage class memory?"
That question is posed by Nisha, VP Engineering at Parallel Machines, who goes on to discuss the technical design challenges and suggests strategies in a recent paper.
In order to be widely adopted, any general abstraction solution has to embrace the business ambition of delivering competitively useful performance. Nisha tackles that concern head on:- "Persistent memory can be mapped in multiple ways depending on the hardware. We need to ensure that each memory type is default mapped to the optimal model possible for its physical attach."
The paper compares and notes the performance boundary choke points of several popular interface and memory permutations (hybrid DIMM, server RAM) and suggests that transparent tiering between PM and flash is a viable software architecture approach which can deliver near optimal performance for local and remote PM. ...read the paper
Permabit shrinks data in new flash boxes from BiTMICRO
Editor:- October 20, 2015 - Permabit today announced that its inline dedupe and compression software is used in new white boxes from BiTMICRO.
the impact of software - raised to the power of 1, squared and
cubed on SSD future complexity
Editor:- October 16, 2015 - I partly blame punched tape porcupines and code monkeys for contributing to the complexity of new product permutations now being seeded in the SSD market with software, software squared and software cubed - in a new home page blog on the mouse site - the SSD Bookmarks - ver 2.0 (preview).
But it's not just their fault. Everyone's to blame (or to be congratulated). The only problem is whom do we trust to tell us what's going on?
"There are even more intervention opportunities for leveraging software (this must be software cubed) for efficiency, making friends with applications, shunting sacred storage and server precepts into retirement homes (sometimes with a virtual forklift so that the old time classic infrastructure doesn't even realize that it's been packed up and already moved aside into its own little object apartment) to make way for brash new SSD-everywhere gangs which now have the confidence and the money to know that the future of the data streets belongs to them (or their successors)." ...read the article
Radian's Symphonic - Most Innovative Software at FMS
August 18, 2015 - Radian Memory - which recently emerged from stealth mode - today announced it had received a best of show award at the Flash Memory Summit for "Most Innovative Software Technology".
This was for its
software product Symphonic -
designed for its new 2.5" NVMe SSDs - which replaces the traditional FTL
with a 3rd generation approach - described by the company as "Cooperative
Flash Management" - which partitions data movement responsibility
between the controller in the SSD and the host CPUs and which enables data
to be moved around the flash array under host control without needing to be
read back into main system memory.
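The key claim - data relocated within the flash array under host direction, without round-tripping through host memory - can be made concrete with a toy model. This is my own sketch of the general cooperative flash management concept, not Radian's Symphonic interface; the class, method names and transfer counter are assumptions for illustration.

```python
class CooperativeSSD:
    """Toy model of host/device cooperation: instead of the host
    reading a page into system RAM and writing it back out, it issues
    a move command and the drive relocates the data internally."""

    def __init__(self, pages):
        self.flash = [None] * pages
        self.host_transfers = 0      # pages that crossed the host bus

    def write(self, page, data):
        self.flash[page] = data
        self.host_transfers += 1

    def read(self, page):
        self.host_transfers += 1
        return self.flash[page]

    def move(self, src, dst):
        # Data is copied inside the drive; nothing crosses to the host.
        self.flash[dst], self.flash[src] = self.flash[src], None

ssd = CooperativeSSD(pages=8)
ssd.write(0, "blockA")
ssd.move(0, 5)               # host-directed relocation, zero host I/O
print(ssd.flash[5])          # blockA
print(ssd.host_transfers)    # 1 - only the original write
```

With a traditional FTL the same relocation (for garbage collection or tiering) would cost a read plus a write over the host interface; partitioning the responsibility lets the host decide *what* moves while the drive does the moving.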
Our products are intentionally sold software-free - says Savage IO
July 28, 2015 - "Our products are intentionally sold software-free, to further eliminate performance drains and costs caused by poor integration, vendor lock-in, rigidly defined management, and unjustifiable licensing schemes" says Savage IO of its SavageStor - a 4U server storage box with "more lanes of SAS than anyone else".
NexGen decouples from Fusion-io accelerator juice with NVMe
Editor:- June 30, 2015 - As previously signaled - NexGen Storage has decoupled itself from relying on SanDisk's PCIe SSD product line in its hybrid storage arrays, with the announcement today that NexGen has introduced NVMe readiness as an update in its software services. This paves the way for expanding the systems product line with a wider range of 3rd party internal SSD accelerators with different interfaces.
Hedvig has amassed $30 million to start fixing broken SDS market
Editor:- June 1, 2015 - Hedvig - which operates in the SDS market - today announced an $18 million Series B funding round (bringing the company's total funding to date up to $30 million).
Hedvig's founder - Avinash Lakshman -
who is credited with building some of the most successful distributed systems in
the world, including Amazon Dynamo, the foundation of the NoSQL movement, and
Apache Cassandra for Facebook said - "We've identified the potential in a
broken and fragmented storage market, and are not only looking to bring
software-defined storage mainstream, but fundamentally change how companies
store and manage data."
Caringo gets patent for adaptive power conservation in SDS pools
May 19, 2015 - Caringo today announced it has obtained a US patent for adaptive power conservation in storage clusters. The patented technology underpins its Darkive storage management service, which (since its introduction in 2010) actively manages the electrical power load of its server based storage pools according to anticipated needs.
"The access patterns
and retention requirements for enterprise data have changed considerably over
the last few years to a store-everything, always accessible approach and storage
must adapt," said Adrian
J Herrera, Caringo VP of Marketing. "We developed Darkive to help
organizations of any size extract every bit and watt of value while keeping
their data searchable, accessible, and protected."
new power fail safe file system for tiny memory IoT
May 5, 2015 - Datalight announced a preview version of Reliance Edge, a power fail-safe file system for FreeRTOS which allows developers building IoT devices to reliably store and quickly access data in embedded SSDs. It requires as little as 4KB of RAM and 11KB of code size.
"Creating a file system which met the high reliability standard set by our (high end) Reliance Nitro and could fit into tiny microcontroller based systems presented a challenge - and I love a challenge," said Jeremy Sherrill, architect of file systems for Datalight. "Reliance Edge offers a rich set of features in a highly efficient architecture."
Reliance Edge can work with a broad array of storage media including NOR and NAND flash, eMMC, SD/MMC, NVRAM, USB storage, and PATA SSDs. Datalight plans to release new pre-ported kits for other small-kernel OSes over the coming months.
software is key to enterprise flash consolidation
April 21, 2015 - In a new article today on StorageSearch.com I look at the drivers, mechanisms and routes towards consolidation in the enterprise SSD systems market, along with some other outrageous and dangerous ideas.
"Users now realize that in their own self interest they have much to gain from abstracting the benefits they get away from the diverse feature sets of any single supplier towards a minimalist set of common must-have features which will satisfy all their needs while giving them independence from failed or greedy suppliers." ...read the article
FalconStor shows why it has taken so many years to launch an
SSDcentric next software thing
Editor:- February 19, 2015 - You might think there are enough SDS companies already - but SSDcentric data architectures are pulling system solutions in many different directions - so until the dust settles and the landscape looks clearer - there are plenty of gaps for new companies to enter the market.
The most significant this week was FalconStor - who announced a new SSDcentric storage pool redeployment and management platform called FreeStor - which the company says works across legacy, modern and virtual environments.
The company says - "The heart of FreeStor is the Intelligent Abstraction layer. It's a virtual Rosetta Stone that allows data - in all its forms - to migrate to, from and across all platforms, be it physical or virtual."
They've posted a good video which describes it all. FalconStor's natural partners are enterprise SSD systems vendors and integrators who have good products but who don't have a complete (user environmentally rounded) software stack.
Editor's comments:- For 4 years FalconStor gave me the impression of a storage software company which didn't know what it was going to do with the SSD market - despite having a base of thousands of customers in the enterprise storage market. FalconStor's delay can now be explained. They were studying what needed to be done - and it took a lot of work.
If you want to understand who else is offering a product concept which is similar in vision to FalconStor's FreeStor - there is a comparison to be made, although due to a difference in ultimate scaling aspirations and markets I would say that FalconStor's product is lower end and currently more accessible. Part of the reason being that FalconStor already has a customer base for pre SSD era software - which they are hoping to convert incrementally.
$34 million funded SDS company Springpath emerges from stealth
February 18, 2015 - Springpath emerged from stealth with these related announcements:- a server based data platform priced from $4,000 per server per year; a distribution agreement with Tech Data, who will offer Springpath's software preloaded onto servers; and funding from investors Sequoia Capital, New Enterprise Associates (NEA), and Redpoint Ventures.
OCZ and Levyx aim to shrink server-counts and DRAM in
real-time big data analytics
Editor:- February 10, 2015 - OCZ and Levyx today announced a technological collaboration whereby the 2 companies will develop and validate a new type of flash as DRAM solution, which will be positioned as a competitive alternative to the DRAM rich server arrays used in many big-data real-time analytics environments.
"As demand for immediate I/O responses in Big Data environments continues to increase, our ultra-low latency software paired with high-performance SSDs represents a better and more cost-effective alternative to traditional scale-out architectures that rely heavily on DRAM-constrained systems," said Dr. Reza Sadri, CEO and co-Founder of Levyx Inc. "We are pleased to work with OCZ on this new usage model as our technology is specifically designed to leverage the latest in advanced SSD technologies and we'll utilize the 4500 (PCIe SSD) to deliver the enhanced performance that helps validate our technology."
Editor's later comments:- "retiring and retiering enterprise DRAM" was one of the big SSD ideas which emerged in 2015.
Primary Data - one of the best known enterprise software companies
in 2016-2017 - emerges from stealth
Editor:- November 19, 2014 - Primary Data - the most ambitious storage software startup I have ever encountered - today emerged from stealth mode - with 2 announcements. ...more in SSD news
The 1st announcement was that Steve Wozniak has joined the company as Chief Scientist. Wozniak - who cofounded Apple in 1976 - elevated the general visibility of fledgling Fusion-io when he joined that company in the Chief Scientist role. The impact of this was assessed and captured in a 2011 blog by Woody Hutsell - who at that time was working at erstwhile enterprise SSD competitor TMS. Woody wrote at that time (2011)...
"I used Google trends data to see if there was an inflection point for Fusion-IO and I found it. The inflection point was their hiring of Steve Wozniak. What a brilliant publicity move... I spent a lot of time trying to figure out how to create a similar event at TMS. I thought if we could hire "Elvis" we would have a chance."
(in November 2014) Primary Data's cofounder - Rick White - said...
"With Woz on the team along with Lance, we now have the band back together, and I'm amped to be reunited at Primary Data."
Enmotus FuzeDrive software - now available
November 4, 2014 - Micro-tiering within the server box - between the lowest possible latency persistent memory (such as flash backed DRAM DIMMs from Viking), then up a level to SATA SSDs and finally to hard drives - gives users materially different performance and cost characteristics to merely caching between those devices when they are used in a hybrid storage array.
That's the message behind the announcement about the general availability of the company's FuzeDrive server (SSD software) for Windows and Linux - in which (unlike simple server based cache solutions) FuzeDrive treats the SSD as primary storage, and so "all reads and writes to the hot data occur at full SSD speeds".
"SSDs are becoming interface limited in some cases" said Marshall Lee, CTO and co-founder of Enmotus. "As a result, newer classes of storage devices continue to appear that can take advantage of higher performance busses inside servers, NVDIMMs being a great example."
McObject expands reach of in memory database for serious embedded systems
Editor:- October 28, 2014 - First 2, then 3 and finally 4 interesting things caught my eye in the news about version 6.0 of eXtremeDB - an in-memory database system from McObject.
- Data compression. This release adds data compression for both in-memory and on-disk databases. Once upon a time compression was a value add feature in some products - but now in the SSD age, when compression is almost latency free, it has become a must-have on the feature list - especially for embedded systems.
- Avionics platform support. This upgrade adds compatibility with Wind River Systems' VxWorks 653 COTS platform for delivering safety-critical, integrated modular avionics applications.
- More flexible transaction scheduling. Applications using eXtremeDB's multiple-user, single-writer transaction manager can override the default FIFO scheduling policy within priority levels to favor either read-only or read-write transactions.
- Distributed query processing support. eXtremeDB partitions a database and distributes query processing across multiple servers, CPUs and/or CPU cores - which can accelerate performance.
"Demand for distributed query processing cuts across market segments, but is especially relevant to the automation and control field, where eXtremeDB is historically strong" said McObject CEO and co-founder Steve Graves.
Efficiency is important for web scale users - says Coho
October 9, 2014 - as a file system - a web scale case study - a new blog by Andy Warfield, cofounder and CTO - Coho Data - made very interesting reading for me - as much for revealing the authoritative approach taken in Andy's systematic analysis as for the object of his discussion (Facebook's storage architecture).
It reveals useful insights into the architectural thinking and value judgments of Coho's technology - and is not simply another retelling of the Facebook infrastructure story. Depending on your interests you may get different things out of it - because it's rich in raw enterprise ideas related to dark matter users. All of which makes it hard to pick out any single quote. But here are 2.
- re the match between enterprise products and user needs
Warfield says - "In the past, enterprise hardware has had a pretty hands-off relationship with the vendor that sells it and the development team that builds it once it's been sold. The result is that systems evolve slowly, and must be built for the general case, with little understanding of the actual workloads that run on them."
There are many more I
could have chosen. ...
read the article
- re efficiency
Warfield says - "Efficiency is important. As a rough approximation, a
server in your datacenter costs as much to power and cool over 3 years as it
does to buy up front. It is important to get every ounce of utility that you
can out of it while it is in production."
We need new software abstractions to efficiently handle
persistent enterprise memory - says SanDisk
Editor:- October 3,
2014 - New enterprise software abstractions are needed in order to tame
those unruly developments in flash.
And laying the
framework for those ideas - along with some practical suggestions for where
applicable solutions might be coming from - is the theme of a recent blog -
Emergence of Software-Defined Memory - written by Nisha Talagala,
Fellow at SanDisk
- who (among other things) says:-
"We're seeing excitement build
for a new class of memory:- persistent memory - which has the persistence
capabilities of storage and access performance similar to memory.
"Given this richness of media technologies, we now have the
ability to create systems and data center solutions which combine a variety of
memory types to accelerate applications, reduce power, improve server
consolidation, and more.
"We believe these trends will drive a
new set of software abstractions for these systems which will emerge as
software-defined memory - a software driven approach to optimizing memory
of all types in the data center." ...read
See also:- are you ready to
rethink enterprise DRAM architecture?
Microsoft's SSD-aware VMs - discussed on InfoQ
September 24, 2014 - There are now so many
software companies that keeping track of them all is a little like tallying
2.5" SSD makers -
a tedious chore - which in most cases isn't worth the bother.
But SSD-centric software is
important - and some vendors are more important than others - despite having
been latecomers in the market.
One such company is Microsoft.
A news story today - Microsoft
Azure Joins SSD Storage Bandwagon on InfoQ
- discusses Microsoft's D-Series SSD-aware VMs - and places this in the context
of other products from well known sources.
The blog's author - Janakiram MSV says "One
important aspect of SSD based VMs on Azure is that they are not persistent. Data
stored on these volumes cannot survive the crash or termination of virtual
machines. This is different from both Amazon EC2 and Google Compute Engine,
which offer persistent SSDs. On Azure, customers have to ensure that the data
stored on the SSD disks is constantly backed up to Azure blob storage or other..."
HGST announces 2nd generation clustering software for FlashMAX
Editor:- September 9, 2014 - HGST today announced
a new, improved version of the
clustering capability previously available in the
PCIe SSD product line
acquired last year from Virident.
The software allows clustering of up to 128 servers and 16 PCIe storage devices to deliver
one or more shared volumes of high performance flash storage with a total usable
capacity of more than 38TB.
HGST says its Virident HA provides a "high-throughput,
low-latency synchronous replication across servers for data residing on FlashMAX
PCIe devices. If the primary server fails, the secondary server can
automatically start a standby copy of your application using the secondary
replica of the data."
For more details see -
Virident Software 2.0 (pdf)
Editor's comments:- This
capability had already been demonstrated last year - and
ESG reported on the
technology in January.
But at that time - the clustering product - called vShare -
was restricted to a small number of servers - and the data access fabric was
restricted to Infiniband.
With the rev 2.0 software - the number of connected devices has
increased - and users also have the lower cost option of using
Ethernet as an alternative.
SanDisk extends the reach of its SSD software platform
July 8, 2014 - 2 weeks ago SanDisk announced
a new enterprise software product -
ZetaScale - designed
to support large in-memory intensive applications.
I delayed writing
about it at the time - until I learned more. But now I think it could be one of
the most significant SSD software products launched in 2014 - because of the
freedom it will give big memory customers (in the next 2-3 years) about how
they navigate their tactical choices of populating their apps servers with
low latency flash SSD hardware.
what is ZetaScale?
SanDisk says - "ZetaScale software's highly parallelized code supports high
throughput for flash I/O, even for small objects, and optimizes the use of CPU
cores, DRAM, and flash to maximize application throughput. Applications that
have been flash-optimized through the use of ZetaScale can achieve performance
levels close to in-memory DRAM performance."
ZetaScale is SSD
agnostic. "ZetaScale is compatible with any brand of PCIe, SAS, SATA, DIMM
or NVMe connected flash storage device, providing customers the ability to
choose, avoiding hardware vendor lock-in."
I was curious to see
how this new product - which is a toolkit for deploying flash with tiering to
DRAM as a new memory type - fitted in with other products - from SanDisk and
from other vendors which also operate in this "flash as a big
memory alternative to DRAM" application space.
So I asked
SanDisk some questions - and got some interesting answers.
- Where does the ZetaScale product come from?
ZetaScale builds upon our
acquisition technology for additional use cases and flash deployment models.
ZetaScale allows any developer to better tune their applications for
flash-based environments, no matter which vendors hardware or interface is being
leveraged. Thus, ZetaScale represents a major step forward in our vision of the
flash-transformed data center - empowering software developers to scale and
enhance their applications to meet today's big data and real-time analytics
demands, while lowering TCO.
- How much commonality is there between ZetaScale and the FlashSoft product?
ZetaScale and FlashSoft software are complementary.
FlashSoft provides direct-attached flash-based
caching for NAS and SAN devices, with the goal of improving performance for
unmodified applications running on a server.
ZetaScale software provides a flash and multi-core optimization library that applications
can integrate to allow them to achieve 3x the performance
improvement from flash alone.
Both ZetaScale and FlashSoft software
provide their benefits in bare metal and virtualized environments.
- Does ZetaScale support ULLtraDIMM?
Yes. The software is compatible with any brand of PCIe, SAS,
SATA, DIMM or NVMe connected flash device, enabling users to avoid vendor
lock-in. However, the software does not get embedded into any SSD.
Editor's comments:- overall I'd have to rate SanDisk's
ZetaScale as one of the most significant SSD software products launched in 2014.
- How would ZetaScale fit into a future SanDisk product line which also included Fusion-io?
SanDisk cannot comment on open M&A activity. As usual, all
planning surrounding the product portfolio and roadmap will begin following the
close of the acquisition.
From a technical point of view - it's a toolkit which will enable
architects of SSD apps servers with very large in memory databases to
decouple themselves from deep dives into specific low latency SSD products.
Instead of gambling on whether they should exploit particular features which
come with particular low latency SSDs - they can instead use ZetaScale as the
lowest level of flash which their apps talk to. And that will change markets.
And although SanDisk didn't want to comment on how this would be positioned against
Fusion-io's VSL - it's
undeniable that in some applications it does compete today.
I wouldn't be surprised to find - a year after the acquisition (if it goes ahead) - that
ZetaScale could be useful as a way of introducing new customers to the
ioMemory hardware environment - without those customers having to make a hard
commitment to the rest of Fusion-io's software.
And - looking at the
SSD market - it also means that SanDisk software might be a safer standard
for future customers of any DDR4 or HMC SSDs which might emerge from competitor
Micron which - unlike
SanDisk - hasn't yet demonstrated any strong ambitions in the
SSD software platform market.
say hello to CacheIO
Editor:- June 10, 2014 - CacheIO today
announced results of a
benchmark which is
described by their collaborator Orange
Silicon Valley (a telco) as - "One of the top tpm benchmark results
accelerating low cost iSCSI SATA storage."
CacheIO says that the 2
million tpm benchmark on CacheIO accelerated commodity servers and storage
shows that users can deploy its flash cache to accelerate their database
performance without replacing or disrupting their existing servers and storage.
Editor's comments:- The only reason I mention this otherwise
me-too sounding benchmark is because although I've known about CacheIO and
what they've been doing with various organizations in the broadcast and telco
markets for over a year - I didn't list them on StorageSearch.com before.
That was partly because they didn't want me to name the customers they were working
with at that time - but also because with
SSD caching companies
becoming almost as numerous as tv stations on a satellite dish - I wanted to
wait and see if they would be worth a repeat viewing. (And now I think they are.)
Decloaking hidden segments in the enterprise
May 28, 2014 - StorageSearch.com
today published a new article -
hidden segments in the enterprise for rackmount SSDs
Some of the
world's leading SSD marketers have confided in me they know from
their own customer anecdotes that there are many segments for enterprise
flash arrays which aren't listed or even hinted at in standard models of
the enterprise market.
Many of these missing market segments don't
even have names.
The glut of new flavors in SSD software - mostly
boldly promising new architectures but sometimes with a better plea
bargain for legacy - is one of the biggest segment multiplying factors.
Fusion-io demonstrates life and capacity amplification effects
of combining 2 software ingredients
Editor:- April 2, 2014 - In a
demonstration this week Fusion-io showed the
combined advantages of using NVM compression in conjunction with its Atomic
Writes APIs in SkySQL environments. The results indicate that:-
- 2x as much data can be stored on the same flash media - while
giving similar performance and latency to the legacy uncompressed case.
Editor's comments:- compression has been used as a secret invisible
helper inside enterprise flash SSD systems (and as a way to speed up
performance and housekeeping functions such as garbage collection) starting in
2007 with MFT
flash management software from
- using compression and the new APIs - reduces write traffic and improves
endurance-limited operating life by a factor of 4x.
From 2009 onwards -
invisible compression speedup and
boosting became widely adopted in the industry - as they were both intrinsic
parts of every SSD
controller shipped by SandForce.
WhipTail was the first
enterprise SSD array vendor I knew of to offer inline compression as
an explicit feature which users could turn on or off - to increase usable
virtual capacity - and James
Candelaria (who at that time was WhipTail's CTO) mentioned this as an
attribute in his interview
for StorageSearch.com readers in September 2010.
However, in a later
conversation (January 2012) with Cameron Pforr
(who at that time was WhipTail's President and CFO) - Cameron told me they
were no longer emphasizing compression because it led to latencies which were
too long to be competitive - and instead they were focusing on performance.
Since those days many leading SSD array makers have used compression
to offer tactical advantages in their products - particularly in cost sensitive
markets like iSCSI. And
compression and more efficient software are just some of many ingredients I
identified in last year's article
thinking inside the SSD box.
To sum up - Fusion-io's
demonstration this week simply confirms what anyone who knows their product line
well would have already expected.
compression - editor mentions on StorageSearch.com
VMware enters the SSD market
March 6, 2014 - With
the launch of its
Virtual SAN - VMware has at last
joined the crowded SSD
software ecosystem as a lead SSD player rather than (as before) in a
subordinate role (as the
dancing partner - a bit like dancing with your uncle or aunt at the wedding
disco) in the many
acceleration compatibility stories narrated by other SSD companies.
Virtual SAN version 1.0 is an SSD
ASAP (hybrid virtualizing appliance) - which supports 3-8 server nodes. The
company says that "support for more than 8 will come later." ...read the
Editor's comments:- first impressions? It's
late and doesn't look great (in features). But it will probably be deemed
adequate for many users starting down this road.
Before dismissing it
entirely (as some commentators and competitors have already done) let's
remember that when LSI
entered the SSD market in
January 2010 -
it was the "163rd company to enter the SSD market". And look
where they are now.
Being late to market doesn't count as a mortal sin in the SSD marketing lexicon
right now because first
mover advantage (pdf) assumptions aren't valid in this phase of the market.
more comments re VSAN
"customers who had the opportunity to participate in the VSAN beta told us that
in most cases, (our) Maxta MxSP performs better" - said competitor Yoram Novick,
founder Maxta in his blog
Storage the Devil is in the Details
"...proud of how the team has outperformed expectations. Today we're announcing GA
support for 32 nodes. That means that Virtual SAN can now scale from a
modest 3 node remote office, to a multi-petabyte, mega-IOPS monster just
by adding more server resources... and ...VSAN isn't bolted on, it's built in."
- says Ben Fathi,
VMware - in his blog -
SAN: Powerfully Simple and Simply Powerful
old software will slow new silicon in memory done by SSDs
February 5, 2014 - In a new blog -
Vistas For Persistent Memory - Tom Coughlin,
Associates reminds us that in extremely fast SSDs - lowering the
hardware latency is just one part of the design solution.
Tom says -
"An important element in using persistent memory in the PCIe and memory
bus of computers is the creation of software programs that take advantage of the
speed and low latency of nonvolatile memory. With the increase in performance
that new interfaces allow, software built around slower storage technologies
becomes a significant issue preventing getting the full performance from a
persistent memory system."
Tom's article includes a graph which
shows the increasing proportion of the read access time taken up by system
software in successively faster hardware interface generations. ...read
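The effect in Tom's graph falls out of simple arithmetic: hold the system software overhead roughly constant while device access time shrinks with each interface generation, and software's share of total read latency balloons. The figures below are invented round numbers, not data from the article:

```python
# Illustrative arithmetic: a roughly constant software overhead against
# ever faster devices. The latency figures are invented round numbers,
# not measurements from Coughlin's article.
SOFTWARE_US = 5.0  # assumed fixed OS/driver overhead per read, microseconds

def software_share(device_us: float) -> float:
    """Fraction of total read latency spent in system software."""
    return SOFTWARE_US / (SOFTWARE_US + device_us)

for name, device_us in [("hard drive", 5000.0), ("SATA SSD", 100.0),
                        ("PCIe SSD", 20.0), ("persistent memory", 1.0)]:
    print(f"{name}: software = {software_share(device_us):.0%} of read latency")
```

With a hard drive the software overhead is noise; with persistent memory the same overhead dominates the read - which is exactly why old software will slow new silicon.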
Editor's comments:- living with the old
while planning for a new type of SSD-aware computer architecture is complicated.
Just how complicated that picture can be... you may
glimpse in a classic far reaching paper (about abstracting application
transactional semantics in usefully different ways when viewed from their
interactions with the flash translation layer) - called
Operations via the Flash Translation Layer (pdf) by Gary Orenstein,
SNIA proposes new standard for virtualizing SSD implemented memory
Editor:- January 27, 2014 - It's years since the first
SSD software horses
were seen to be leaving the stables - but last week - a
standards ORG - SNIA - made
an effort to bolt these doors with the release of version 1 of what it hopes
will be a new standard called the NVM
Programming Model (pdf).
Editor's comments:- Currently if
you use SSDs as memory using
PCIe SSDs from
Virident, or if you
plan to use memory channel SSDs from
SanDisk - then you're
potentially looking at working in 3 different software environments.
The viable permutations of hardware and software compatibility levels shrink for
users when they converge at a popular market application level such as
virtual desktops - but explode into crazy unsupportability for 3rd party
software developers as they try to step back from proprietary APIs and hang
onto more general hooks in operating systems which were never designed around
the core class of capabilities offered by low latency SSDs.
Whether the long term solution to the
state of ad hoc SSD software lies in adapting current OS's - or maybe in
bypassing old OS's entirely and starting again with cloud level service-like
abstractions in virtualized servers - is interesting to speculate about.
In the meantime software developers have to work with existing de-facto software
environments (to generate revenue) and also keep an eye on future standards in
the hope that standardization will reduce their costs (one day in the remote
future). The SSD software platform and the optimum level of
engagement for vendors is a lottery which will suck billions more dollars from
VCs before it is resolved. And I think that market dominance will be a bigger
part of the solution than a set of committee based standards.
Maxta joins the elite set of enterprise contenders who are
vying to own the next generation SSD-centric platform
November 13, 2013 - This week Maxta completed its
staged emergence from stealth mode and announced
its first product - the Maxta Storage
Platform - a hypervisor-agnostic software platform for repurposing arrays
of standard servers (populated with cheap standard
SATA SSDs and
hard drives) into
scalable enterprise class apps servers in which the global CPU and storage
assets become available as an easily managed meta resource with optimized
performance, cost and resilience.
Editor's comments:- I spoke
last week to Yoram
Novick about this new product, his company and what customers have been
doing with it.
Before you dip into my bullet points below - here's a
header note of orientation.
We've all seen new companies launching
SSD software and pitching for the enterprise with products which are little
more than spruced up versions of "hello SSD world!"
A year later - some essential compatibility features get added, and later still
some degree of better or worse enterprise readiness.
It didn't use to matter much if everything wasn't in place at the start - or
if these new companies didn't have sustainable business plans - because there
was an appetite for acquiring such startups.
From my perspective I'd say that many companies have
regarded the launch of their SSD software as simply an invitation to attract
users who could provide the market knowledge they needed to flesh out the product.
In these important respects Maxta is different because:-
prior to this week's product launch they've already had a group of 10 or so
advanced customers in different industries who have been using the product and
also the enterprise features - like manageability, scalability, resilience and
data integrity are already in the product today.
Maxta's technology and
business architects have done enterprise storage software before - as you can
see from their linkedin
bios. Yoram told me that he and
Amar Rao (Maxta's VP of
Business Development ) used to compete with each other in earlier storage
startups and the companies which had acquired them.
So it soon became
clear to me in the details I saw and asked about (not all of which are listed
here) that a lot of careful planning and up front thinking and problem solving
has already guided the "launch".
Here's some of what I learned.
- market scope
MxSP is the software glue for enabling easily managed
SSD enhanced storage
pools in VM environments which scale from the ROBO up to the datacenter. The
base level configuration which provides HA features starts as low as 3 nodes.
This is attractive for enterprises with remote offices because it's a small
footprint. But it's also attractive from a running cost point of view too -
Yoram said because of the special low price point for associated software.
Maxta has a customer who started with these 3 node configurations for remote offices
but liked them so much that their bigger arrays are now built mostly from
arrays of 3 too.
- the problem it solves
The evolution of enterprise CPU and storage
resources have followed different tracks in the past decade - leaving users in
the position today where it's easy and economic to deploy more CPUs but
relatively awkward, expensive or error prone to map these CPU resources into
virtual storage which scales with the same ease and which takes advantage of
the low cost and high performance of commodity enterprise SSDs.
- the storage pool
Maxta's architecture aggregates the SSDs and HDDs
in the server pool into a single globally accessible, fault tolerant SSD
accelerated virtual storage pool.
Within Maxta's software - all the
SSDs are collected together as 1 super SSD resource and another big resource
is created from the HDDs.
Internally Maxta's software knows that SATA
SSDs and SATA HDDs have different personalities for example:-
- HDDs have low cost per unit of capacity but slow random read latency
- SSDs have fast random read, and fast sequential write
Not every node in the array has to have an SSD or HDD inside - but it's not sensible
to have a system which doesn't have any SSDs at all.
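A placement decision along the lines of those two "personalities" can be caricatured in a few lines. The thresholds and names here are invented for illustration - Maxta hasn't published its algorithm:

```python
# Caricature of a tier placement decision built on the SSD/HDD
# "personalities" described above. Thresholds are invented - this is
# not Maxta's actual placement algorithm.
def place(size_kb: int, reads_per_hour: int, random_access: bool) -> str:
    """Pick a tier for a chunk of data in the pooled SSD/HDD store."""
    if random_access and reads_per_hour > 10:
        return "SSD"   # hot random reads suit the SSD personality
    if size_kb >= 512 and not random_access:
        return "HDD"   # big sequential streams suit cheap HDD capacity
    return "SSD" if reads_per_hour > 10 else "HDD"
```

The value of pooling is that this judgment is made continuously by the software across the whole array, rather than once, manually, per volume.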
- fault tolerance, data integrity, VM snapshots, cloning etc
they're all in the product now.
- software? - it's a virtual world view
Everything about MxSP is
virtual. And it doesn't require new management tools. The operational aspects
will clarify in customer case studies and white papers.
- Maxta's business plan
I told Yoram how disillusioned I had become
about the sustainability and viability of new storage software companies -
given my experience of having tracked over 1,000 storage companies and
terminating the list of
gone-away and acquired companies in a single decade at the 500 company
level. (That's before I started the gone-away SSD companies list BTW - which is
well on its way to 100.)
Jaundiced by that experience it seems to me
that over 95% of storage software startups don't have much of a clue about how
to translate their IP assets into any sustainable business value and are mostly
founded at the outset with the fervent desire that before the VC and IPO
money run out - they will get acquired. So I asked him if Maxta would be any
different to that?
Yoram told me some of what Maxta has been doing in
laying the foundations for growing the business to become a significant storage
platform - (in his words) a significant software company like Microsoft or...
I won't say more here because this is too long already -
despite having not even mentioned most of the notes I made during our conversation.
Looking back on this nearly a week later (and having seen
some of their documents before) I'm left with the impression that maybe indeed
Yoram is right and his company could become not only one of the rare storage
software companies which are sustainable as a business. But going further than
that - maybe too it has the makings of a company which could be one of the
five to ten companies which will dominate the SSD software platform market of the future.
Who are the other contenders?
I've given you
lists before - but this list is evolving because 4 of the 10 companies were
still in stealth mode last time I did that.
If you're interested in
the SSD enhanced storage platform idea (and who wouldn't be) then another good
place to look is the list of competitors which I've compiled in
Maxta's profile page.
new blog by PernixData describes the intermediate states of play
for its HA clustered write acceleration SSD cache
November 5, 2013 - In a clustered,
SSD ASAP VM
environment which supports both read and write acceleration it's essential to
know the detailed policies of any products you're considering - to see if the
consequences - on data vulnerability and performance comply with strategies
which are acceptable for your own intended uses.
In a new blog -
Tolerant Write Acceleration by Frank Denneman
Technology Evangelist at PernixData
describes in a rarely seen level of detail the various states which his
company's FVP goes through when it recognizes that a fault has occurred in
either server or flash. And the blog describes the temporary consequences - such
as loss of acceleration - which occur until replacement hardware is pulled in
and configured automatically by the system software.
Stating the design
principles of this product - Frank Denneman says - "Data loss needs to be
avoided at all times, therefore the FVP platform is designed from the ground up
to provide data consistency and availability. By replicating write data to
neighboring flash devices data loss caused by host or component failure is
prevented. Due to the clustered nature of the platform FVP is capable to keep
the state between the write data on the source and replica hosts consistent and
reduce the required space to a minimum without taxing the network connection too much."
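The core guarantee Frank describes - no acknowledgement until the write exists on two hosts' flash - can be sketched like this. All names are invented; this is not FVP's implementation:

```python
# Sketch of fault tolerant write acceleration: a write is staged on
# local flash AND synchronously replicated to a neighboring host's
# flash before it is acknowledged, so a single host or flash failure
# cannot lose acknowledged data. Names invented - not FVP internals.
class ReplicatedWriteCache:
    def __init__(self, local_flash: dict, peer_flash: dict):
        self.local = local_flash   # this host's flash device
        self.peer = peer_flash     # a neighboring host's flash device

    def write(self, block: int, data: bytes) -> bool:
        self.local[block] = data   # stage on local flash
        self.peer[block] = data    # synchronous replica on the peer
        return True                # only now is the write acknowledged

    @staticmethod
    def recover(surviving_copy: dict) -> dict:
        """After a host or flash fault the surviving replica is authoritative."""
        return dict(surviving_copy)
```

The cost of the guarantee is the synchronous network hop on every write - which is why the detailed failure-state policies (and the temporary loss of acceleration during rebuilds) matter so much when evaluating such products.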
ASAPs - auto tiering / caching appliances
Permabit has shrunk the data storage market by $300 million
Editor:- September 30, 2013 - Permabit today announced that its SSD
and hard disk customers have shipped more than 1,000 arrays running its
(data efficiency and RAID) software in the past 6 months.
"We estimate that our
partners have delivered an astonishing $300 million in data efficiency savings
to their customers" said Tom Cook, CEO of Permabit
who anticipates license shipments to double in the next 6 months.
Proximal Data announces AutoCache version 2
August 26, 2013 - Proximal Data announced
the release of version 2.0 of its
AutoCache (SSD ASAP software).
Pricing starts at $999 per host for flash caches less than 500GB. The company
has been demonstrating the new version working with
PCIe SSDs from
Micron at VMworld.
Enmotus demos FuzeDrive hybrid array software
August 13, 2013 - Enmotus
announced that it
is demonstrating its FuzeDrive
(hybrid SSD ASAP)
solutions (with Toshiba
SSDs inside) at the Flash Memory Summit.
"While helping accelerate early adoption
of SSDs, today's caching solutions don't always provide the results users
expect. FuzeDrive avoids using traditional caching techniques, and instead
borrows its concepts from intelligent real time virtualization, data movement
and storage pooling techniques typically found in larger 'big iron' enterprise
systems," said Andy
Mills, CEO and Co-founder of Enmotus.
how new SSD software gets things done faster
July 29, 2013 - "One of the ironies of legacy systems software running in
flash systems is the way that the data weaves through layers of fossilized
unreality where emulation is stacked on emulation." - from the news page
Atomic Writes, and a faster way
for the Princess to get her shoes
Do you have impure thoughts about deduping SSDs?
March 28, 2013 - What comes to your mind when you think about deduping SSDs? The
theoretical ratio? - x2, x5, x10...
Or maybe you groan? - It's too
messy to manage and even if capacity gets better, something else gets worse
- so let's just forget the idea...
A new blog -
the SSD Dedupe Ticker - by Pure Storage
looks at the state of customer reality in this aspect of SSD array
technology and comments on the variations you can get according to the type of
app and the way of doing the dedupe.
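The mechanics behind those varying ratios are easy to show in miniature: fingerprint fixed-size blocks and store each unique block only once. Real arrays use far more sophisticated (often variable-length) schemes, so treat this as the idea only:

```python
# Miniature fixed-block dedupe: fingerprint each block and count how
# many unique blocks actually need storing. Real SSD arrays use more
# sophisticated variable-length schemes - this just shows the idea.
import hashlib

def dedupe_ratio(data: bytes, block_size: int = 4096) -> float:
    """Logical blocks divided by unique blocks - the advertised 'x' factor."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(block).digest() for block in blocks}
    return len(blocks) / len(unique)
```

Ten near-identical VM images dedupe spectacularly; encrypted or already-compressed data barely dedupes at all - which is exactly why the achievable ratio depends on the app.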
Among other things the article
also looks at the biggie question - of performance impact - answering the
author's rhetorical question - "why hasn't deduplication taken the primary
storage world by storm like it has the
backup world?" ...read
Nimbus brings flash SMART plus stats to SSD rackmounts
March 25, 2013 - Nimbus
Data Systems today announced
new software APIs which support its proprietary
HALO OS based family
of rackmount SSDs
- and report on hundreds of real-time and historical metrics such as:-
flash endurance, capacity utilization, latency, power consumption, deduplication
rates, and overall system health. Another new feature is that sys admins can
monitor their Nimbus
SSD arrays via new apps on Android / Apple phones and tablets.
The CEO and founder of Nimbus Data said the new software framework would enable
cloud architects and enterprise customers to gain greater insight into their
flash storage by viewing internal aspects of their flash storage which
mattered to them - rather than simply relying on benchmark indicators which
have been cherry picked by vendors or reviewers.
Software - a new reason to reconsider Intel's server SSDs
February 13, 2013 - Intel announced
that in the next 30 days it will ship a Linux version of the SSD caching
software - based on IP from its acquisition of
NEVEX last August. The
products have been rebranded as
CAS (Cache Acceleration Software).
Editor's comments:- I
would categorize Intel's current generation of enterprise SSD solutions
(which includes the same old indifferent SSDs working with the new CAS software)
as being in the medium to fast-enough performance range.
Typical customers might be end users who have never used SSD acceleration before - or
users with apps which don't need the higher speeds offered by competing SSD
bundled drive / module packages from
OCZ - and customers who
don't want to do their caching via dedicated rackmount based products from
the dozens of other vendors listed in the
SSD ASAPs directory.
market segment addressed by these new Intel products is the early
majority of enterprise SSD adopters - who will be reassured by the
perceived safety of buying into the dangerous world of solid state storage
acceleration from a value based brand.
I spoke about the new CAS
software to Intel product manager Andrew Flint
who cofounded NEVEX and I
learned some useful things about the product.
The first question I
asked was - how many PCIe SSDs can the CAS product support in a single server?
And were there any graphs showing how performance drops off or is maintained
when you do that.
The answer was - this info isn't publicly available
right now. Although it may be in the future.
That's when I concluded
that Intel CAS (married to current generation Intel SSDs) isn't a fast
product - and is not in the kind of performance league where a user would
seriously worry about this type of scalability.
Intel's ideal end-user customers right now for CAS are
people who have been using no SSD acceleration at all coupled with hard drive
arrays. That performance
silo could change - with faster Intel SSDs in the future - and isn't due
to limiting characteristics in the software.
I asked - Does it support
3rd party SSDs?
I was told - the standard release only supports Intel
SSDs. But there's nothing in principle to prevent it being used with other SSDs
using the open source release of the software.
The product is a read
cache. I was told that it makes very good use of whatever RAM is in the server
to optimize both read and write performance. However, my view is that as Intel
SSDs aren't fast - this is somewhat academic.
I asked about the time
constants which are analyzed by the caching software - and learned that -
depending on the app - the data usage period which is analyzed goes up to days.
(Generally in this type of product longer is better - and when you go up from
milli-seconds and seconds to minutes, hours and days - you have the potential to
get better caching results.)
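Why a longer analysis window helps can be shown with a toy admission policy: a block earns a cache slot once it has been accessed twice within the window. The names and threshold are invented - this is not Intel CAS's actual algorithm:

```python
# Toy cache admission policy: a block earns a cache slot after 2
# accesses inside the sliding analysis window. A window of days spots
# daily access patterns that a window of seconds would miss entirely.
# Invented for illustration - not Intel CAS's actual algorithm.
from collections import deque

class WindowedAdmission:
    def __init__(self, window_seconds: float, threshold: int = 2):
        self.window = window_seconds
        self.threshold = threshold
        self.history = {}  # block id -> deque of recent access times

    def accessed(self, block: int, now: float) -> bool:
        """Record an access; True means the block now deserves caching."""
        times = self.history.setdefault(block, deque())
        times.append(now)
        while times and now - times[0] > self.window:
            times.popleft()  # forget accesses older than the window
        return len(times) >= self.threshold
```

With a one-second window, a block read every half hour never qualifies; with a window of hours or days it qualifies on its second read - which is the potential gain from analyzing longer time constants.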
I learned that Intel CAS isn't written
around the data structure or interface - and is hardware agnostic. Users can
tell the software which apps they want to cache - via a control panel. This is
very useful in environments where a single server is running a mix of apps -
some of which are critical (in performance needs) while others are not.
I asked - does the CAS have to have advance knowledge of the app? - Is it
optimized for a preset list of apps?
I was told - No. It will work
just as well for - what I called - dark matter software - which might be a
proprietary app which no one else knew about.
I asked if Intel collects
stats from the general population of installed servers which use the software? -
in order to improve tuning algorithms...
I was told - No. The
optimizations (data eviction probability rates) are done based on what is
learned on the customer's own server and private data - and the factory shipped
software. There isn't a wider intelligence learning or gathering or snooping function.
I learned that a special feature of this Intel CAS release
is the ability to share cache resources with a remote SSD. The data stays hot
and doesn't have to be recreated when different virtual machines are accessing
this type of resource.
Overall I came away with a good impression of
the CAS software and how well the NEVEX technology idea has been assimilated
into Intel's SSD business.
It will undoubtedly help Intel sell more
SSDs to people who have never used enterprise SSDs before - and maybe also to
people with low end apps who have used SSD acceleration before but whose
first choice of SSDs wouldn't otherwise have been Intel.
aligning database block sizes with SSDs
February 5, 2013 - I was only saying to someone yesterday that I've had
emails from readers who are designing
software for SSDs who
- having researched the subject of
flash etc - then spent
too much time over-worrying about internal SSD hardware details that they
really shouldn't be worrying about - because by the time they learn about it -
that type of hardware issue is ancient history.
By a curious
coincidence today I came across a recent blog by Chas. Dye at Pure Storage -
DON'T Fiddle with Your Database Block Size! - which also warns about this issue.
Chas says - "At Pure Storage, we believe that a factor
that should never influence the block size decision is your storage subsystem."
Editor's comments:- I'd certainly agree that trying to slavishly make your data
structures look like something you've read about which might be inside an
SSD controller is
probably a waste of time - because unless you know the SSD designer you don't
really know what's going on - and the abstraction you read about in some web
site is only a small part of the picture. If an
SSD is so sensitive to
the data you hit it with - it's not the SSD you should have bought in the first place.
Enmotus demos its SSD ASAP technology
November 27, 2012 - Enmotus
is demonstrating its auto-tiering software - which it calls
MicroTiering technology (pdf) - for the first time in public this week at
the Server Design Summit.
OCZ's new VXL software release includes fault tolerant support
for arrays of PCIe SSDs
Editor:- October 23, 2012 - OCZ today announced
a new version (1.2) of its VXL
cache and virtualization software - which provides high availability,
synchronous replication and enhanced VM performance across arrays of the
company's Z-Drive R4 PCIe SSDs.
The company says this assures that host-based flash is
treated as a continuously available storage resource across virtualized clusters
and yields no data loss and no VM downtime even during complete server failures.
"By combining the power of storage virtualization and
PCIe flash caching, and by working centrally with the hypervisor rather than
with each local VM, we have developed a solution that takes full advantage of
flash without losing any of the benefits associated with virtualization,"
said Dr. Allon Cohen,
VP of Software and Solutions, OCZ. "VXL's ability to transparently
distribute flash resources across virtualized environments provides IT
professionals with a simple to implement solution..."
AMD will rebrand Dataram's RAMDisk software
September 6, 2012 - Dataram announced
it will develop a version of its RAMDisk software which will be rebranded
by AMD in Q4 under the name of Radeon
RAMDisk and will target Windows market gaming enthusiasts seeking (up to 5x)
faster performance when used with enough memory.
AutoCache for PCIe SSDs
Editor:- July 23, 2012 - Proximal Data announced
immediate availability of its first product - AutoCache - an
SSD ASAP - designed
to work with PCIe SSDs. The software
(for cache sizes less than 500GB) reduces bottlenecks in virtualized servers to
increase VM density, efficiency and performance. The company says it can
increase VM density up to 3x with absolutely no impact on IT operations.
Editor's comments:- here are some questions I asked about the
new product - and the answers I got from Rich Pappas,
Proximal's VP of sales and business development.
Editor:- How long
does it take for the algorithms to reach peak efficiency?
Pappas:- It varies by workload, but typically it takes about 15
minutes for the cache to warm to reach peak efficiency.
Editor:- Is the caching only on reads, or is it effective on writes too?
Pappas:- AutoCache will only cache reads, but by virtue of relieving the backend
datastore from read traffic, we have actually seen overall write performance
improvements as well. This effect is also dependent on the workload.
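The effect Pappas describes - write latency improving even though only reads are cached - follows from simple queueing arithmetic on the shared backend. A rough sketch (the IOPS figures and the M/M/1-style model are my own illustrative assumptions, not AutoCache measurements):

```python
def avg_latency_ms(read_iops, write_iops, device_iops):
    # M/M/1-style approximation: latency = service_time / (1 - utilization).
    # As offered load approaches the device's capacity, latency blows up;
    # diverting reads to a cache lowers utilization for writes too.
    util = (read_iops + write_iops) / device_iops
    assert util < 1.0, "backend saturated"
    service_time_ms = 1000.0 / device_iops
    return service_time_ms / (1.0 - util)

# Backend capable of 10,000 IOPS carrying 6,000 read + 2,000 write IOPS:
before = avg_latency_ms(6000, 2000, 10000)
# Same write load after the read traffic is absorbed by the flash cache:
after = avg_latency_ms(0, 2000, 10000)
print(f"write latency before: {before:.3f} ms, after: {after:.3f} ms")
```

The absolute numbers are fictional; the point is that removing reads from the shared queue lowers utilization, and latency is highly nonlinear in utilization.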
Amazon offers explicit SSD performance in the cloud
July 19, 2012 - There are many ways SSDs can be used inside
classic cloud storage
services infrastructure:- to keep things running smoothly (even out
IOPS), reduce running costs etc. Amazon
Web Services recently launched a new high(er) IOPS instance type for
developers who explicitly want to access SSD like performance.
In 3 to 5 years time all enterprise storage infrastructure will be solid state -
but due to economic necessities it will still be segmented into different types
by speed and function - as I described in my
SSD silos article -
even when it's all solid state.
I predict that when that happens -
AWS's marketers may choose to describe its lowest speed storage as "HDD
like" - even when it's SSD - in order to convey to customers what it's
about. It takes a long time for people to let go of old ideas. Remember
Virtual Tape Libraries?
Nutanix announces a new NFS for PCIe SSD accelerated CPUs
June 12, 2012 - Nutanix
announced the general availability of NDFS (Nutanix Distributed File
System), a bold new distributed filesystem that has been optimized to leverage
localized low latency PCIe
SSDs such as those from Fusion-io.
By shifting the NFS datapath away from the network directly onto the VMware vSphere
host, NDFS bypasses network communications that have historically been fraught
with multiple high-latency hops between top-of-rack and end-of-row switches.
Nutanix accelerates both reads and writes for any workload.
Reliability and availability are achieved by data mirroring across high-speed 10GbE networks.
Editor's comments:- Nutanix is in the
SSD ASAP market -
equivalency architecture integrated in the OS. The company says their
architecture "collapses compute and storage into a single tier." You
can get the general idea from their
|STORAGEsearch is published by
| Spellerbyte's software factory |
|In the past we've always
expected the data capacity of memory systems (mainly DRAM) to be much smaller
than the capacity of all the other attached storage in the same data processing
|after AFA? -
cloud adapted memory? |
|Getting acquainted with the
needs of new big data apps |
|Editor:- February 13, 2017 - The nature of
demands on storage and big memory systems has been changing.|
A new article about new storage applications by Nisha Talagala,
VP Engineering at Parallel
Machines provides a strategic overview of the raw characteristics of
dataflows which occur in new apps which involve advanced analytics,
machine learning and deep learning.
It describes how these new
trends differ to legacy enterprise storage patterns and discusses the
convergence of RDBMS and analytics towards continuous streams of enquiries.
And it shows why and where such new demands can only be satisfied by large
capacity persistent memory systems.
|Among the many interesting observations:-
- Quality of service is different in the new apps.
- Random access is rare. Instead the data access patterns are heavily patterned
and initiated by operations in some sort of array or matrix.
The article concludes "Opportunities exist to significantly improve storage and memory
for these use cases by understanding and exploiting their priorities and
non-priorities for data." ...read
- Correctness is hard to measure.
And determinism and repeatability
are not always present for streaming data. Because for example micro batch
processing can produce different results depending on arrival time versus event
time. (Computing the right answer too late is the wrong answer.)
SSD software news
where are we
heading with memory intensive systems?
trust SSD market data?
where are we now
with SSD software?
how fast can your SSD
hidden segments in the enterprise for SSD boxes
from the enterprise SSD software horizon
|It sounds simple enough...
New Dynasty is a software environment and architecture which is planned at the
outset to operate with SSDs. But adding SSD software into the mix brings its
own multiplication factors.
What does a server node look like? How is it clustered or scaled? Is
the server node part of the storage? Is the server node a building block for
all the storage? Where should the storage live? How should it be tiered?
hidden segments in the enterprise for rackmount SSDs|
|"why are so many
companies piling into the SSD market - when even the leading enterprise
companies haven't demonstrated sustainable business models yet?"|
|hostage to the
fortunes of SSD|
| what shoes does the
Princess need now?|
|Editor:- July 29, 2013 - One of the
latency reducing tricks in a world where every SSD vendor has access to the same
flash memory and
interface chips and choice
is the applications magnifying power of
SSD software. |
the way that new SSD software gets things done faster is to avoid doing
some things at all - by carefully discriminating between - what needs to be
done - compared to what would normally get done in blind obedience to
One of the ironies of legacy systems software running in
flash systems is the way that the data weaves through layers of fossilized
unreality where emulation is stacked on emulation - and hardwired into
the software and data flow logic are the remembered
once-deemed-to-be-efficient solutions to data flow control problems whose
origins are now almost forgotten.
So the SSD emulates a hard drive.
And the hard drive emulates memory.
And it gets worse.
The fetching and prefetching and polite but useless flurries of activity which
happen behind the scenes make it appear more like a bunch of courtiers in a
fairy tale palace reacting to this simple request.
The Princess needs shoes.
What shoes? What color? What style? What for?
She hasn't said yet - just get as many shoes as you can carry and be quick about it!
Yet despite all this background mayhem the application - somehow -
still runs faster on SSDs than on the old hardware. (And the Princess has never
been seen in public without wearing appropriate footwear.)
Another way to save time (improve latency) is to say - what if instead of just
speeding up all the tangled processes of emulating a hard drive emulating
memory and worrying about all the old fossilized limits of packet sizes and
flow control in drives and interface cards which no longer exist except in
museums but which have been preserved in legacy software - we instead
make an effort to write some new software which knows it's operating in a flash
world and doesn't have to recite old HDD spells to charm the data?
What if the Princess knows where the shoe room is - and rather than wait -
she's going to get the shoes for herself?
The implications of these
what-if? results (for SSD software) are easy to anticipate and we've seen what
happens when these ideas have
found their way
into SSD benchmarks but
it still takes time for these new ideas to work their way into standard software.
And if the Princess changes her mind between the time she
sets off to the shoe room and when she gets there - she's still going to get the
shoes she wants quicker than
if she asked her maid.
All of which is a preamble to say that Fusion-io last week announced
that its Atomic Writes API contributed for standardization to the T10 SCSI
Storage Interfaces Technical Committee is now in use in mainstream MySQL
databases MariaDB 5.5.31 and Percona Server 5.5.31.
Princesses prefer not to be kept waiting.
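The appeal of atomic writes is easy to quantify. Without an atomic multi-block write guarantee, a database like MySQL must protect itself against torn pages by writing each dirty page twice (InnoDB's doublewrite buffer); with the guarantee, once is enough. A back-of-envelope sketch (the batch size below is an illustrative assumption, not a figure from the announcement):

```python
def device_writes(dirty_pages, atomic_writes):
    # Doublewrite protection: each page goes first to a doublewrite
    # area, then to its final location - 2 device writes per page.
    # With an atomic write guarantee the first copy is unnecessary.
    return dirty_pages if atomic_writes else 2 * dirty_pages

batch = 64  # an illustrative flush batch of dirty pages
print(device_writes(batch, atomic_writes=False))  # legacy doublewrite path
print(device_writes(batch, atomic_writes=True))   # Atomic Writes API path
```

Halving the write traffic improves latency directly and, on flash, also stretches endurance - the same "avoid doing some things at all" principle described above.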
|"SSD is going down! -
We're going down!"|
If you've ever watched the movie - Black Hawk
Down - there's a memorable scene in which...
sudden power loss|
|If you've seen or read -
The Hobbit - then you'll be familiar with the concept of the riddle game.
Something similar is playing out now in the enterprise flash array market.
The setting? I forgot to mention this.
The hero - a mythical hobbit-like creature called "User" is
trapped in a high gravity well / force-field - just outside the entrance to a
cave in which are stored great treasures.
| playing the SSD box
inside the box|
|Editor:- May 29, 2013 - If you're an enterprise
user who is already sold on the idea of using more SSDs - what could be
better than a great new SSD drive?|
If you're an SSD
vendor looking for the magic formula to open up vast new untapped markets
for SSDs - what kind of solution do you need to offer to attract enterprises
who aren't at the sharp end of the performance pain curve, are content with the
speed they get from HDDs and who aren't even looking at SSDs for their network storage?
These problems have been preoccupying the SSD industry's
smartest thinkers for years.
And their answer to both questions is
the same. (Although details vary).
It's a new type of SSD box.
A new generation of enterprise SSD rackmounts is breaking all the
rules which previously constrained price, performance and reliability. The
sum impact of cleverly designed SSD arrays is systems which are many times
more competitive than you would imagine from any tear-down analysis of the
components. New thinking about rackmount SSDs is explored in the new home page
blog on StorageSearch.com -
thinking inside the box.
|"The impact from RAID rebuilds becomes compounded with long rebuild times incurred by
multi-terabyte drives. Since traditional RAID rebuilds entirely into a new spare
drive, there is a massive bottleneck of the write speed of that single drive
combined with the read bottleneck of the few other drives in the RAID set."|
CEO - SolidFire
- in his recent
blog - Say
farewell to RAID storage (March 14, 2013).|
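The single-spare write bottleneck the blog describes puts a hard floor under rebuild time: the spare must absorb the failed drive's entire capacity at its sustained write speed. A quick sketch (the capacity and speed figures are my own illustrative assumptions, not numbers from the blog):

```python
def min_rebuild_hours(capacity_tb, sustained_write_mb_s):
    # Traditional RAID rebuilds the whole failed drive onto one spare,
    # so rebuild time is bounded below by capacity / write speed -
    # regardless of how fast the rest of the array can read.
    capacity_mb = capacity_tb * 1_000_000
    return capacity_mb / sustained_write_mb_s / 3600

# A 4TB spare written at a sustained 150MB/s:
print(f"{min_rebuild_hours(4, 150):.1f} hours")
```

And the floor scales linearly with capacity, which is exactly why multi-terabyte drives made the problem compound.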
RAID & SSD
|the Modern Era of SSDs |
|Editor:- January 2, 2013 - My recent home page
Transitions in SSD - mentions some of the key changes in the SSD market
which took hold in recent quarters - but as we're starting another new
calendar year in SSD - I want to say more about the context here.|
in a market which appears to be so fast moving as the SSD market - where hot new
SSD companies can enter the
top SSD companies list
(ranked by search) within weeks of exiting stealth mode, and some new
SSD companies are
acquired within a few quarters of launching their first product - it can
still take years before new technologies which excite technologists and
investors are adopted by more than 10% of SSD users.
And there are strategic multi-year big changes and transitions which are sometimes hard to
pin down to a single year. For example the transition in the enterprise SSD
market from RAM
to 98% flash - which took 8 years.
Although it's easy to
recognize the start of new technology changes - it's harder to be so precise
about big market shifts - because those - by their very nature - occur only
when enough people get hold of a new way of doing things and change their buying
Looking at the SSD market - 2013 now clearly marks the 10th anniversary of a
distinct market period which I now think of as - the Modern Era of SSDs.
What do I mean by the Modern Era of SSDs?
It's when SSDs
changed from being a niche tactical technology which satisfied the needs of
some markets (ruggedized military / industrial storage and next generation
server acceleration at any cost) to a time when the market advance of SSDs as a
significant well known core market within the computer industry became a
historical inevitability - and when the only serious technology which could
displace an SSD from its market role was another SSD.
Although products which we would recognize as enterprise SSDs were shipping for several
years before 2003 - it was in that year, 2003 - when there was enough confidence
in the minds of enough people in the SSD market that the future of SSDs could be
much bigger (100x bigger) and different to what had happened before.
It wasn't simply my publication of
an article at the time
which explained why this could happen - nor simply the immediately post
publication discussions I had with SSD industry leaders at the time - nor indeed
in later years when founders and managers of new SSD companies kindly
told me that some of their thinking about the possibilities for the SSD
market had been influenced by those earlier articles on StorageSearch.com
It's just as much the case that the alternative futures which could have knocked
the SSD market off-course (such as faster CPU clock rates,
hard drives or faster
optical storage) didn't materialize.
The year after year "no-shows" by SSD's past
phantom demons were just as important as the new SSD technologies which did put
in an appearance.
Today it's clear to anyone looking seriously at
the data economy - the SSD market is here to stay and has its sights set on
being at the center of your future hardware and infrastructure decision making.
Are there any clues yet to big upcoming changes in SSD market thinking?
Can I say anything
at all useful at this stage about what the 2nd decade of the modern era of SSDs
will be like?
I think it will be the time when a critical mass of SSD
users become more sophisticated in their understanding and use of different
types of SSDs - and when each part of the SSD market becomes less generalized
and more focused.
It's not just about the
SSD software, and
it's not just about the
SSD chip technologies.
These simply outline possibilities. What's important - and what will become even
clearer - is the dividing lines and colors of application specific SSDs.
Application specific enterprise SSDs are a technology trend which started
shipping more than 3 years ago. But - as I said above - markets happen when
enough people have decided to make them happen - and not simply because
pioneering products are available.
|"In some ways, blocks
lost due to media corruption present a problem similar to recovering deleted
files. If it is detected quickly enough, user analysis can be done on the
cyclical journal file, and this might help determine the previous state of the
file system metadata. Information about the previous state can then be used to
create a replacement for that block, effectively restoring a file."|
CRCs are important - blog by Thom Denholm Datalight (January
|In October 2002 - StorageSearch.com's
editor talked about the role of software versus human-ware in enterprise hot
"Until the storage management software
you run in your organization is intelligent enough to learn by itself what kinds
of applications you're running, and optimize the characteristics of your
different types of storage devices, your ability to make the best use out of new
storage technologies such as SSDs will be limited by your own technical skills
and the amount of work and effort you are prepared to put into solving your own
performance and resource utilization problems."
|Ancient storage software
management inhibits roadmap to $5 billion enterprise SSD market -
StorageSearch.com's news page blog (October 2002)|
|In November 2002
- Bill Gates, talking about Tablet PCs, said:- "There are also a
lot of peripherals that need to improve here. ...Eventually even the so-called
disks will come along and not only will we have the mechanical disks going
down to 1.8 inch but some
kind of SSD... will be part of different Tablet PCs."|
|"In May 2003 - Imperial
Technology launched the world's first SSD tuning software tool called -
WhatsHot SSD - which analyzed real-time file usage on the SAN to identify
hot-files to place in SSD."|
|"In May 2004 - the
SPARC Product Directory published an article -
Why Sun Should
Acquire an SSD Company - which argued that integrating SSDs into Sun's
Solaris OS and servers would result in the fastest database servers and more
than make up for speed deficiencies in its SPARC processors."|
|In November 2006
- Microsoft announced business availability of its new Vista operating
system - loudly heralded as being the first PC market OS to include SSD-aware
support and native SSD cache management.
Vista (whether for SSDs or HDDs) proved to be so good that for
years after its launch millions of professional PC users upgraded back to XP.
|"In August 2007 -
EasyCo launched its "Managed Flash Technology" software to
enable enterprise grade RAID-5 arrays built from consumer grade flash
SSDs. MFT boosted SSD writes while also improving endurance..."|
SSD history - 2007
|"In September 2009
- Dataram launched the XcelaSAN - a fast 2U rackmount
SSD ASAP (auto
accelerating appliance) which automatically identified hotspots to relocate
critical data. The company said the XcelaSAN would automatically learn and self
optimize during the 1st few hours of operation..."|
SSD history - 2009
|In November 2009 -
Google opened its doors to developers who wanted to work with
Chrome OS - a new operating
system for netbooks.|
In the opening video of the
OS blog we learned that the architects of the new OS were "obsessed
with speed". And the new netbook OS was designed from the ground up to
support only flash
SSDs as the default mass storage.
Google said - there is no room
in this OS for outmoded 50 year old
hard disk technology.
|how fast can your
SSD run backwards?|
|SSDs are complex devices and there's a
lot of mysterious behavior which isn't fully revealed by benchmarks, datasheets
and whitepapers. |
Underlying all the important aspects of SSD behavior
are the intrinsic technologies and architecture inside the SSD.