Editor:- June 21, 2011 - IO Turbine recently published a set of guidance notes - which illustrate the company's own particular views about the best ways to accelerate virtualized servers with SSDs.
5 Mistakes to Avoid when trying to solve
I/O Bottlenecks in Virtualized Servers

One of the biggest problems in virtualized computing is I/O performance.

Increasingly powerful multi-core servers have the CPU horsepower to host dozens of virtual machines, but I/O capability has not kept pace with this mammoth increase in computing power. The aggregate demands of all the virtual machines create a bottleneck that blocks the full consolidation and cost savings promised by server virtualization.

Flash-based solid state disk provides an inflection point in server storage with the potential to solve the I/O bottleneck issue for virtualized infrastructures. But to take full advantage of the performance potential of SSDs, a different approach to I/O is required.

To date, the cost of using SSDs has been too high and the utilization rate too low to make SSDs economically viable in virtualized environments. Companies have pursued a laundry list of techniques without success, searching for a method to simply and efficiently deploy SSDs in VMware and other virtual infrastructures.

Here are 5 common mistakes to avoid when trying to solve virtualization-imposed I/O bottlenecks:

1. Using locally attached Flash SSDs as block devices to provide high IOPS storage

Using flash SSDs in this configuration loses vMotion functionality and the ability to migrate live virtual machines. It also requires management-intensive sizing of SSDs for each virtual machine and reconfiguration of the application and primary storage. Flash capacity dedicated to a specific machine is wasted as VMs start and stop, and every new SSD deployment requires a lot of manual intervention.

2. Adding high-performance hard disk drives and extra spindles to increase IOPS

This option requires some or all drives in the storage array to be upgraded to expensive 15K Fibre Channel or SAS drives. Additionally, hard disk arrays are typically over-provisioned with unwanted capacity in an attempt to wring out extra IOPS, making it a poor choice in terms of return on investment to solve I/O bottlenecks.
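A back-of-envelope calculation shows why this approach scales so poorly. The figures below (roughly 180 random IOPS for a 15K RPM drive, 50,000 for one enterprise flash SSD) are illustrative assumptions of the era, not vendor specs:

```python
# Back-of-envelope comparison: how many 15K spindles match one flash SSD
# for a random-I/O target. All performance figures are illustrative assumptions.

HDD_15K_IOPS = 180        # assumed random IOPS for one 15K RPM FC/SAS drive
HDD_CAPACITY_GB = 300     # assumed capacity of one such drive
TARGET_IOPS = 50_000      # assumed aggregate demand (about one flash SSD's worth)

drives_needed = -(-TARGET_IOPS // HDD_15K_IOPS)   # ceiling division -> 278 drives
stranded_tb = drives_needed * HDD_CAPACITY_GB / 1000

print(f"15K drives needed to match one SSD: {drives_needed}")
print(f"Capacity bought just for IOPS: {stranded_tb:.1f} TB")
```

Most of that capacity is over-provisioned purely to buy spindle count, which is the poor-ROI problem the paragraph above describes.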

3. Adding Flash as special primary storage using an SSD array

This option also requires the intervention of storage administrators to specify which applications use SSD storage and then tune the system to take advantage of the SSDs' performance potential. The I/O performance improvements can easily be offset by the higher administrative overhead needed to make the system run properly. While many enterprise-class storage providers offer automatic tiering with data migration to and from SSD storage, the migration typically takes place well after the need for I/O acceleration has passed. And implementing SSDs within the primary storage array does not eliminate the network latency incurred between the application and the SAN or NAS storage - which negates the low-latency advantage of SSDs.
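The latency point can be made with simple arithmetic. The numbers below are illustrative assumptions (a flash media read of ~100 µs, a SAN round trip of several hundred µs), not measurements:

```python
# Latency budget for a read served by an SSD inside a SAN array vs. server-local flash.
# All figures are illustrative assumptions.

flash_read_us = 100       # assumed flash media read latency
san_round_trip_us = 700   # assumed network + HBA + array controller overhead
local_bus_us = 20         # assumed local PCIe/SAS attach overhead

array_ssd_total = flash_read_us + san_round_trip_us   # 800 us per read
local_ssd_total = flash_read_us + local_bus_us        # 120 us per read

print(f"SSD behind the SAN: {array_ssd_total} us per read")
print(f"Server-local SSD:   {local_ssd_total} us per read")
```

With assumptions like these the network path dominates the total - which is the sense in which putting SSDs behind the array "negates" their low-latency advantage.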

4. Not running I/O-intensive applications on a virtualized host

One sure way to avoid I/O bottlenecks in virtualized environments is to restrict the applications that run on virtual machines to those that don't stress the I/O limits of the system. But this again adds to management overhead - forcing administrators to carefully evaluate applications based on their I/O requirements - and offsets the server consolidation and cost savings generated by server virtualization.

5. Adding more Fibre Channel SAN or IP bandwidth

This increases hardware costs without solving the problem. Increasing bandwidth between the virtual machines and storage widens the data pipe but doesn't make it faster, and it is of limited value without also increasing IOPS at the storage layer. This approach addresses only part of the problem - with all of the cost.
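Simple arithmetic shows why bandwidth is rarely the limiter for small random I/O. The workload and link figures below are illustrative assumptions:

```python
# Throughput consumed by random I/O = IOPS x block size.
# Shows that a busy random workload barely touches even a modest pipe.

block_size_kb = 4
iops = 10_000                     # assumed aggregate random IOPS from the VMs

throughput_mb_s = iops * block_size_kb / 1024     # ~39 MB/s
fc_4g_mb_s = 400                                  # approx. usable bandwidth of a 4Gb FC link

print(f"Workload needs ~{throughput_mb_s:.0f} MB/s")
print(f"4Gb FC link utilization: {throughput_mb_s / fc_4g_mb_s:.0%}")
# Upgrading to 8Gb FC halves the utilization figure - but delivers zero extra IOPS.
```

At roughly 10% utilization of a single 4Gb link, doubling the bandwidth changes nothing for the random-I/O bottleneck at the storage layer.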

The virtualized server I/O problem will only get more acute as enterprises continue to pack more virtual machines onto servers with faster processors and higher core counts. Flash SSDs have largely bridged the historical performance gap between processor and storage speed, but VMware and other virtualization software solutions have yet to incorporate tools and techniques which let virtual machines use SSDs efficiently and easily - without manual intervention in the workflow - and allow I/O-intensive applications to be deployed without recreating traditional bottlenecks.

IO Turbine believes that what is needed is a different approach to the I/O bottleneck problem: a new class of solutions that fully leverages the I/O potential of flash SSDs - without the need to overprovision storage or add more network bandwidth hardware - while delivering the CAPEX and OPEX benefits of server virtualization to mission-critical applications.
Editor's comments:- IO Turbine's analysis above doesn't differentiate between the different types of solutions available to systems architects when they are dealing with new installations rather than legacy systems.

While it makes several good points - many of these problems (and their solutions) are well known in the SSD market and are amenable to a range of different tactical solutions. There isn't a single one-size-fits-all solution - and there is unlikely to ever be one.

The SSD ASAPs industry - the market segment in which IO Turbine operates - is working on solutions to "problem #3" above: providing timely tiering.

This market will get very complex and users will have to make decisions about who they trust with their application acceleration roadmaps which could backfire badly if they make the wrong choice of SSD partner.

Helping to shortlist and predict future SSD company winners is the reason we started publishing the top SSD companies list 4 years ago. Nowadays it tracks millions of SSD searches each quarter and helps you see what other readers in this market think is important.

For more articles about key issues in the SSD market suggested by thought leaders in the SSD market take a look at the SSD Bookmarks.


"how big will the SSD market be when SSDs replace hard drives? - and some other questions answered too - like why did Fusion-io's sales crash?"
the enterprise SSD software event horizon
"...Our virtual desktops can match and even exceed the speed of high-performance physical desktops."
...Chris Featherstone, CTO, V3 Systems - in a July 2011 blog about the advantages of using PCIe SSDs from Fusion-io in VDI environments.
the Problem with Write IOPS in flash SSDs
the "play it again Sam" syndrome

Flash SSD "random write IOPS" are now similar to "read IOPS" in many of the fastest SSDs.

So why are they such a poor predictor of application performance?

And why are users still buying RAM SSDs - which cost an order of magnitude more than SLC (let alone MLC) - even when the IOPS specs look similar?
This article tells you why the specs got faster - but the applications didn't. And why competing SSDs with apparently identical benchmark results can perform completely differently. ...the article
Who makes the fastest SSDs?
Speed isn't everything, and it comes at a price.........
But if you do need the speediest SSD (chip, card, module or rackmount) then wading through the web sites of hundreds of SSD oems to shortlist products slows you down.

And the SSD search problem will get even worse as we head towards a market with over 1,000 SSD oems.
Relax - I've done the research. And this whizzy wish list is updated daily from storage news and direct contacts with oems. ...the article
StorageSearch talks to SSD leaders...

Fusion-io's CEO - re MLC in banks.

Over 80% of the SSDs that Fusion-io has sold in the last couple of years have been MLC rather than SLC - and David Flynn thinks they probably have a bigger base of enterprise MLC SSDs operating in customer sites for longer (up to 3 years) than any other company. ...the article
the editor enjoys another conversation with SSD movers and shakers
Violin's CEO - re Oracle acceleration market.

I asked how Violin's business was doing - and in particular whether they were seeing any drop-off in RAM SSDs. Don Basile said that, on the contrary, their RAM-based appliance business was growing - and overall their SSD business was growing faster than anything he had seen in previous companies. He thought the next few years would see a lot of winners in the enterprise SSD acceleration market. ...the article

Nimbus's CEO - re the design of its NAS SSDs.

The 1st question I asked was about the storage blades. I had already guessed (and he confirmed) the interface was SAS. But the surprise came when I asked whose SSDs was he using? Thomas Isakovich said Nimbus makes its own SSDs ... the article

Texas Memory Systems - re MLC and RAM SSDs.

Jamon Bowen said current consumer-grade MLC NAND flash has endurance on the order of 3,000 write cycles. ... And the company's burn-in process (done for QA as part of manufacturing) would use up 10% of the endurance life before the SSD even reached the customer!

In many bank applications RAM SSDs are actually cheaper than flash - because of the small size of the data. the article
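Taking the figures in that conversation at face value - 3,000 write cycles, with 10% consumed by burn-in - a rough lifetime estimate looks like this. The drive capacity, daily write volume, and write amplification below are illustrative assumptions, not TMS figures:

```python
# Rough flash endurance lifetime estimate from the cycle count quoted above.
# Capacity, daily writes, and write amplification are illustrative assumptions.

rated_cycles = 3_000                         # consumer-grade MLC, per the quote
cycles_after_burn_in = rated_cycles * 0.9    # burn-in consumes 10%, per the quote

capacity_gb = 200             # assumed usable capacity
daily_writes_gb = 500         # assumed host writes per day
write_amplification = 2.0     # assumed flash-management overhead

total_writable_gb = capacity_gb * cycles_after_burn_in
lifetime_days = total_writable_gb / (daily_writes_gb * write_amplification)
print(f"Estimated lifetime: {lifetime_days:.0f} days (~{lifetime_days / 365:.1f} years)")
```

Under these assumptions the drive wears out in about a year and a half - which is why endurance budgeting, not just IOPS, drives the flash vs. RAM SSD choice in heavy-write applications.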

STORAGEsearch is published by ACSL