

editor's notes from storage history:- this archived article, from 1999, is one of the first vendor-written articles about SSDs which we published on STORAGEsearch.

the Advantages of a 3-D DRAM Architecture for Optimum Cost/Performance

White Paper by Gene Bowles and Duke Lambert

Solid Data
Performance of relational database management systems has improved steadily over the last 10 years. Today's open-systems database applications are increasingly sophisticated and capable of handling information-processing tasks that were formerly reserved for mainframe computer platforms. This has generated tremendous pressure on the hardware to deliver ever higher speeds and faster response times.

Running under high loads, modern relational databases are capable of utilizing 12 or more CPUs while managing thousands of I/O requests per second. The raw speed of the CPUs inside the host platforms has kept pace: CPUs have followed Moore's law and doubled in performance every 18 months, representing a ten-year performance improvement of over 100 times. During the same time frame, however, magnetic disk storage has increased in speed by a factor of only three. This has created an imbalance in system design that can result in both excessive costs and disappointing performance.

The number one challenge in disk storage performance lies in the inherent mechanical delays caused by the access times and latencies of disk drives. Even with complex striping and partitioning techniques employed to improve throughput, if all I/O requests were serviced by physical magnetic disks alone, service times would render the database completely useless. Accordingly, DRAM, in the form of main memory and disk cache, is frequently utilized to have the data of interest available quickly to the database engine. Running a relational database on a system that had no DRAM and used only magnetic disk to store data would be completely ineffective. Thus, the key to optimized data performance is creating a systems architecture that can assure the highest chances that the data of interest is being read from or written to DRAM — and not magnetic disk.
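The trade-off described above can be sketched as a simple weighted average of DRAM and disk access times. The timing figures below are assumptions chosen for illustration (DRAM service on the order of tens of microseconds, roughly 10 ms for a physical disk access), not figures from the paper:

```python
def effective_service_time_ms(hit_ratio, t_dram_ms=0.05, t_disk_ms=10.0):
    """Average I/O service time when a fraction of requests hit DRAM.

    hit_ratio -- fraction of I/O requests satisfied from DRAM
    t_dram_ms, t_disk_ms -- assumed service times (illustrative only)
    """
    return hit_ratio * t_dram_ms + (1.0 - hit_ratio) * t_disk_ms

for hit in (0.0, 0.90, 0.99):
    print(f"hit ratio {hit:.2f}: {effective_service_time_ms(hit):.3f} ms")
```

Even a 90% hit ratio leaves the average dominated by the 10% of requests that fall through to physical disk, which is why raising the effective hit ratio is the whole game.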

This paper will present and discuss the advantage of creating a third dimension in the DRAM architecture of systems that support relational databases (RDB) — in order to increase overall performance and reduce system cost. The paper will define both 2-Dimensional (2-D) and 3-Dimensional (3-D) DRAM architectures, with full discussions on the merits of each environment. Common misconceptions and the costs of implementing both environments will be broken down and explored.

The objective of this paper is to outline a 3-D strategy for system DRAM deployment using SSD that provides an optimized system architecture — that is, one that achieves the highest possible percentage of I/O requests fulfilled directly from DRAM without magnetic disk interaction. This strategy provides new alternatives for configuring a range of systems with superior cost/performance characteristics relative to 2-D architectures.

Characteristics of 2-Dimensional Architectures

The first dimension, and the most common application of DRAM in most hardware architectures, is inside the host system as main memory. Main memory offers very fast access and very high bandwidth for all data located there. However, this memory is completely volatile: if the host system loses power or must be rebooted for any reason, all data in main memory is lost and must be recreated. Because of this volatility, main memory is used only for storing data being read and for write buffers waiting to be flushed to disk. Primary uses are buffers, global cache, named cache and O/S management. Often, a high-performance system configuration will have 1-4 GB of DRAM configured as main memory.

The second dimension of DRAM deployment most often consists of controller-based disk cache, acting as an I/O buffer to the disks and/or RAID arrays. DRAM mounted on the disk controller is moderately fast (slower than main memory because of the directory look-ups needed to locate cache contents) and in most cases non-volatile, having battery and magnetic disk backup to protect data in the event of power loss. Controller-based DRAM is used for write buffering, read-ahead cache, LRU/MRU algorithms, and RAID parity calculations. Performance depends on the ability of the algorithms to keep the data of interest in the cache area so it does not have to be retrieved from magnetic disk drives. Typical disk controller cache sizes run from a single MB on a standalone disk drive, up to 256 MB on open-systems RAID arrays, and as much as 4 GB on mainframe storage systems.

This system design is referred to as having two dimensions. 2-D systems are simple to design, easy to understand and straightforward to implement. However, in actual usage they often fall far short of achieving the required performance levels to support the RDB server. The reason is that they depend upon the data caching algorithms for performance. These systems are "statistically" fast; that is, they depend on the probability that the data is in the cache when needed by the CPU. Otherwise, the system must wait for the disk drives to retrieve the data, subjecting it to the inherent mechanical delays of access times, latencies, and data throughput rates.

The diagram in Figure 1 illustrates this design and shows that while there is a small area of functional overlap, each area of DRAM utilized has specific tasks for which it is best suited.

In a typical 2-Dimensional DRAM architecture, the main memory and disk cache perform largely separate and complementary roles in enhancing overall data I/O performance and system response time.

Figure 1

2-D Performance Enhancement with Data Striping

Within 2-D systems there are hidden costs which can be quite high. These costs manifest themselves in several ways. The algorithms that manage the two dynamic cache areas in 2-D systems are often not effective enough to provide sufficiently high cache hit ratios. Based upon testing performed on hundreds of different host system configurations, we know that data can be delivered from either main memory or disk cache DRAM fast enough to saturate the server engine, and that systems where the desired data is always in cache do not experience I/O bottlenecks. When the server is waiting on I/O, therefore, it is because physical disk access is required: the latency and seek times of the disk drives themselves are the root cause of the trouble, and persistent I/O bottlenecks indicate an insufficient cache hit ratio.

When cache hit ratios are low, the next step most system administrators take is to begin spreading the data over many disk spindles in order to minimize contention and increase aggregate throughput. Sophisticated techniques involving various levels of striping and mirroring are employed to minimize the impact of the cache miss. These techniques come with significant penalties. Complicated stripe sets can require days or even weeks of careful data analysis and layout. Once the stripe is configured, it is not easy to determine where a logical data segment is physically located. This further complicates follow-on tuning, since locating the particular data segment that is creating a performance penalty may be difficult or impossible. As system data storage requirements increase, the data analyst will be forced to completely re-stripe the data sets, a challenging and time-consuming process.

Diminishing Returns from Additional DRAM Investments

Savvy system administrators know that the penalty for a cache miss is high, so systems should be configured with very large amounts of both system memory and disk cache. However, Pareto's law applies: the initial 20% of the memory will handle 80% of the I/O activity. Beyond that point, each added increment of memory yields a smaller performance advantage. A system configured with 1 GB of system memory may receive a 20% improvement by adding another 1 GB of memory; adding a third GB will add only 15% more, and adding a fourth GB may yield only a 10% improvement. In each case the cost of the added 1 GB of memory is the same, so the net value received per dollar invested declines dramatically as the total amount of memory increases. The effect is that a high-performance 2-D architecture is very expensive: the net performance gain for each dollar invested keeps declining, yet the system administrator must configure the system heavily to achieve acceptable performance.
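As a rough sketch of the arithmetic above: the 20%, 15% and 10% step gains come from the text, while the per-GB price is a hypothetical constant chosen only to show the value per dollar declining:

```python
# Marginal gains from the 2nd, 3rd and 4th GB, as described in the text.
gains = [0.20, 0.15, 0.10]
cost_per_gb = 1000  # hypothetical dollars per GB of server DRAM (1999 era)

perf = 1.0  # relative performance with the initial 1 GB
for extra_gb, gain in enumerate(gains, start=1):
    perf *= 1.0 + gain
    print(f"after adding GB #{extra_gb + 1}: relative performance "
          f"{perf:.3f}, marginal gain per $1k spent: {gain:.0%}")
```

Each step costs the same but buys less: the same $1k that bought 20% more throughput at 2 GB buys only 10% at 4 GB.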

The curve of diminishing returns shows that an optimum system design is one where no DRAM dimension is configured with capacity past the point of maximum value. In a system constrained to only 2-Dimensions it is often necessary to be far out on the performance curve in order to achieve the level of system performance required by the users. The result is that the cost of configuring the system is driven steadily upward, while the system users experience relatively small performance gains.

In a 2-Dimensional DRAM architecture, the investments made in main memory and disk cache ultimately deliver diminishing returns for incremental additional investments. This becomes a limiting factor in overall system performance capability.

Figure 2

3-Dimensional Architecture

3-Dimensional architectures add a third area for DRAM. Using intelligent Solid-State Disk (SSD) systems for the third dimension creates a third area of DRAM that has unique strengths. The SSD is characterized by very fast performance with complete non-volatility and the ability to permanently locate data on the DRAM. All access of the DRAM is performed via direct addressing logic allowing data to be accessed consistently in 18 microseconds.

Solid-State Disk systems provide an additional dimension for improving performance via DRAM. SSD performs functions that are separate from and complementary to main memory and disk cache.

Figure 3

3-D architecture takes advantage of the I/O characteristics of the relational database. With all relational databases, a relatively small amount of data receives the vast majority of I/O requests. Many studies have been conducted on this subject and we have monitored the I/O performance characteristics of dozens of systems ourselves. The results show that 3-5% of the data typically services over half of the I/O requests. This is represented in the two graphs below. A small percentage of the data is receiving and processing a majority of the data requests.
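The skew described above can be illustrated with a Zipf-like access distribution. The block count and the exponent 0.8 below are assumptions chosen so that the result lands near the 3-5% / roughly-half figures quoted in the text:

```python
# Zipf-like access weights: block at popularity rank r gets weight r**-0.8.
# The exponent is an illustrative assumption, not a measured value.
n_blocks = 10_000
weights = [rank ** -0.8 for rank in range(1, n_blocks + 1)]
total = sum(weights)

hot = int(n_blocks * 0.04)                 # the hottest 4% of blocks
hot_share = sum(weights[:hot]) / total
print(f"hottest 4% of blocks receive {hot_share:.0%} of the I/O requests")
```

With these assumed parameters the hottest few percent of blocks absorb close to half of all requests, which is the concentration the 3-D architecture exploits.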

In database applications, it is common for the "hot files" to comprise a small minority of the storage capacity (3-5%), while constituting the majority of the I/O activity.

Figure 4

The third dimension is used to systematically place the "hot" data spaces in DRAM. By permanently caching this data, it is possible to move over half of the total physical I/O load to the SSD DRAM area, leaving the remaining 40-50% of physical I/O activity as a light load for the disk cache to handle. As a result, the majority of the physical I/O activity is serviced in under 0.5 ms, and most of the remainder is serviced at disk cache speeds (approximately 4 ms). Figure 5 below shows the effect that changing disk cache hit ratios have on disk service time and compares them to SSD service time.
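A rough sketch of the blended service-time arithmetic: the 0.5 ms SSD and 4 ms disk cache figures come from the text, while the 10 ms physical-disk time and the example hit ratios are illustrative assumptions:

```python
def blended_ms(ssd_share, cache_hit, t_ssd=0.5, t_cache=4.0, t_disk=10.0):
    """Average I/O service time across the three dimensions.

    ssd_share -- fraction of I/O pinned to SSD (serviced at t_ssd)
    cache_hit -- of the remainder, fraction that hits the disk cache;
                 the rest goes to physical disk at t_disk (assumed 10 ms)
    """
    rest = 1.0 - ssd_share
    return (ssd_share * t_ssd
            + rest * (cache_hit * t_cache + (1.0 - cache_hit) * t_disk))

print(f"2-D, 70% cache hits:        {blended_ms(0.0, 0.7):.2f} ms")
print(f"3-D, 55% on SSD, 70% hits:  {blended_ms(0.55, 0.7):.2f} ms")
```

Under these assumptions, moving 55% of the physical I/O onto SSD roughly halves the average service time without changing the disk cache at all.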

SSD is "predictably" fast, in that data dedicated to SSD files is always serviced in fractions of a millisecond. On average, data serviced by disk cache is much slower, depending statistically on the average "hit rate" of the cache algorithm.

Figure 5

A properly designed 3-D system will offer much higher performance. Reducing average service times three-fold yields very positive results in RDB performance. The "hot" data spaces tend to be the ones the online users or off-hours batch jobs are depending upon for their next I/O request, and eliminating the wait on this data is one of the desired effects of a 3-D architecture.

DRAM Scalability

2-D architecture is scalable until the point where adding DRAM stops improving the cache hit ratio, or there are no additional card slots for DRAM expansion. Following the curve of diminishing returns outlined above, in most cases the cache hit ratio stops improving long before the system has been fully configured with memory. As the load increases, the 2-D system becomes increasingly expensive and difficult to maintain for two reasons:

1. Main memory and disk cache DRAM configurations grow beyond the optimum point of cost effectiveness. More dollars are spent initially buying hardware, and less and less return is realized in performance as the configuration moves out on the value curve.

2. As the DRAM cache effectiveness falls off, the system administrator is forced to create complex data stripes inside the disk array in an attempt to ward off the performance impact of physical disk access. The cost of this striping is significant, both in time delays prior to deployment and in ongoing support. Optimal disk striping is typically specific to one particular type of operation. For instance, in a data warehouse it is not possible to optimize the stripe for both data loading and random access. Compromises must occur, which increase cost by reducing effectiveness.

Thus, while there may be theoretical scalability remaining in the 2-D architecture, in practice the limit is reached prior to the limit of the physical capacity. For the host this typically means 2-4 GB of DRAM and for the disk cache from 256 MB to 2 GB. For the majority of RDBs, expanding the DRAM in 2-D architectures beyond these capacities will increase cost but will not significantly increase performance.

When SSD is first added to the architecture, the performance return on additional DRAM investment is large. Moreover, the addition of SSD provides a new dimension of DRAM for increasing system performance and/or lowering cost per user.

Figure 6

3-D systems offer far greater scalability in I/O performance. In 3-D, it is possible to configure as much SSD DRAM as desired and to physically locate specific data on that DRAM. I/O performance then becomes a matter of placing the frequently used data on the SSD portion of the 3-D architecture and leaving the balance of the data on the magnetic disks, where it will be cached by the disk cache DRAM.
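The placement decision itself can be sketched as a greedy choice of the busiest files per gigabyte until the SSD budget is filled. Every file name and figure below is invented purely for illustration:

```python
# Hypothetical workload: file name -> (size in GB, I/O requests per second).
files = {
    "index01": (2, 900), "tempdb": (1, 700), "orders": (40, 300),
    "history": (120, 40), "archive": (300, 5),
}
ssd_budget_gb = 4  # assumed SSD capacity available for hot files

placed, used = [], 0
# Consider files in descending order of I/O density (requests per GB),
# pinning each one that still fits within the SSD budget.
for name, (size, iops) in sorted(files.items(),
                                 key=lambda kv: kv[1][1] / kv[1][0],
                                 reverse=True):
    if used + size <= ssd_budget_gb:
        placed.append(name)
        used += size

print("pinned to SSD:", placed, f"({used} GB used)")
```

Density (I/O per GB) rather than raw I/O rate is the natural ranking here, since the scarce resource is SSD capacity; the large, rarely-touched files stay on magnetic disk behind the controller cache.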

Using 3-D in this manner enables the system designer to achieve maximum cost efficiency by allocating enough DRAM in each of the three dimensions to reach, but not exceed, the point of diminishing returns. It is possible in this manner to design a 3-D system that has equivalent cost but offers significantly greater performance than a 2-D system.

With a 3-D architecture, the data files with the most I/O activity are dedicated to the SSD system, which does not have the mechanical access and latency delays of disk drives. The result is a level of system performance unachievable with a 2-D implementation of the same hardware platform.

Figure 7

It is also possible to design for cost savings using 3-D architecture. As the effectiveness of the 2-D system diminishes, the cost per unit of performance increases. Since the third dimension adds a segment of DRAM that was not configured in the 2-D system, there are very large performance gains for a small incremental cost.

A balanced 3-D system provides the ability to configure a system for additional performance at the same cost or for equivalent performance at a lower cost.

The 3-D architecture, using SSD for hot files, can deliver greater performance for the same cost or the same performance at lower cost – compared to systems without SSD.

Figure 8

In summary, 2-D architectures offer simple design and ease of use for systems that do not face demanding loads. When 2-D systems are stressed to support RDBs under heavy workloads, they lose their cost effectiveness and simplicity.

3-Dimensional architectures provide a much simpler design that scales much further. Support costs are reduced since complex striping is greatly reduced or eliminated completely. Administration is easier and more cost effective because extremely complex data layouts are not required. Assuring that the data of primary concern is always located in cache provides optimal performance and prevents over-configuring the system in other areas. Data warehouses and other VLDB systems no longer need to worry about exceeding the effectiveness of their disk cache. Placing the "hot" areas (indexes, data marts, temporary spaces) on SSD in the third dimension assures that the RDB will operate efficiently, because the critical data will always be accessed at DRAM speeds (<0.5 ms).

As shown in Figure 9 below, 3-D architecture allows a new range of options that increase performance, lower cost and expand scalability. These features are particularly beneficial in corporate environments that have standardized on a single server platform for the majority of their RDB applications and requirements.

Figure 9

The third dimension of DRAM, comprised of SSD storage, provides new alternatives for configuring a range of systems with superior cost/performance characteristics.


DRAM has traditionally been deployed in open-systems hardware platforms in two dimensions – main memory and disk cache. Solid-State Disk systems offer a third dimension for utilizing DRAM to increase the performance and lower the cost of relational database applications.

SSD provides functionality that is different from, and complementary to, main memory and disk cache. Specifically, by dedicating strategic "hot files" to SSD, the system designer can ensure that the highest-activity data in the application will always be read from or written to DRAM, not magnetic disk.

As a result, systems incorporating 3-D DRAM architecture are frequently faster and more cost-effective than 2-D systems. By incorporating SSD architecturally, system integrators will be able to configure a new range of alternatives that deliver superior cost/performance characteristics relative to traditional approaches.

Solid Data Systems - address and links

Solid Data Systems (formerly DES)
2945 Oakmead Village Court
Santa Clara, CA 95051
tel:- +1 408 727 5497
fax:- +1 408 727 5496


STORAGEsearch is published by ACSL