A3CUBE, headquartered in San Jose, CA, was founded in 2012 as the result of more than 5 years of
advanced research and development.
The company has assembled a team of highly skilled and experienced engineers and
managers in hardware, firmware, software and system design, with proven track
records of success in supercomputing and HPC environments. Every member of the
A3CUBE team has deep domain expertise in complex hardware-firmware product
development, testing and commercialization. All A3CUBE design and development
activities are kept in-house so that the company maintains complete control of
every aspect of its products.
A3CUBE's news page - mentions on StorageSearch.com
A3CUBE and memory fabrics....
January 10, 2017 - When A3CUBE
started talking about big memory fabrics - there weren't too many other choices.
Now in 2017 the SSD and SCM news pages are
awash with announcements about big memory systems - and growing industry support
for NVMe over Fabric was one of the big market developments of 2016. We're
already seeing signs of clear fragmentation in the memory fabric market
(mostly via server based interface expansion preferences such as PCIe, IB and
GbE - but some of the memory applications are also being cannibalized by tiered
memory, new semiconductor memory solutions and DIMM wars). In that
context it was interesting to see a recent video (January 2017) from A3CUBE
which shows how their PCIe connected shared memory fabric can work with NVMe.
Who's who in SSD? - A3CUBE
by Zsolt Kerekes, editor - StorageSearch.com - May 1, 2014
A3CUBE designs and builds a fast, low latency
shared memory system (with sub-microsecond replication and broadcast features)
which connects via the PCIe interface - and which is intended to be used as the
basis of a PCIe fabric for large scale enterprise and server deployments.
The company says its architecture is scalable to thousands of connected servers.
The idea of connecting remote servers to a low latency
shared memory system isn't new in computer architecture. And it wasn't even a
new idea back in 1994 when
Texas Memory Systems
was selling its SAM-2000 8GB fast shared memory which had adapters to
various remote buses such as SBus, VMEbus and HIPPI.
And the idea of
using PCIe as a fabric - instead of GbE or InfiniBand - isn't new either.
For example PLX has been
talking about the idea for years - and has been sampling an early access
development system, at single cabinet scale, to
enable developers to test out the concepts.
What is new and
different from A3CUBE - however - is that it offers an architecture which
scales from the entry level which you might find in a typical enterprise - right
up to the 10,000 node level of web scale companies - and offers a true
alternative to traditional fabrics.
A3CUBE is one of the rare
companies which has entered the
Top SSD Companies list
within a single quarter of exiting stealth mode or launching their first
product. Few other companies belong to this exalted category - and it's
an early indicator that there is already significant industry interest in
this technology.
notes added later
You might ask -
how does a memory fabric architecture which inserts latency of around one
microsecond into its requests to external memory not interfere
significantly with legacy applications?
Part of the answer is that DRAM
system latency on server motherboards is different from the raw organic latency
inside a single memory chip.
The causes and boundaries of these "hidden
latencies" are discussed in my blog -
latency loving reasons for fading out DRAM in the virtual memory slider mix.
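To make that point concrete, here's a back-of-envelope blended access time calculation. The latency figures are illustrative assumptions of mine, not measured A3CUBE numbers:

```python
# Illustrative latency assumptions (mine, not vendor figures):
LOCAL_DRAM_NS = 100.0      # loaded DRAM access via a server motherboard
REMOTE_FABRIC_NS = 1000.0  # ~1 microsecond hop to fabric-attached memory

def average_access_ns(remote_fraction: float) -> float:
    """Blended access time when a fraction of memory accesses go remote."""
    return (1.0 - remote_fraction) * LOCAL_DRAM_NS + remote_fraction * REMOTE_FABRIC_NS

for frac in (0.01, 0.05, 0.10):
    print(f"{frac:.0%} remote -> {average_access_ns(frac):.0f} ns average")
```

So if only a small fraction of a legacy application's accesses land on the fabric, the blended latency stays within the same order of magnitude as local DRAM.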
A3CUBE unveils PCIe memory fabric for 10,000 node-class PCIe SSD architectures
Editor:- February 25, 2014 -
PCIe SSDs can now
access a true PCIe connected shared memory fabric designed by A3CUBE - which
exited stealth today with the launch of their remote shared broadcast memory network -
RONNIEE Express -
which provides 700ns (nanoseconds) raw latency (4 byte message) and an 8x
improvement in message throughput via standard PCIe.
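As a quick sanity check on that 700ns figure, a strictly serialized stream of small messages is bounded by the reciprocal of the round-trip latency. This is a simple calculation of mine, not a vendor benchmark:

```python
# Upper bound on a strictly serialized small-message rate for a given raw latency.
RAW_LATENCY_NS = 700  # RONNIEE Express raw latency for a 4-byte message

def serialized_message_rate(latency_ns: float) -> float:
    """Messages per second if each message waits for the previous one to complete."""
    return 1e9 / latency_ns

rate = serialized_message_rate(RAW_LATENCY_NS)
print(f"~{rate / 1e6:.2f} million messages/s per serialized stream")
# Pipelined or broadcast traffic can exceed this single-stream bound.
```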
Editor's comments:- I spoke to the company
recently - who say they intend to make this an affordable mainstream product.
The idea of using PCIe as a fabric to share data at low
latency and with fast throughput across a set of closely located servers
isn't a new one.
The world's leading PCIe chipmaker -
PLX - started educating
designers and systems architects about these possibilities a
few years ago - as a way to elegantly answer a new set of scalability
problems caused by the increasing adoption of PCIe SSDs. These questions include:
- how do you make this expensive resource available to more servers?
In the last year or so - we've seen most of the leading vendors in the enterprise
PCIe SSD market leverage some of the new features in PCIe chips - to
implement high availability SSDs with low latency.
- how do you enable a simple to implement failover mechanism - so that data
remains accessible in the event of either a server or SSD fault?
But although there
are many ways of doing this - the details are different for each vendor.
- until now - if you wanted to share data at PCIe-like latency across a bunch
of PCIe SSDs from different companies - located in different boxes - the
simplest way to do that was to bridge across Ethernet or InfiniBand. And even
though that has been technically possible with standard software packages - the
integration, education and support issues - compared to legacy SAN or NAS
techniques - would be extremely daunting.
That's where A3CUBE comes into
the picture. Their concept is to provide a box which enables any supported PCIe
device to connect to any other - at low latency and with high throughput -
in an architecture which scales to many thousands of nodes.
The heart of this is a shared broadcast memory window - of 128MB - which can be
viewed simultaneously by any of the attached ports.
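To make the shared-window idea concrete, here's a minimal sketch of how software might treat such a window. It simulates the window with an ordinary anonymous mmap rather than a real PCIe BAR mapping - the message layout and offsets are my illustrative assumptions, not A3CUBE's API:

```python
import mmap
import struct

WINDOW_SIZE = 128 * 1024 * 1024  # 128MB shared broadcast window (per the article)

# In a real deployment this would be a mapping of the fabric adapter's PCIe BAR
# (e.g. an mmap of a device file); an anonymous mapping stands in here.
window = mmap.mmap(-1, WINDOW_SIZE)

def post_message(offset: int, payload: bytes) -> None:
    """Write a length-prefixed message; on real hardware every attached
    port would observe the same bytes in its view of the window."""
    window.seek(offset)
    window.write(struct.pack("<I", len(payload)) + payload)

def read_message(offset: int) -> bytes:
    """Read back a length-prefixed message from the window."""
    window.seek(offset)
    (length,) = struct.unpack("<I", window.read(4))
    return window.read(length)

post_message(0, b"hello fabric")
print(read_message(0))
```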
If you've ever used shared remote memory in a supercomputer style of system design at
any time in the past 20 years or so - you'll know that the critical thing is how
the latency grows as you add more ports. So that was one of the questions I asked.
Here's what I was told - "The latency is related to the
dimension of the packet. For example: in a real application using a range of
64-256 byte messages, the 3D torus latency doubled after 1,000 nodes.
With larger packets, the number of nodes needed to double the latency becomes greater.
But the real point is that the latency of a simple p2p in a standard 10GbE is
reached after 29,000 nodes.
"A clearer example of the scalability of the system is this.
Imagine that an application experiences a max latency of 4 us with 64 nodes. Now
if we want to scale to 1,000 nodes, the max latency that the same application
experiences becomes 4.9 us - 0.9 us of extra latency for 936 more nodes."
Editor again:- Those are very impressive examples - and they demonstrate that the
"scalability" is inherent in the original product design.
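As a side note, those two data points (4 us at 64 nodes, 4.9 us at 1,000 nodes) are consistent with latency growing logarithmically in node count. The sketch below fits that model to A3CUBE's example figures - the log model itself is my assumption, not a formula the company published:

```python
import math

# Two data points quoted by A3CUBE: 4.0 us at 64 nodes, 4.9 us at 1,000 nodes.
N1, L1 = 64, 4.0
N2, L2 = 1000, 4.9

# Assumed model (mine, not A3CUBE's): latency = base + k * log2(nodes)
k = (L2 - L1) / (math.log2(N2) - math.log2(N1))  # extra us per doubling of nodes
base = L1 - k * math.log2(N1)

def predicted_latency_us(nodes: int) -> float:
    """Max application latency predicted by the fitted log model."""
    return base + k * math.log2(nodes)

print(f"~{k:.2f} us extra per doubling of node count")
print(f"predicted at 10,000 nodes: {predicted_latency_us(10_000):.2f} us")
```

Under that assumed model, scaling to a 10,000 node system would add well under a microsecond more - which is the kind of behavior you'd want from a fabric aimed at web scale deployments.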
A3CUBE didn't want to say publicly what the costs of the nodes and the box are at this
stage. But they answered the question a different way.
Their aim is to
price the architecture so that it works out cheaper to run than the legacy
(pre-PCIe SSD era) alternatives - and they're hoping that server OEMs and fast
SSD OEMs will find that A3CUBE's way of doing this PCIe fabric scalability stuff
is the ideal way to go.
There's a lot more we have to learn
- and a lot of testing to be done and software to be written - but for users
whose nightmare questions have been - how do I easily scale up to a 10,000
PCIe SSD resource - and when I've got it - how can I simplify changing
suppliers? - there's a new safety net being woven.
Here are some more SSD articles:
- inside the box - explores the many exciting new directions in rackmount SSDs.
- hostage to the fortunes of SSD - why are so many companies piling into the SSD market - when even the leading enterprise companies haven't demonstrated sustainable business models yet?
- Where are we now with SSD software? - (And how did we get into this mess?)
- 7 tips to survive and thrive in enterprise SSD - In SSDs - rules are made to be broken.
- meet Ken - and the enterprise SSD software event horizon - what will happen as SSD utilization rates in the enterprise get better? - Consequences for SSD vendors and HDD vendors too.