In this third and last post of my Tech Field Day 14 preview series, I will focus on Datrium. Truth be told, a fourth vendor, NetApp, was added to the list of TFD14 presenters at the last minute; interestingly enough, their presentation will be DevOps-oriented, and I will report my impressions in a future post once I am back from Boston.

Back to Datrium, then. Like me, Datrium will be making its first appearance at Tech Field Day, so there was no “TFD prior art” in the form of old presentation recordings I could use to get acquainted with their solutions; my research was therefore limited to browsing their company website. I hope I got everything right, but I can tell you that what I found there was enough to tickle my curiosity. They seem to have an interesting approach to the converged data center problem; their own buzzword for it is “Open Convergence”. What I see is a mix of ideas heard before, but even if the ingredients are familiar, the recipe is different and the serving looks yummy! Enough with the gastronomic analogy; let’s talk tech.

Datrium’s flagship product is called DVX, and they label it a “System for Rackscale Convergence”. If I had to quickly (and unfairly) put it into a box and label it, I would say this technology sits somewhere between commodity HCI (vSAN, Nutanix, SimpliVity, etc.) and IO acceleration / SAN decoupling solutions (PernixData). The description is indeed unfair, because Datrium does things quite differently, but it helps with positioning.

DVX is made of three pieces: one software and two hardware. The first piece of the jigsaw is the DVX Compute Node: a Datrium-branded x86 server that comes with VMware vSphere pre-installed and a bunch of flash drives providing up to 13.4 TB of local raw, read-only cache capacity. This is where all your read IO magic happens. Then there is the DVX Data Node, where your data actually resides: a fully redundant disk array containing 12x 4 TB low-performance, high-capacity disks. Datrium claims that, by means of effective data reduction algorithms, there can be up to 6x disk utilization savings and an effective capacity ranging between 60 and 180 TB. The Data Node is connected to the compute nodes via 2x 10 Gb Ethernet links and hosts your VMs’ data. Finally, there is the software part, in the form of the Hyperdriver: this component intercepts IO and uses a minimum of 2 host CPU cores, and up to 20% of total CPU capacity if the host has more than 10 cores. If that is not enough, “Insane Mode” can be enabled to reserve 40% of the host’s compute power to service IO through the local cache.
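To make those sizing figures concrete, here is a back-of-envelope sketch (my own illustration, not Datrium code) of the CPU reservation rules and the capacity math quoted above. The function names are mine, and the ~30 TB usable figure is my assumption: it is consistent with the stated 60–180 TB effective range if the reduction factor runs from 2x to the claimed 6x.

```python
# Illustrative sketch of the figures quoted in this post; function names
# and the usable-capacity assumption are mine, not Datrium's.

def hyperdriver_cores(host_cores, insane_mode=False):
    """Cores reserved for IO: minimum of 2, 20% of total if the host has
    more than 10 cores, or 40% when "Insane Mode" is enabled."""
    if insane_mode:
        return max(2, round(host_cores * 0.40))
    if host_cores > 10:
        return max(2, round(host_cores * 0.20))
    return 2

def effective_capacity_tb(usable_tb=30, reduction=2.0):
    """Usable Data Node capacity times the data-reduction factor.
    Assuming roughly 30 TB usable after redundancy, 2x-6x reduction
    lands on the 60-180 TB range quoted by Datrium."""
    return usable_tb * reduction

print(hyperdriver_cores(8))                     # small host -> 2 cores
print(hyperdriver_cores(24))                    # 20% of 24 -> 5 cores
print(hyperdriver_cores(24, insane_mode=True))  # 40% of 24 -> 10 cores
print(effective_capacity_tb(reduction=2.0))     # low end: 60 TB
print(effective_capacity_tb(reduction=6.0))     # high end: 180 TB
```

Treat this purely as a way to sanity-check the marketing numbers; the real reservation logic inside the Hyperdriver is not documented on the website.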

What happens here is that every host contributes to the cluster’s IOPS capacity rather than competing for storage array resources. Adding nodes scales out both compute and storage performance, but not capacity, as that is contained in the Data Node (of course, you can start with just a few disks and add more when further capacity is needed). The node limit is imposed by vSphere and is therefore 32; I am not sure whether more than one Data Node can be presented to a vSphere/DVX cluster, and that’s a good question to ask in Boston, as this might represent a scalability limit.

The interesting bit here is that the DVX compute nodes are not mandatory: any server can be used, as long as it is on the VMware HCL and fulfils the DVX minimum requirements. This allows for mixed, brownfield deployments, or for selecting more powerful hardware than Datrium’s if needed. Nice touch!

So, going back to my initial positioning: DVX can be considered an HCI solution because it removes the need for a SAN (the Data Node is not an array in the classic sense, with no need for a dedicated storage network or fabric), but it differs from a pure HCI solution because data is not spread/duplicated/protected across multiple hosts; it always lives in one location (the Data Node), so there is no need to rebuild data across hosts when one becomes unavailable. At the same time, DVX looks similar to acceleration/caching technologies like PernixData, but it doesn’t accelerate IOs or decouple them from a SAN, simply because it doesn’t require one!

DVX also comes with the option to enable encryption; they call their implementation “Blanket Encryption”, and I suppose this means end-to-end protection of your data. Finally, there is a companion product called Data Cloud Foundation, which allows for granular management of data stored on DVX Data Nodes, enabling object-level snapshots and replication. More from Boston, I guess.

There is enough here to make me want to learn more, so follow me (and my fellow delegates) on May 11th and 12th, when all the vendor presentations will be streamed live on the Tech Field Day website. Also, send us your questions on Twitter using the #TFD14 hashtag; we’ll do our best to get them answered.

