This post is a follow-up to last week’s one on Dell Technologies PowerStore, which was featured – among other products from the vendor’s portfolio – at Tech Field Day’s special two-day event. Here we will focus on VxRail, and I will briefly go through a few things that caught my interest, once again with a specific emphasis on its integration with the VMware stack. As usual, if you want to learn more, my recommendation is to head to the TFD event page, where you can find plenty of videos, including demos, in which the solution and its capabilities are presented in detail.


In my previous post on Tanzu, I explained how easy it is to start consuming Kubernetes workloads from within vSphere 7, thanks to the newly introduced “vSphere with Tanzu”.

As discussed, this comes with some limitations, but at the same time it enables customers to deploy and consume modern apps on a tried and tested platform without the need to invest in more advanced technologies like vSAN or NSX-T.

For those who are ready to take a bigger leap and want a richer, more complete experience, the way to go is “vSphere with Kubernetes”.
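
To give a concrete feel for what consuming Kubernetes workloads looks like in practice, here is a minimal sketch of mine using the official Kubernetes Python client; the namespace and context names are placeholders, not anything from VMware’s documentation, and it assumes you have already authenticated with the kubectl-vsphere plugin so a kubeconfig context exists:

```python
# A minimal sketch, not an official VMware example: deploying a workload into
# a vSphere with Tanzu namespace via the standard Kubernetes Python client.
# "demo-ns" is a hypothetical Supervisor Cluster namespace/context.
from kubernetes import client, config

config.load_kube_config(context="demo-ns")  # written by "kubectl vsphere login"

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="nginx-demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "nginx-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "nginx-demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.19")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="demo-ns", body=deployment)
```

Note that nothing in the sketch is Tanzu-specific, and that is the whole point: your existing Kubernetes tooling works against the Supervisor Cluster as it would against any other cluster.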


I came across Aviatrix for the first time a few months ago, while I was knee-deep in preparation for the AWS Associate exams and at the same time researching a cloud migration project. AWS networking was a major topic of the exams and also an important research area for my assignment at work. It was clear to me from the very beginning that Cloud Networking is inherently different from traditional networking. Of course, they share the same foundations, but designing and managing networks in any Public Cloud is a very different business from doing the same in your Data Center. In the Cloud there are no routers or switches you can log into, and there are no console cables or SFP connectors; instead, you have VPCs that you can literally spin up with a few lines of code, with all their bells and whistles (including security policies for the workloads they contain).
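
To put “a few lines of code” in perspective, here is a minimal boto3 sketch of my own (the region, CIDR blocks and names are illustrative placeholders) that creates a VPC, a subnet, and a security policy for the workloads inside it:

```python
# A hypothetical example, not from the original post: a VPC, a subnet and a
# security group rule, created in a handful of boto3 calls.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

# The VPC itself: one API call.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# A subnet inside it: one more call.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# And a security policy for the workloads it will contain.
sg_id = ec2.create_security_group(
    GroupName="demo-sg",
    Description="Allow HTTPS from inside the VPC",
    VpcId=vpc_id,
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
    }],
)
```

Minutes, not weeks: which is exactly the expectation gap I describe next.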

All of this implies a few considerations. First and foremost, the expectations of Cloud Engineers are very different from those of Network Engineers: Cloud Engineers can set up VPCs in minutes, and they can easily be frustrated by their on-prem Network counterparts lagging weeks behind to provide VPN connectivity and BGP route distribution to the Data Center. Then there is the skills gap to be filled: Cloud Engineering Teams are usually small and staffed by all-round technologists rather than specialists; very often there is no Network Guru in a Cloud Team capable of reciting RFCs from memory, so there is a need to keep things simple, yet things must work “as they should”. Finally, in Public Clouds it is very easy to lose control and fall victim to VPC sprawl; managing Cloud Networking at scale is probably the biggest challenge of all.


I wrote a couple of posts (here and here) about Datrium around Tech Field Day 14 back in May; at that time I was intrigued by their fresh and unusual approach to resolving the challenges associated with both the traditional “non-converged” and the “hyper-converged” infrastructure philosophies, but at the same time I expressed my concerns about the maturity of their solution. I was eager to see their promising technology mature, and today I am very pleased to acknowledge the efforts that Datrium has been making since that day, away from the glamour of the spotlight. Datrium has just made a big push: only three months after their TFD showcase, they have introduced not one but two major technology updates.


A few years after their introduction, HyperConverged systems are now a reality, slowly but steadily eroding market share from traditional server/array platforms. They promised scalability, ease of deployment, and operational and architectural simplification. While they mostly delivered on those promises, HCI systems introduced some new limitations and pain points. Probably the most relevant is a consequence of the defining trait of HCI architecture, where multiple identical nodes – each one providing compute, storage and data services – are pooled together. This creates an interdependency between nodes, as VM data must be available on different nodes at the same time to guarantee resiliency in case of failure. Consequently, HCI nodes are not stateless: inter-node, east-west communication is required to guarantee that data resiliency policies are applied. Unfortunately, this statefulness has other consequences too: when a node is brought down, either by a fault or for a planned maintenance task, so is the storage that comes with it, and data must be rebuilt or relocated to ensure continued operations.
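
To make the statefulness point concrete, here is a toy Python sketch of mine (an illustration of the general idea, not any vendor’s actual data placement algorithm): every block lives on two nodes, so losing one node forces a rebuild of every replica it held, over the east-west network.

```python
# A toy illustration (not any vendor's real algorithm) of HCI statefulness:
# each data block is replicated on two distinct nodes, so a node failure
# means re-replicating every block that node held.
import random

NODES = {"node-a", "node-b", "node-c", "node-d"}
REPLICAS = 2

# block_id -> set of nodes currently holding a replica
placement = {f"block-{i}": set(random.sample(sorted(NODES), REPLICAS))
             for i in range(8)}

def fail_node(failed: str) -> int:
    """Rebuild every block that lost a replica on the failed node."""
    survivors = NODES - {failed}
    rebuilt = 0
    for holders in placement.values():
        if failed in holders:
            holders.discard(failed)
            # copy the block to a surviving node that has no replica yet
            holders.add(random.choice(sorted(survivors - holders)))
            rebuilt += 1
    return rebuilt

print(f"blocks rebuilt after losing node-a: {fail_node('node-a')}")
```

Scale the toy up from eight blocks to terabytes per node and the cost of that rebuild loop is exactly the pain point described above.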


ClearSky is a Boston-based startup founded in 2014 by industry veterans Lazarus Vekiarides and Ellen Rubin; it comes with a unique proposition which, if successful, might revolutionize the way primary storage is consumed. I introduced ClearSky in my previous TFD14 preview article, where I described their solution; the objective is to drastically reduce the Data Center footprint of traditional primary storage by shifting it to the Cloud, while at the same time simplifying DR operations and ensuring accessibility of data from any location. This outcome seemed impossible to achieve because of the strict latency requirements that primary storage inherently carries, but ClearSky has found an elegant and effective solution to this conundrum. There is one caveat, however, and it will become evident in the following paragraph.


This article is a follow-up to my TFD14 Turbonomic preview; at that time I knew very little about Turbonomic, and that post was a collection of thoughts and impressions gathered by looking at the product from a distance. I am happy to say that after the TFD presentation my understanding of the solution is clearer and my initial good impressions have been confirmed.

Turbonomic is – in their own words – an “Autonomic Platform”; the play on words is the merging of Automation and Economics: Turbonomic uses a “Supply Chain” metaphor in which every element of the infrastructure “buys” resources from the components below it and “sells” them upstream, while leveraging automation to ensure that apps are always performing in their “Desired State”.

The objective is to “assure application performance” regardless of where the app is running (in the Private, Public or Hybrid Cloud). Coming from an operations background, I know well how difficult it is to keep an infrastructure running within ideal parameters: any single intervention – no matter how insignificant it may appear – leads to an imbalance in the infrastructure, and this in turn leads to a deviation from those optimal parameters. Application performance becomes less predictable, and corrective actions must be taken to return to the “Desired State”. This is the so-called “Break-Fix” loop, which requires continuous human intervention.
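
To illustrate the two ideas together, here is a toy control loop of my own (the thresholds and actions are placeholders, not Turbonomic’s actual decision engine): an entity “buys” more resources when it runs hot, “sells” them back when it runs cold, and does nothing while it stays in its desired state.

```python
# A toy sketch of the buy/sell, desired-state idea; thresholds and actions
# are my own placeholders, not Turbonomic's decision engine.
DESIRED_BAND = (0.30, 0.70)  # keep utilization inside this band

def next_action(utilization: float) -> str:
    low, high = DESIRED_BAND
    if utilization > high:
        return "buy: acquire more capacity (scale out, or move to a less loaded host)"
    if utilization < low:
        return "sell: release excess capacity (scale in, consolidate)"
    return "hold: the workload is in its desired state"

for cpu in (0.85, 0.15, 0.55):
    print(f"cpu={cpu:.0%} -> {next_action(cpu)}")
```

Run continuously and fed by real-time metrics, a loop like this replaces the Break-Fix cycle with automated decisions, which is, at least conceptually, what Turbonomic’s market abstraction does.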


Next week I will fly to Boston to attend my first full Tech Field Day conference as a delegate.

Last year I was lucky enough to be invited to the smaller-scale Tech Field Day Extra event at VMworld Europe, and I really enjoyed the experience, so you can imagine my excitement when I received the invitation from Stephen and Tom to join them at TFD14.

Not yet time to pack a bag, but definitely time to start doing some research on the three vendors that will present at TFD14: ClearSky Data, Turbonomic and Datrium.

Let’s start this TFD14 Preview Series with ClearSky Data: from what I understand, ClearSky has a very unusual approach to Cloud Storage, which is normally intended for secondary/object storage use cases. ClearSky has developed an interesting architecture that allows for storing all your data in the cloud without sacrificing the performance (and use cases) typical of primary storage, going beyond the obvious caching technologies that have been around for some time.